2305.07479
VC-PINN: Variable Coefficient Physical Information Neural Network For Forward And Inverse PDE Problems with Variable Coefficient
Zhengwu Miao, Yong Chen
2023-05-12T13:47:40Z
http://arxiv.org/abs/2305.07479v2
VC-PINN: variable coefficient physical information neural network for forward and inverse PDE problems with variable coefficient

###### Abstract.

The paper proposes a deep learning method specifically dealing with the forward and inverse problems of variable coefficient partial differential equations - the Variable Coefficient Physical Information Neural Network (VC-PINN). The shortcut connections (ResNet structure) introduced into the network alleviate the "vanishing gradient" problem and unify linear and nonlinear coefficients. The developed method is applied to four equations: the variable coefficient Sine-Gordon equation (vSG), the generalized variable coefficient Kadomtsev-Petviashvili equation (gvKP), the variable coefficient Korteweg-de Vries equation (vKdV), and the variable coefficient Sawada-Kotera equation (vSK). Numerical results show that VC-PINN succeeds in the case of high dimensionality, various variable coefficients (polynomials, trigonometric functions, fractions, oscillation attenuation coefficients), and the coexistence of multiple variable coefficients. We also conduct an in-depth analysis of VC-PINN combining theory and numerical experiments, covering four aspects: the necessity of ResNet; the relationship between the convexity of variable coefficients and learning; anti-noise analysis; and the unity of forward and inverse problems/the relationship with the standard PINN.

## 1. Introduction

Deep learning has achieved success in applications including image recognition [38], natural language processing [39], and more. The research on using neural networks to solve partial differential equations (PDEs) can be traced back to the work of [17] in 1994. However, limited by the computing power available at that time, those authors stopped after experimenting with shallow networks. It was not until 2019 that the emergence of a deep learning framework based on physical constraints - the Physical Information Neural Network (PINN) - provided a new idea for the numerical solution of PDEs [62]. With the theoretical support of the universal approximation theorem [14], PINN implements a mesh-free numerical algorithm by embedding the PDEs into the loss of the neural network via automatic differentiation (AD) [2]. PINN can easily incorporate various mechanism-based constraints and data measurements into the loss function. Its strong scalability and generality make it more flexible than the finite difference method (FDM) and the finite element method (FEM). These capabilities, together with its advantages in high-dimensional and inverse problems, have quickly made PINN a powerful tool for solving the forward and inverse problems of PDEs with deep neural networks (DNNs). Although the success achieved so far is encouraging, PINN has to face the challenges brought by more complex problems, the need for higher accuracy and efficiency, and the requirement of stronger robustness. To improve the performance of the network, researchers have extended the standard PINN method in various directions. For example, locally adaptive activation functions were proposed to accelerate network training [33], and various adaptive weight methods [70, 71, 74] and point-weighting methods [42, 51] were developed to balance the loss terms. The form of the loss function has been optimized by Meta-learning PINNs [58] and by adding gradient information of the PDE residual (gPINNs) [83].
The adaptive sampling methods (RAR, RAD) [73] and the combination of PINN with numerical discretization schemes (CAN-PINN) [12] can effectively improve accuracy. For problems on large space-time domains, cPINN [34], XPINNs [32], Parallel PINNs [66], etc., based on the domain decomposition strategy, have been developed. DeepM&Mnet [50] and Multi-Head PINNs (MH-PINNs) [90] were proposed to deal with multi-scale/multi-physics problems and multi-task collaboration problems, respectively. Two mainstream operator networks, DeepONet [47] and FNO [44], learn mappings from infinite-dimensional spaces to infinite-dimensional spaces, which avoids the problem of repeated training. In addition, there are various variants of PINNs including B-PINNs [78], fPINN [56], hPINN [49], hp-VPINNs [36], gwPINNs [75], etc., which have injected new vitality into this field. PINN and its extensions have been widely applied in many scientific and engineering fields, including fluid mechanics [63], nano-optics [9], biological systems [80], and thermodynamics [6], and can deal with equations of various forms (standard PDEs, integro-differential equations [48], stochastic differential equations [86], fractional differential equations [56], etc.). Of course, other deep learning frameworks for solving the forward and inverse problems of PDEs are also worthy of attention, such as the Deep Ritz Method (DRM) [82], the Deep Galerkin Method (DGM) [67], Weak Adversarial Networks (WAN) [85], Sparse Identification of Nonlinear Dynamics (SINDy) [5], PINN with sparse regression [10], and Local Extreme Learning Machines (loc-ELM) [18]. In short, the PINN method is revolutionary and a milestone that has greatly promoted the development of scientific computing and related fields. The coefficients of a constant coefficient differential equation are fixed constants; such an equation usually models a homogeneous medium and constant physical quantities. In a variable coefficient differential equation, the coefficients are variable quantities, functions of the space-time variables \(\boldsymbol{x}\) and \(t\). The variable coefficient equation is more complex than the constant coefficient equation, but it can more realistically simulate the abundant physical phenomena of the real world, especially when boundary effects, non-uniform media, and variable external forces are considered. Examples include the generalized variable coefficient Kadomtsev-Petviashvili equation [43], which describes water waves in channels with variable width, depth, and density, and the heat equation with variable coefficients [13], which describes heat diffusion in heterogeneous media. Variable coefficient differential equations are not only more meaningful in practical applications but also more mathematically challenging. In view of the importance of variable coefficient models, the idea of applying the powerful PINN method to variable coefficient equations is very natural. However, to the best of our knowledge, research in this area is still lacking [68, 89]. It is difficult for the standard PINN to deal with the case where the coefficient function and the equation have different numbers of independent variables. For example, in \((1+1)\)-dimensional equations, variable coefficients related only to the time variable \(t\) are common, but apart from the unsatisfactory strategy of adding soft constraints to the loss function, the standard PINN cannot handle this situation.
Therefore, this paper proposes a deep learning method that specifically deals with variable coefficient forward and inverse problems - the Variable Coefficient Physical Information Neural Network (VC-PINN). On the basis of the standard PINN, it adds branch networks responsible for approximating the variable coefficients, which imposes hard constraints on the coefficients and avoids the problem of coefficients and equations having different numbers of independent variables. In addition, shortcut connections (the ResNet structure) are introduced into the feed-forward neural network. This structure solved an important problem ("network degradation") hindering the learning of deep neural networks in the field of image processing [28, 29]. In the context of variable coefficient problems, ResNet not only alleviates vanishing gradients but also serves a new purpose: it unifies linear and nonlinear coefficients (solving the network degradation problem for linear coefficients). Extensive numerical experiments are necessary to test the performance of the proposed method, so the accuracy of the reference results used in such performance comparisons is extremely important. However, it is not easy to obtain exact solutions of nonlinear partial differential equations, and it is even harder for variable coefficient problems. But there is a special class of PDEs with good properties - integrable systems. Their abundant exact solutions (multi-soliton, lump, breather, rogue wave) have provided a sufficient sample space for constant coefficient PINN studies [52, 57, 59, 60, 23, 41, 69]. In addition, special integrable structures and integrable properties, including conserved quantities [45], symmetries [87], and Miura transformations [46], have been successfully incorporated into PINN. Remarkably, in the field of integrable systems, exact solutions of variable coefficient problems can be obtained by generalizing classical integrable methods. This provides exact samples, rather than high-precision numerical samples, for testing the performance of the developed VC-PINN. It is worth mentioning that the combination of deep learning with integrable methods such as Lax pairs, Darboux transformations, the inverse scattering transform, and the Hirota bilinear method is also anticipated in the future. This paper is organized as follows. In Section 2, VC-PINN is introduced from four aspects: problem setting, the ResNet structure, the forward problem, and the inverse problem. Sections 3 and 4 test the performance of the proposed method on forward and inverse problems for four equations (the variable coefficient Sine-Gordon equation, the generalized variable coefficient Kadomtsev-Petviashvili equation, the variable coefficient Korteweg-de Vries equation, and the variable coefficient Sawada-Kotera equation), including cases of high dimensionality and the coexistence of multiple variable coefficients. Several different forms of variable coefficients are involved (polynomials, trigonometric functions, fractions, oscillation damping coefficients). In Section 5, an in-depth analysis of VC-PINN is made by combining theory and numerical experiments, covering the necessity of ResNet, the relationship between the convexity of variable coefficients and learning, anti-noise analysis, and the unity of forward and inverse problems/the relationship with the standard PINN. Finally, a summary of the article and some empirical conclusions are given in Section 6.
## 2. Variable coefficient Physics-informed neural networks

Based on the classic PINN, this section proposes a new deep learning framework to specifically deal with the forward and inverse problems of variable coefficient PDEs, called VC-PINN. The framework is introduced from four aspects: problem setting, the ResNet structure, the forward problem, and the inverse problem.

### Problem setup

Consider a class of evolution equations with time-varying coefficients in real space, as follows: \[u_{t}=\mathbf{N}[u]\cdot\mathbf{C}[t]^{T},\ \mathbf{x}\in\Omega,\ t\in[T_{0},T_{1}], \tag{2.1}\] where \(u=u(\mathbf{x},t)\) represents the real-valued solution of equation (2.1), \(\Omega\) is a subset of \(\mathbb{R}^{N}\), and the \(N\)-dimensional space vector \(\mathbf{x}\) is written as \(\mathbf{x}=(x_{1},x_{2},\cdots,x_{N})\), so (2.1) is actually an \((N+1)\)-dimensional evolution equation. \(\mathbf{N}[\cdot]\) represents an operator vector, expressed in component form as \(\mathbf{N}[u]=(N_{1}[u],N_{2}[u],\cdots)\). Each component \(N_{i}\) is an operator, usually including but not limited to linear or nonlinear differential operators. Meanwhile, \(\mathbf{C}[t]=(c_{1}(t),c_{2}(t),\cdots)\) is a coefficient vector whose components \(c_{i}(t)\) are analytical functions of the time variable \(t\), and \(\mathbf{C}[t]\) has the same dimension as \(\mathbf{N}[u]\). The product of the corresponding components of \(\mathbf{N}[u]\) and \(\mathbf{C}[t]\) represents an operator with a time-varying coefficient \((c_{i}(t)N_{i}[u])\), usually interpreted as a dispersion term, a higher-order term, and so on. In particular, the variable coefficients considered here are functions of the time variable \(t\) only; the case where the variable coefficients also depend on the spatial variable \(\mathbf{x}\) is discussed in Section 5. For forward problems in the continuous sense, the expressions of the variable coefficients are fully known and written into the equations. For such problems, the standard PINN method suffices, just as in the constant coefficient case. But in engineering applications, the requirement of fully knowing the coefficient expressions is demanding. Therefore, the subsequent discussion of forward and inverse problems is carried out in the discrete sense. Specifically, whether the variable coefficients are known in the discrete sense serves as the basis for distinguishing forward from inverse problems. Although this division is not strictly rigorous, the discussion and analysis in Section 5 show that there are indeed essential differences between the forward and inverse problems in this sense, so the distinction is reasonable and meaningful. In view of the difference between variable coefficient and constant coefficient equations in the description of forward and inverse problems, it is necessary to give a formal definition of the forward and inverse problems of PDEs in the variable coefficient setting, as follows (Fig. 1): * In the forward problem, the variable coefficients \(c_{i}(t)\) are known in a discrete sense. Specifically, the values of the coefficients \(c_{i}(t)\) at a finite number of discrete points on \([T_{0},T_{1}]\) are the known information. These coefficient values correspond to observable physical quantities (with varying degrees of noise) in practical applications.
The forward problem with variable coefficients is thus formulated as solving for \(u\) over the region using the initial boundary value conditions for \(u\) (consistent with the constant coefficient case) and the above discrete coefficient values. * In the inverse problem, the coefficients to be determined are no longer several fixed constants but a set of functions of the time variable \(t\). Consistent with the constant coefficient problem, the values of \(u\) at finitely many discrete points in the region are the known information. Besides, due to the multi-solution nature of the inverse problem, it is necessary to provide the boundary values (\(c_{i}(T_{0})\) and \(c_{i}(T_{1})\)) of the variable coefficients \(c_{i}(t)\) (see Footnote 1). This boundary information corresponds to the initial and terminal values of the observed quantities in an experiment. In summary, the inverse problem is described as using these two pieces of known information to recover the complete variation of the coefficients over the time domain \([T_{0},T_{1}]\), that is, the value of \(c_{i}(t)\) at any time.

Footnote 1: This is not a mandatory condition, and the requirement of boundary values can be appropriately relaxed according to the difficulty of the problem and the limitations of the observation conditions.

Figure 1. (Color online) Schematic diagram of the forward and inverse problems of the variable coefficient equation in the discrete sense. (For simplicity, the space variable is considered one-dimensional.)

Because the descriptions of the forward and inverse problems differ between time-varying coefficient equations and constant coefficient equations, a new PINN framework is needed to deal with this specific type of equation. In addition, variable coefficients also bring new problems. It is well known that most complex physical phenomena are described by nonlinear models, but the nonlinearity of the equation does not mean that the function coefficients must also be nonlinear. In fact, many familiar physical quantities are linear as functions of some independent variable (not necessarily the time variable \(t\)), and they may well become function coefficients in nonlinear models. Therefore, how to unify linearity and nonlinearity in a neural network method is a challenge brought by variable coefficients.

### ResNet structure

In the variable coefficient problem, not only does \(u\) need to be represented by a neural network, but the variable coefficient \(c_{i}(t)\), which has a different number of independent variables, also needs to be approximated by a separate network. Without loss of generality, in the method description it is assumed that equation (2.1) involves only a single variable coefficient, i.e. \(\mathbf{C}[t]=c_{1}(t)\), and the corresponding operator vector \(\mathbf{N}[u]\) is abbreviated as \(\mathcal{N}[u]\). The method is equally applicable to the case of multiple variable coefficients; this simplification is only for a clearer description, and examples with multiple variable coefficients are indeed shown in the numerical experiments. In 2015, He et al. discovered an important problem that hinders the learning of deep neural networks - the problem of network degradation [28]. That is, when a deep network is built by directly stacking shallow networks, not only is it difficult to exploit the powerful feature extraction capability of the deep network, but the accuracy may even decrease.
At the same time, they proposed a network structure (the residual network, ResNet) that adds shortcut connections between network layers, which not only alleviates vanishing gradients but also solves the network degradation problem. Applied to image processing problems based on convolutional neural networks, this structure improved accuracy unprecedentedly. This simple yet effective design is widely used and has spawned many variants, including DenseNet [31]. ResNet has also appeared in research on solving PDEs within deep learning frameworks. Our team introduced residual blocks into PINN to handle the sine-Gordon equation, whose highly nonlinear terms make it difficult for the classical PINN to solve [40]. Cheng et al. used a PINN with ResNet blocks to achieve better results than the classic PINN on fluid flow problems such as the Burgers equation and the Navier-Stokes equations [11]. Based on the idea of shortcut connections, Niu's team proposed an adaptive learning rate residual network [7] and an adaptive multi-scale neural network with ResNet blocks [8] to alleviate, respectively, the gradient imbalance and the multi-frequency oscillation encountered when solving PDEs. In the variable coefficient problem, in addition to the known advantages mentioned above, ResNet can better unify linearity and nonlinearity in one network to adapt to different variable coefficients. In essence, unifying the two reduces to the problem of how a deep network approximates the identity mapping, which is exactly what He et al. discussed in [28]. Therefore, this paper also introduces the design of shortcut connections into the network structure of VC-PINN. Moreover, in Section 5.1, the necessity of the ResNet structure in variable coefficient problems is discussed in more depth in combination with numerical results. The ResNet used in this paper is not the original ResNet but a ResNet with a new residual unit [29], which was also proposed by He et al. shortly after [28] was published. The difference between the new ResNet and the original one lies in the relative position of the activation function and the shortcut connection. In the new ResNet, the activation function is moved before the shortcut connection; this connection mode is called "pre-activation" and improves the prediction accuracy of the network further. (Unless otherwise specified, the ResNet mentioned later refers to the new ResNet.) First, consider the most common feed-forward neural network (FNN) with a depth of \(D\). The \(0^{th}\) layer and the \(D^{th}\) layer are called the input layer and the output layer respectively, so naturally there are \(D-1\) hidden layers. A special requirement to consider before introducing shortcut connections is that the two vectors to be connected must have the same dimension. A common approach is to use linear projections to match the dimensions. In line with the principle of introducing as few new network parameters as possible, assume that the number of nodes in each hidden layer is \(N_{d}\). A ResNet structure with \(N_{B}\) residual blocks, each containing \(N_{h}\) hidden layers, is then given as follows. As shown in Fig. 2, \(X^{[d]}\) represents the state vector of the nodes of the \(d^{th}\) layer, and \(R^{[i]}\) is both the output of the \(i^{th}\) residual block and the input of the \((i+1)^{th}\) residual block.
The \(i^{th}\) residual block consists of layers \([(i-1)N_{h}+1]^{th}\) to \((iN_{h})^{th}\) of the network, so \(R^{[i]}\) is equal to the state vector \(X^{[iN_{h}+1]}\) of the \((iN_{h}+1)^{th}\) layer nodes. In order to show the structure of the network more clearly, and to express ResNet and the ordinary FNN in a unified way, a more mathematical expression is used here in place of the simplified representation of ResNet in [29]. The transformation between the input and output of the residual blocks is expressed as: \[\begin{split} X_{L}^{[0]}&=W^{[0]}X^{[0]}+b^{[0]},\\ X^{[1]}&=\mathcal{R}^{[0]}=\mathcal{K}X_{L}^{[0]}+(1-\mathcal{K})\mathcal{F}(X_{L}^{[0]}),\\ X^{[iN_{h}+1]}&=\mathcal{R}^{[i]}=\mathcal{L}_{i}(\mathcal{R}^{[i-1]})+\mathcal{K}\mathcal{R}^{[i-1]},\ i=1,2,\ldots,N_{B},\\ X^{[D]}&=W^{[D-1]}X^{[D-1]}+b^{[D-1]}=W^{[D-1]}\mathcal{R}^{[N_{B}]}+b^{[D-1]},\end{split} \tag{2.2}\] where \(X_{L}^{[0]}\) is an intermediate variable, and the coefficient \(\mathcal{K}\in\{0,1\}\) controls whether shortcut connections are included: \(\mathcal{K}=1\) gives the ResNet structure, while \(\mathcal{K}=0\) gives the ordinary FNN structure. The nonlinear map \(\mathcal{L}_{i}\) is the nonlinear part between the input and output of the \(i^{th}\) residual block, defined as follows: \[\begin{split}\mathcal{L}_{i}&\triangleq\mathcal{T}_{iN_{h}}\circ\mathcal{T}_{iN_{h}-1}\circ\cdots\circ\mathcal{T}_{(i-1)N_{h}+1},\ i=1,2,\ldots,N_{B},\\ \mathcal{T}_{i}(X)&\triangleq\mathcal{F}(W^{[i]}X+b^{[i]}),\ i=1,2,\ldots,N_{B}N_{h}.\end{split} \tag{2.3}\] Here \(W^{[i]}\in\mathbb{R}^{N_{i}\times N_{i+1}}\) and \(b^{[i]}\in\mathbb{R}^{N_{i+1}}\) respectively represent the weight matrix and bias vector between the \(i^{th}\) layer and the \((i+1)^{th}\) layer, where \(N_{i}\) is the number of nodes in the \(i^{th}\) layer, and \(N_{i}=N_{d}\) for \(i=1,2,\ldots,D-1\), as assumed above. The symbol "\(\circ\)" denotes function composition, and \(\mathcal{F}\) is a nonlinear activation function, usually chosen as the \(Sigmoid\), \(Tanh\), or \(ReLU\) function. In addition, \(\mathcal{T}_{i}\) is a nonlinear transformation composed of the nonlinear activation function \(\mathcal{F}\) and an affine transformation. There are a few key points to note about this ResNet structure: * In order to let the function represented by the above ResNet structure directly approximate any linear function from a mathematical point of view, not only is the "pre-activation" connection mode adopted here, but a linear projection without an activation function is also used to match the dimensions of the input layer and the first hidden layer. In this design, as long as appropriate weights and biases are found such that \(\mathcal{F}(W^{[i]}X+b^{[i]})=0\), the input is cut off in the nonlinear layers and propagates through the network only via the shortcut connections, so the linear output layer can easily approximate any linear function (Fig. 3). In particular, for activation functions satisfying \(\mathcal{F}(0)=0\) (such as \(tanh\)), only \(W^{[i]}=0,\ b^{[i]}=0\) is required. This requirement also fits initialization methods with zero-mean characteristics (such as \(Glorot\) and \(Kaiming\)) very well.

Figure 3. (Color online) The ResNet network information flow diagram when approximating a linear map. The dotted line indicates that the information flow is \(0\) or a small quantity.
The figure also shows the "pre-activation" connection mode.

Figure 2. (Color online) ResNet network structure diagram. It includes \(N_{B}\) residual blocks, and each residual block contains \(N_{h}\) hidden layers.

* The ResNet structure represented by formulas (2.2) and (2.3) is a very special case: each residual block has the same number of internal layers, and apart from the input and output layers there is no nonlinear fully connected layer independent of the residual blocks (this implies the relation \(D=N_{B}N_{h}+2\)). In fact, according to the needs of the mathematical and physical problem, residual blocks with different numbers of internal layers, or a strategy that interleaves residual blocks with nonlinear fully connected layers, are also feasible. Of course, from the perspective of unifying linearity and nonlinearity, the choice of not adding additional independent nonlinear fully connected layers is particularly appropriate. In order to distinguish different networks in the subsequent presentation, a specific ResNet is expressed in the structural parameter list form \(NN\{D,N_{d},N_{B},N_{h}\}_{\mathcal{K}=1}\), where the definition of \(\mathcal{K}\) is as in formula (2.2). To recap, \(D\) is the depth of the neural network, \(N_{d}\) is the number of hidden layer nodes, \(N_{B}\) is the number of residual blocks, and \(N_{h}\) is the number of network layers contained in each residual block. In fact, once the dimensions of the input and output are given, a specific ResNet depends entirely on these four structural parameters. On this basis, after choosing an appropriate initialization strategy, the initial state of the network is completely determined. In particular, when \(\mathcal{K}=0\), the parameter list \(NN\{D,N_{d},N_{B},N_{h}\}_{\mathcal{K}=0}\) represents a common FNN; the network structure is then determined only by \(D\) and \(N_{d}\), so it is abbreviated as \(NN\{D,N_{d}\}_{\mathcal{K}=0}\). The \(Glorot\) initialization method is considered here: the biases \(b^{[i]}\) are initialized to zero vectors, and the weights follow a zero-mean normal distribution, as follows: \[W^{[i]}_{j,k}\sim N(0,\sigma_{i}^{2}),\ \sigma_{i}^{2}=\frac{2}{N_{i}+N_{i+1}},\ i=0,1,\ldots,D-1, \tag{2.4}\] where \(W^{[i]}_{j,k}\) represents an element of the weight matrix \(W^{[i]}\), and the elements of the same weight matrix are independent and identically distributed. As mentioned at the beginning of this section, two networks need to be constructed to approximate the solution \(u\) and the variable coefficient \(c_{1}(t)\) respectively. Both networks adopt the ResNet structure described above; call them the trunk network \(NN_{u}\{D_{u},N_{d}^{u},N_{B}^{u},N_{h}^{u}\}\) and the branch network \(NN_{c}\{D_{c},N_{d}^{c},N_{B}^{c},N_{h}^{c}\}\) (see Footnote 2).

Footnote 2: The parameter \(\mathcal{K}\) is omitted here and in the following description, because the number of parameters in the list already determines whether the network is a ResNet or an FNN.
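To make the structure concrete, here is a minimal NumPy sketch of the forward pass defined by (2.2)-(2.4). It is only an illustration (the paper's own implementation uses TensorFlow 1.15), and all function names are ours; it covers the linear input projection, the \(N_{B}\) pre-activation residual blocks, the linear output layer, the switch \(\mathcal{K}\), and \(Glorot\) initialization.

```python
import numpy as np

def glorot(n_in, n_out, rng):
    # Glorot initialization (2.4): zero-mean normal with variance 2 / (N_i + N_{i+1}).
    return rng.normal(0.0, np.sqrt(2.0 / (n_in + n_out)), size=(n_in, n_out))

def init_params(dim_in, dim_out, N_d, N_B, N_h, rng):
    # Layer widths of NN{D, N_d, N_B, N_h}, where D = N_B * N_h + 2.
    widths = [dim_in] + [N_d] * (N_B * N_h + 1) + [dim_out]
    return [(glorot(m, n, rng), np.zeros(n)) for m, n in zip(widths[:-1], widths[1:])]

def forward(params, X, N_B, N_h, K=1.0):
    # Forward pass of (2.2)-(2.3); K = 1 gives ResNet, K = 0 the plain FNN.
    W0, b0 = params[0]
    XL = X @ W0 + b0                        # X_L^{[0]}: linear projection, no activation
    R = K * XL + (1.0 - K) * np.tanh(XL)    # X^{[1]} = R^{[0]}
    for i in range(N_B):
        Z = R
        for j in range(N_h):                # L_i composes N_h maps T(X) = F(WX + b),
            W, b = params[1 + i * N_h + j]  # with the activation applied before the
            Z = np.tanh(Z @ W + b)          # shortcut addition ("pre-activation")
        R = Z + K * R                       # R^{[i]} = L_i(R^{[i-1]}) + K * R^{[i-1]}
    WD, bD = params[-1]
    return R @ WD + bD                      # linear output layer X^{[D]}

# Trunk network NN_u{10, 40, 4, 2}: inputs (x, t), output u.
rng = np.random.default_rng(0)
params = init_params(2, 1, N_d=40, N_B=4, N_h=2, rng=rng)
u_vals = forward(params, np.zeros((5, 2)), N_B=4, N_h=2)   # five sample points
```

Setting `K=0.0` reproduces the plain FNN \(NN\{D,N_{d}\}_{\mathcal{K}=0}\) from the same parameter list, which is what makes the two structures directly comparable in the experiments of Section 5.1.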
### Forward Problem

This section introduces the VC-PINN method for the forward problem of the time-varying coefficient equations formulated in Section 2.1. Consider an initial boundary value problem (with a \(Dirichlet\) boundary condition) for a partial differential equation involving a single time-varying coefficient: \[\begin{split}& u_{t}=c_{1}(t)\mathcal{N}[u],\ \mathbf{x}\in\Omega,\ t\in[T_{0},T_{1}],\\ & u(\mathbf{x},T_{0})=g_{0}(\mathbf{x}),\ \mathbf{x}\in\Omega,\\ & u(\mathbf{x},t)=g_{\Gamma}(\mathbf{x},t),\ \mathbf{x}\in\partial\Omega,\ t\in[T_{0},T_{1}], \end{split} \tag{2.5}\] where \(\partial\Omega\) represents the boundary of the space domain \(\Omega\), the first equation of (2.5) is a special case of (2.1), and the last two equations of (2.5) correspond to the initial value condition and the \(Dirichlet\) boundary condition respectively. When \(c_{1}(t)\) is known (in the discrete sense), the neural network method is used to solve the initial boundary value problem (2.5), and the key is to construct an optimization problem. The function represented by the trunk network \(NN_{u}\{D_{u},N_{d}^{u},N_{B}^{u},N_{h}^{u}\}\) is denoted \(\tilde{u}(\mathbf{x},t;\theta_{u})\), which is to approximate the real solution \(u(\mathbf{x},t)\) of the initial boundary value problem (2.5); the function represented by the branch network \(NN_{c}\{D_{c},N_{d}^{c},N_{B}^{c},N_{h}^{c}\}\), used to approximate the real variable coefficient \(c_{1}(t)\), is denoted \(\tilde{c}(t;\theta_{c})\). \(\theta_{u}\in\Theta_{u}\) and \(\theta_{c}\in\Theta_{c}\) are the parameter spaces (weight and bias spaces) of the two networks \(NN_{u}\) and \(NN_{c}\), respectively. In order to introduce the loss function, define the residual of the equation at a point \((\tilde{\mathbf{x}},\tilde{t})\) as follows: \[f(\tilde{\mathbf{x}},\tilde{t};u,c):=\mathcal{N}_{0}[u,c]\big|_{\mathbf{x}=\tilde{\mathbf{x}},\,t=\tilde{t}},\quad\mathcal{N}_{0}[u,c]=\partial_{t}u-c\,\mathcal{N}[u]. \tag{2.6}\] The residual is derived from the first equation of (2.5), and \(\mathcal{N}_{0}[\cdot,\cdot]\) is a new operator composed of \(\mathcal{N}[\cdot]\) and \(\partial_{t}\). Here \(f(\cdot,\cdot;u,c)\) is treated as a functional that maps a point \((u,c)\) in function space to a function on the domain \(\Omega\times[T_{0},T_{1}]\), where \(u=u(\mathbf{x},t)\) and \(c=c(t)\) are interpreted as function-type parameters. The residual defined by (2.6) thus measures how well the equation is satisfied at the point \((\tilde{\mathbf{x}},\tilde{t})\) for a given function \(u(\mathbf{x},t)\) and coefficient \(c(t)\). In particular, if \(u_{0}\) is the solution of the initial boundary value problem (2.5) under the variable coefficient \(c_{1}(t)\), then obviously \(f(\tilde{\mathbf{x}},\tilde{t};u_{0},c_{1})=0,\ \forall\tilde{\mathbf{x}}\in\Omega,\tilde{t}\in[T_{0},T_{1}]\). Each set of parameters \(\theta=\{\theta_{u},\theta_{c}\}\) in the parameter space \(\Theta=\{\Theta_{u},\Theta_{c}\}\) defines a function \(\tilde{u}(\mathbf{x},t;\theta_{u})\) and a variable coefficient \(\tilde{c}(t;\theta_{c})\). Our goal is to find parameters \(\theta^{\star}=\{\theta_{u}^{\star},\theta_{c}^{\star}\}\) in the parameter space \(\Theta\) such that the residual \(f(\tilde{\mathbf{x}},\tilde{t};\tilde{u}^{\star},\tilde{c}^{\star})\) is sufficiently close to zero on the domain \(\Omega\times[T_{0},T_{1}]\). If, at the same time, \(\tilde{u}^{\star}\) satisfies the initial boundary value conditions, then it is sufficiently close to the real solution of the initial boundary value problem (2.5).
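As a concrete illustration of (2.6), the following sketch evaluates the residual for the toy choice \(\mathcal{N}[u]=u_{xx}\), with central finite differences standing in for the automatic differentiation that VC-PINN actually uses; the heat-equation test pair \(u=e^{-t}\sin x\), \(c\equiv 1\) and all names are our own illustrative assumptions.

```python
import numpy as np

def residual(u, c, x, t, eps=1e-4):
    # f(x, t; u, c) = u_t - c(t) * N[u] from (2.6), here with N[u] = u_xx.
    # Central differences stand in for the automatic differentiation used by VC-PINN.
    u_t  = (u(x, t + eps) - u(x, t - eps)) / (2 * eps)
    u_xx = (u(x + eps, t) - 2 * u(x, t) + u(x - eps, t)) / eps ** 2
    return u_t - c(t) * u_xx

# Sanity check: u = exp(-t) sin(x) solves u_t = u_xx (c = 1), so f vanishes.
u0 = lambda x, t: np.exp(-t) * np.sin(x)
c1 = lambda t: np.ones_like(t)
x, t = np.linspace(0.0, np.pi, 7), np.linspace(0.0, 1.0, 7)
print(np.max(np.abs(residual(u0, c1, x, t))))   # ~1e-8: zero up to truncation error
```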
The loss function is the key to network optimization. In order to better measure the gap between the real solution \(u_{0}(\mathbf{x},t)\) and \(\tilde{u}(\mathbf{x},t;\theta_{u})\), a loss function composed of initial value constraints, boundary constraints, coefficient constraints, and physical equation constraints is constructed: \[Loss(\theta)=Loss_{I}(\theta)+Loss_{b}(\theta)+Loss_{f}(\theta)+Loss_{c}(\theta), \tag{2.7}\] where \[Loss_{I}(\theta)=\frac{1}{n_{I}}\sum_{i=1}^{n_{I}}|\tilde{u}(\mathbf{x}_{I}^{i},T_{0};\theta_{u})-g_{0}(\mathbf{x}_{I}^{i})|^{2}, \tag{2.8}\] \[Loss_{b}(\theta)=\frac{1}{n_{b}}\sum_{i=1}^{n_{b}}|\tilde{u}(\mathbf{x}_{b}^{i},t_{b}^{i};\theta_{u})-g_{\Gamma}(\mathbf{x}_{b}^{i},t_{b}^{i})|^{2}, \tag{2.9}\] \[Loss_{f}(\theta)=\frac{1}{n_{f}}\sum_{i=1}^{n_{f}}|f(\mathbf{x}_{f}^{i},t_{f}^{i};\tilde{u}(\mathbf{x},t;\theta_{u}),\tilde{c}(t;\theta_{c}))|^{2}, \tag{2.10}\] \[Loss_{c}(\theta)=\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}|\tilde{c}(t_{c}^{i};\theta_{c})-c_{c}^{i}|^{2}. \tag{2.11}\] Here \(\{\mathbf{x}_{I}^{i},u_{I}^{i}\}_{i=1}^{n_{I}}\), \(\{\mathbf{x}_{b}^{i},t_{b}^{i},u_{b}^{i}\}_{i=1}^{n_{b}}\), \(\{\mathbf{x}_{f}^{i},t_{f}^{i}\}_{i=1}^{n_{f}}\) and \(\{t_{c}^{i},c_{c}^{i}\}_{i=1}^{n_{c}}\) denote four different types of point sets, referred to as _I-type_ points, _b-type_ points, _f-type_ points, and _c-type_ points. _I-type_ points are discrete initial value points, and \(u_{I}^{i}=g_{0}(\mathbf{x}_{I}^{i})\) is the value of the real solution \(u_{0}\) at the spatial position \(\mathbf{x}_{I}^{i}\) at time \(T_{0}\). Similarly, _b-type_ points are discrete boundary value points, and the value of the real solution \(u_{0}\) at the space-time position \((\mathbf{x}_{b}^{i},t_{b}^{i})\) is \(u_{b}^{i}=g_{\Gamma}(\mathbf{x}_{b}^{i},t_{b}^{i})\). The _f-type_ points are internal collocation points, obtained by random sampling (uniform random sampling, Latin hypercube sampling, etc.) in \(\Omega\times[T_{0},T_{1}]\); they contain only space-time position information and no function values. Finally, the _c-type_ points are discrete coefficient points, and \(c_{c}^{i}\) represents the real coefficient value at time \(t_{c}^{i}\); in the forward problem these points discretize the variable coefficient over the entire time domain. In the loss function (2.7), \(Loss_{I}\) and \(Loss_{b}\) are the initial value and boundary constraints respectively, \(Loss_{f}\) is the physical constraint, and \(Loss_{c}\) is the coefficient constraint unique to variable coefficient problems. As an initial attempt at the variable coefficient problem, only the simplest equally weighted loss is considered here, which facilitates the analysis of the effect of the ResNet structure. Nevertheless, regularization techniques and a series of weight-adjustment methods such as the \(Self\)-\(Adaptive\) \(Loss\), the \(Point\)-\(Weighting\) \(Method\), and the \(Soft\) \(Attention\) \(Mechanism\) have clearly improved the traditional PINN, so transplanting these modular techniques into our framework is also a promising direction for future work.
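The composite loss (2.7)-(2.11) can be assembled as in the sketch below, which reuses the `residual` helper from the previous sketch; `u_net` and `c_net` stand for the trunk and branch networks, and all argument names are illustrative.

```python
import numpy as np

mse = lambda pred, target: np.mean((pred - target) ** 2)

def vc_pinn_loss(u_net, c_net,
                 x_I, t_I, u_I,   # I-type: initial data at t = T0
                 x_b, t_b, u_b,   # b-type: boundary data
                 x_f, t_f,        # f-type: interior collocation points
                 t_c, c_c):       # c-type: discrete coefficient values
    loss_I = mse(u_net(x_I, t_I), u_I)                        # (2.8)
    loss_b = mse(u_net(x_b, t_b), u_b)                        # (2.9)
    loss_f = np.mean(residual(u_net, c_net, x_f, t_f) ** 2)   # (2.10)
    loss_c = mse(c_net(t_c), c_c)                             # (2.11)
    # Balanced weights, as in (2.7); this scalar is minimized over
    # {theta_u, theta_c} with Adam followed by L-BFGS.
    return loss_I + loss_b + loss_f + loss_c
```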
In order to find a local minimum of the loss function with good generalization as a substitute for the global minimum, our training strategy is a combination of first-order (\(Adam\)) and second-order (\(L\)-\(BFGS\)) optimization algorithms. From the perspective of the loss landscape, Adam first brings the iterate quickly into a region of good quality, and L-BFGS then uses its second-order accuracy to locate an ideal extreme point within that region. Furthermore, computing the derivatives of the network output \(\tilde{u}(\mathbf{x},t;\theta_{u})\) with respect to \(\mathbf{x}\) and \(t\) that appear in \(Loss_{f}\) is trivial for AD. The generalization error is a measure of the generalization effect of the model and is defined as: \[e_{u}^{r}=\frac{\sqrt{\sum_{i=1}^{n_{gu}}|\tilde{u}(\mathbf{x}_{gu}^{i},t_{gu}^{i};\theta^{\star})-u_{gu}^{i}|^{2}}}{\sqrt{\sum_{i=1}^{n_{gu}}|u_{gu}^{i}|^{2}}},\quad e_{c}^{r}=\frac{\sqrt{\sum_{i=1}^{n_{gc}}|\tilde{c}(t_{gc}^{i};\theta^{\star})-c_{gc}^{i}|^{2}}}{\sqrt{\sum_{i=1}^{n_{gc}}|c_{gc}^{i}|^{2}}}, \tag{2.12}\] where \(e_{u}^{r}\) and \(e_{c}^{r}\) are the relative errors of the solution \(u\) and the variable coefficient \(c_{1}\), respectively, \(\tilde{u}(\mathbf{x}_{gu}^{i},t_{gu}^{i};\theta^{\star})\) represents the generalization result of the optimal model at \((\mathbf{x}_{gu}^{i},t_{gu}^{i})\), and similarly for \(\tilde{c}(t_{gc}^{i};\theta^{\star})\). The points in the set \(\{\mathbf{x}_{gu}^{i},t_{gu}^{i},u_{gu}^{i}\}_{i=1}^{n_{gu}}\) are called \(g_{u}\)-\(type\) points; they are the grid points of the full space-time domain together with the corresponding values of the real solution \(u_{0}\). The equidistant discrete points in the time domain and the corresponding real values of the variable coefficient \(c_{1}\) constitute the point set \(\{t_{gc}^{i},c_{gc}^{i}\}_{i=1}^{n_{gc}}\), whose points are called \(g_{c}\)-\(type\) points. These two types of point sets are the "standard ruler" for measuring the errors of solutions and coefficients. Note that (2.12) is a relative error based on the \(L^{2}\) norm, which mainly measures the average level of the error. To consider the generalization error from other angles, error formulas based on other norms can also be used; for example, an error based on the \(L^{\infty}\) norm measures the maximum error of the model.
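For reference, a small utility matching definition (2.12) (illustrative code only):

```python
import numpy as np

def rel_l2(pred, exact):
    # Relative L2 error (2.12): ||pred - exact||_2 / ||exact||_2.
    return np.linalg.norm(pred - exact) / np.linalg.norm(exact)

def rel_linf(pred, exact):
    # L-infinity analogue: measures the maximum error instead of the average level.
    return np.max(np.abs(pred - exact)) / np.max(np.abs(exact))

# e_u^r = rel_l2(u_net(x_gu, t_gu), u_gu) on g_u-type points;
# e_c^r is computed likewise on the g_c-type points.
```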
### Inverse Problem

As described in Section 2.1, in the inverse problem the real solution \(u_{0}\) of the equation is known in a discrete sense, and the variable coefficient \(c_{1}\) becomes the target. The formulation of the problem therefore changes from (2.5) to \[\begin{split}& u_{t}=c_{1}(t)\mathcal{N}[u],\ \mathbf{x}\in\Omega,\ t\in[T_{0},T_{1}],\\ & c_{1}(T_{0})=\mathcal{C}_{0},\ c_{1}(T_{1})=\mathcal{C}_{1}, \end{split} \tag{2.13}\] where the first line is the original equation, and the second line represents the two-endpoint conditions on the variable coefficient. In simple problems this condition can be relaxed to a single endpoint or even dropped, whereas in more challenging problems information on higher derivatives at both endpoints, and even interior point information, is required. The conditions on higher-order derivatives are also given here: \[\left.\frac{\partial^{k}c_{1}}{\partial t^{k}}\right|_{t=T_{0}}=\mathcal{C}_{0}^{(k)},\ \left.\frac{\partial^{k}c_{1}}{\partial t^{k}}\right|_{t=T_{1}}=\mathcal{C}_{1}^{(k)},\ k=1,2,\cdots, \tag{2.14}\] where \(\mathcal{C}_{0}^{(k)}\) and \(\mathcal{C}_{1}^{(k)}\) are the corresponding higher-order derivative values at the two endpoints. This condition is mentioned here because it is used in the multiple variable coefficient example in Section 4.2, but it is not required for all problems. Under the VC-PINN framework, the treatment of the inverse problem of variable coefficients is largely unified with that of the forward problem; the change occurs almost exclusively in the composition of the loss function. The specific differences are as follows: \[\begin{split}& Loss_{I}(\theta)+Loss_{b}(\theta)\to Loss_{s}(\theta),\\ & Loss_{s}(\theta)=\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}|\tilde{u}(x_{s}^{i},t_{s}^{i};\theta_{u})-u_{s}^{i}|^{2},\end{split} \tag{2.15}\] where the sampling points of the real solution \(u_{0}\) over the full space-time region are called \(s\)-\(type\) points; they constitute the set \(\{x_{s}^{i},t_{s}^{i},u_{s}^{i}\}_{i=1}^{n_{s}}\), and \(u_{s}^{i}\) is the value of the real solution \(u_{0}\) at the point \((x_{s}^{i},t_{s}^{i})\). The \(s\)-\(type\) points represent the known solution information (considered an observable quantity in practice), and in the method description they are regarded as obtained by random sampling (see Footnote 3). Thus, the loss function of the inverse problem (2.13) is (disregarding condition (2.14)):

Footnote 3: In specific problems, because efficiency and cost need to be considered, the distribution of \(s\)-type points is usually determined after careful deliberation, such as the placement of observation buoys in the ocean.

\[Loss(\theta)=Loss_{s}(\theta)+Loss_{f}(\theta)+Loss_{c}(\theta). \tag{2.16}\] The loss function of the inverse problem replaces the \(I\)-\(type\) and \(b\)-\(type\) points with the \(s\)-\(type\) points; in fact, all of them represent known information about the solution, the difference being that the former represent initial and boundary information while the latter represent interior information. In addition, the change to the loss term \(Loss_{c}\) is exactly the opposite of that to \(Loss_{s}\): the form of the loss remains unchanged, but the known information moves from the interior to the boundary, so the set of \(c\)-\(type\) points in the inverse problem usually contains only two points. In summary, Fig. 4 shows the framework of the VC-PINN method, covering both forward and inverse problems. All code in this article is based on Python 3.7 and TensorFlow 1.15, and all numerical examples reported later were run on a DELL Precision 7920 Tower computer with a 2.10 GHz 8-core Xeon Silver 4110 processor, 64 GB of memory, and a GTX 1080Ti GPU.

Figure 4. (Color online) Flow chart of VC-PINN in forward and inverse problems.
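Mirroring the forward-problem sketch above, the inverse-problem loss (2.15)-(2.16) only swaps the data terms; again all names are illustrative, and `mse`/`residual` are the helpers defined earlier.

```python
def vc_pinn_inverse_loss(u_net, c_net,
                         x_s, t_s, u_s,   # s-type: sampled interior solution values
                         x_f, t_f,        # f-type: interior collocation points
                         t_c, c_c):       # c-type: usually just the endpoints T0, T1
    loss_s = mse(u_net(x_s, t_s), u_s)                        # (2.15): replaces Loss_I + Loss_b
    loss_f = np.mean(residual(u_net, c_net, x_f, t_f) ** 2)
    loss_c = mse(c_net(t_c), c_c)                             # endpoint constraints of (2.13)
    return loss_s + loss_f + loss_c                           # (2.16)
```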
## 3. Numerical experiments on forward problems

This section shows two numerical examples of the VC-PINN method on variable coefficient forward problems: the variable coefficient Sine-Gordon equation and the generalized variable coefficient Kadomtsev-Petviashvili equation. The two equations differ in dimension (the first is \((1+1)\)-dimensional, while the second is \((2+1)\)-dimensional); this setting is chosen to test the proposed method in different dimensions.

### Sine-Gordon equation with variable coefficient

The Sine-Gordon (SG) equation with constant coefficients is \[u_{xt}+\sin(u)=0, \tag{3.1}\] which is a hyperbolic partial differential equation. Edmond Bour originally proposed it in his study of surfaces of constant negative curvature [3], and Frenkel and Kontorova rediscovered it in their study of crystal dislocations in 1938 [37]. In addition to differential geometry and crystal dislocation motion, the SG equation explains important nonlinear phenomena in branches of modern science including nonlinear quantum field theory, plasma physics, ultra-short optical pulse propagation, and DNA soliton dynamics [21, 25, 65, 79]. The non-uniformity exhibited by non-autonomous SG equations with time-varying coefficients is also worthy of attention. Consider the SG equation with variable coefficients (vSG): \[u_{xt}+h(t)\sin(u)=0, \tag{3.2}\] where \(h(t)\) is an analytical function that describes how the coefficient of the equation changes over time. The vSG equation plays a crucial role in spin-wave propagation with variable interaction strength and in the flux dynamics of Josephson junctions with impurities [4]. The work of [72] proves that for any analytical function \(h(t)\), equation (3.2) passes the Painleve test (verifying its integrability), and provides its analytical solution, as follows: \[u(x,t)=4\arctan\left(\frac{f(x,t)}{g(x,t)}\right). \tag{3.3}\] The form of this solution is consistent with the constant coefficient SG equation; the difference lies in the constraints on the two auxiliary functions. In particular, considering the single soliton solution of equation (3.2) (multi-soliton solutions are usually singular), we have \[f(x,t)=e^{k_{1}x-\omega_{1}(t)},\ g(x,t)=1, \tag{3.4}\] \[\omega_{1}(t)=\int\frac{h(t)}{k_{1}}dt, \tag{3.5}\] where \(k_{1}\in\mathbb{R}\) is a free parameter. The single soliton solution of (3.2) is then \[u(x,t)=4\arctan\left(e^{k_{1}x-\omega_{1}(t)}\right), \tag{3.6}\] where \(\omega_{1}\) is determined by (3.5). Although (3.6) represents a single soliton solution, \(\omega_{1}\) is obtained as the indefinite integral of the coefficient function \(h(t)\), so different choices of \(h(t)\) produce many solutions with rich dynamic behavior, which has no counterpart in the constant coefficient problem. Next, we discuss coefficient functions of three forms (a first-degree polynomial, a quadratic polynomial, and a trigonometric function) and use the proposed VC-PINN method to obtain data-driven solutions of the corresponding initial boundary value problems. The initial boundary value data required in \(Loss_{I}\) and \(Loss_{b}\) and the discrete coefficient values required in \(Loss_{c}\) are obtained from the exact solution (3.6) and the given coefficient function, respectively. Details are as follows: * **First-degree polynomial:** Assuming \(h(t)=t\), and taking the integration constant as 0 in the indefinite integral (3.5) (see Footnote 4), the exact solution of equation (3.2) is: \[u_{1}^{(vSG)}=4\arctan\left(e^{-\frac{t^{2}}{2k_{1}}+k_{1}x}\right). \tag{3.7}\]

Footnote 4: Unless otherwise specified, integration constants default to 0.

The two data-driven solutions found by the VC-PINN approach for the free parameter \(k_{1}=\pm 1\), together with the accompanying error results, are shown in Fig. 5. The sign of the parameter \(k_{1}\) controls whether the solution presents a bulging convex hull or a collapsed concave hull. In fact, both evolve from the single kink solution of the constant coefficient SG equation, and the coefficient function determines the moment and position of the kink. The linear coefficient function becomes a quadratic polynomial through indefinite integration, which is why the kink shown in Fig. 5 appears roughly as a parabola.
The \(L^{2}\) relative generalization errors \(e_{u}^{r}\) of the data-driven solutions for \(k_{1}=\pm 1\) are \(1.92\times 10^{-4}\) and \(4.11\times 10^{-4}\), respectively, which shows that the proposed method captures the dynamical behavior of the kink-like solution of the vSG equation well. * **Quadratic polynomial:** Assuming the coefficient function is \(h(t)=t^{2}\), the exact solution of equation (3.2) is \[u_{2}^{(vSG)}=4\arctan\left(e^{-\frac{t^{3}}{3k_{1}}+k_{1}x}\right). \tag{3.8}\] From expression (3.8) it is observed that when the coefficient function is a quadratic polynomial, the time variable \(t\) appears to an odd power rather than an even power as in (3.7). Therefore, taking the free parameter \(k_{1}\) with the opposite sign yields only a mirror image under reflection of the space-time coordinates (not two completely different behaviors as in Fig. 5), so only \(k_{1}=1\) is discussed here. Similar to the linear coefficient case, the locations where the kink occurs trace out the shape of a cubic polynomial, which is evident in Fig. 6. The \(L^{2}\) relative generalization error of the obtained data-driven solution is \(e_{u}^{r}=9.55\times 10^{-5}\). Combining the error density plot and the three time snapshots, it can be seen that the dynamic behavior of the vSG equation under quadratic polynomial coefficients has been successfully learned. * **Trigonometric function:** When the coefficient function is a cosine function with periodic properties, that is, \(h(t)=3\cos(2t)\), the corresponding exact solution is \[u_{3}^{(vSG)}=4\arctan\left(e^{-\frac{3\sin(2t)}{2k_{1}}+k_{1}x}\right). \tag{3.9}\] The periodic coefficient function dictates that the kinks also appear periodically. The error density plot in Fig. 7 clearly shows that the locations where kinks occur are accompanied by larger errors (in a relative sense); as we found in [52], there is a strong correlation between high error and large gradient. The proposed method again achieves satisfactory results under coefficients with periodic properties: the relative \(L^{2}\) generalization error of the data-driven solution is \(e_{u}^{r}=1.73\times 10^{-3}\). More detailed graphical results are displayed in Fig. 7.

Figure 5. (Color online) The kink-like solution \(u_{1}^{(vSG)}\) of the vSG equation with a linear coefficient ((a)-(b) correspond to \(k_{1}=1\); (c)-(d) correspond to \(k_{1}=-1\)). (a) and (c): the density plots of the data-driven solution and the corresponding error are located in the upper part; the comparison of three time snapshot curves of the exact and data-driven solutions is located in the lower part. (b) and (d): \(3D\) surface plots of the data-driven solution.

Figure 6. (Color online) The kink-like solution \(u_{2}^{(vSG)}\) of the vSG equation under a quadratic coefficient (\(k_{1}=1\)). (a): the density plots of the data-driven solution and the corresponding error are located in the upper part; the comparison of three time snapshot curves of the exact and data-driven solutions is located in the lower part. (b): \(3D\) surface plot of the data-driven solution.

Figure 7. (Color online) The kink-like solution \(u_{3}^{(vSG)}\) of the vSG equation under the cosine coefficient (\(k_{1}=1\)). (a): the density plots of the data-driven solution and the corresponding error are located in the upper part; the comparison of three time snapshot curves of the exact and data-driven solutions is located in the lower part. (b): \(3D\) surface plot of the data-driven solution.

In the above numerical experiments, although the space-time regions differ between examples, the same \(513\times 201\) equidistant discretization is used to mesh the region, from which the preliminarily screened initial boundary value data and the \(g_{u}\)-\(type\) points required for generalization error analysis are obtained. All examples use the Tanh activation function and a unified network structure: the trunk network is \(NN_{u}\{10,40,4,2\}\) and the branch network is \(NN_{c}\{6,30,2,2\}\). (Exception: for \(k_{1}=1\) under the linear coefficient, the trunk network is \(NN_{u}\{10,40\}\).) The training strategy is \(L\)-\(BFGS\) optimization after \(5000\) \(Adam\) iterations, and the numbers of the various types of points involved in the loss function are set to \(\{n_{I}+n_{b},n_{f},n_{c}\}=\{800,20000,60\}\) (see Footnote 5). Appendix B.1 gives more detailed preset model parameters and experimental results, including random seeds, training times, numbers of iterations, etc. In general, the proposed method shows good performance on the forward problem with \((1+1)\)-dimensional variable coefficients: across the various coefficient types, the \(L^{2}\) generalization error reaches the \(10^{-4}\) or even \(10^{-5}\) level.

Footnote 5: In practice, \(I\)-\(type\) points and \(b\)-\(type\) points are sampled together. The same holds, unless otherwise specified, in the following examples.
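As a sanity check on the exact solutions used as reference data above, the sketch below evaluates the kink-like solution (3.6) for the three coefficient choices and verifies the vSG equation (3.2) numerically, with central finite differences standing in for AD; the grids and names are illustrative assumptions.

```python
import numpy as np

k1 = 1.0
# omega_1(t) = integral of h(t)/k1 dt for the three coefficient choices
# (integration constants taken as 0, as in Footnote 4).
cases = {
    "h(t) = t":         (lambda t: t,                 lambda t: t ** 2 / (2 * k1)),
    "h(t) = t^2":       (lambda t: t ** 2,            lambda t: t ** 3 / (3 * k1)),
    "h(t) = 3 cos(2t)": (lambda t: 3 * np.cos(2 * t), lambda t: 3 * np.sin(2 * t) / (2 * k1)),
}

x, t, eps = np.linspace(-2, 2, 9), np.linspace(-2, 2, 9), 1e-4
for name, (h, omega) in cases.items():
    u = lambda x, t, w=omega: 4 * np.arctan(np.exp(k1 * x - w(t)))   # kink solution (3.6)
    # u_xt by central differences; residual of the vSG equation (3.2).
    u_xt = (u(x + eps, t + eps) - u(x + eps, t - eps)
            - u(x - eps, t + eps) + u(x - eps, t - eps)) / (4 * eps ** 2)
    print(name, np.max(np.abs(u_xt + h(t) * np.sin(u(x, t)))))   # ~0 up to truncation error
```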
### Generalized Kadomtsev-Petviashvili equation with variable coefficient

Various forms of generalized KP equations with variable coefficients were proposed long ago [15, 16, 27]. The motivation for these models was to describe water waves propagating in straits or rivers, rather than waves propagating on unbounded surfaces like oceans. Additional terms and variable coefficients allow them to handle channels of varying width, depth, and density, and even to take eddies into account, providing a more realistic description of surface waves than the standard KP equations. In [43] and [81], the Painleve analysis and the Grammian solution of the following generalized variable coefficient KP equation are respectively given. The specific form of the equation is as follows: \[(u_{t}+f(t)uu_{x}+g(t)u_{xxx}+l(t)u+q(t)u_{x}+n(t)u_{y})_{x}+m(t)u_{yy}=0, \tag{3.10}\] where \(f(t)\neq 0\) and \(g(t)\neq 0\) represent the coefficients of nonlinearity and dispersion respectively, \(l(t)\), \(q(t)\) and \(n(t)\) are regarded as coefficients of perturbation effects, \(m(t)\) is the disturbed wave velocity along the \(y\) direction, and all these variable coefficients are analytical functions of \(t\). Equation (3.10) can degenerate into the standard KP equation [1] and the cylindrical KP equation [19] under certain coefficients. In order to test the performance of our method in \((2+1)\)-dimensional scenarios, let the variable coefficients \(n(t)=q(t)=0\) in (3.10), thus considering a simpler generalized KP equation with variable coefficients (gvKP): \[(u_{t}+f(t)uu_{x}+g(t)u_{xxx}+l(t)u)_{x}+m(t)u_{yy}=0, \tag{3.11}\] where \(u=u(x,y,t)\), \(x\) and \(y\) are space variables, and \(t\) is the time variable. [43] gives the exact solution of equation (3.10) based on the auto-Backlund transformation. Equation (3.11) is a special case of equation (3.10), so its exact solution can naturally be obtained.
Specifically, consider the following coefficient constraints: \[\begin{split}g(t)&=\gamma f(t)e^{-\int l(t)dt},\\ m(t)&=\rho f(t)e^{-\int l(t)dt},\end{split} \tag{3.12}\] where \(\gamma\) and \(\rho\) are arbitrary parameters. Once these two parameters are fixed, it can be seen from constraint (3.12) that the equation is completely determined by the variable coefficients \(f(t)\) and \(l(t)\). An analytical solution of equation (3.11) under constraint (3.12) is \[u(x,y,t)=12\frac{g}{f}\frac{\partial^{2}}{\partial x^{2}}\ln\phi \tag{3.13}\] with \[\phi=1+e^{px+ry-4p^{3}\int g(t)dt},\ \ r=\sqrt{\frac{3\gamma}{\rho}}p^{2}, \tag{3.14}\] where \(p\) is an arbitrary constant, and \(r\) is determined by \(\gamma\), \(\rho\), and the second formula of (3.14). When different function combinations of \(f(t)\) and \(l(t)\) are selected, the solution (3.13) takes completely different forms. Next, four coefficient combinations are discussed to test the performance of the proposed method and to reveal the abundant dynamical behavior of the solutions of the gvKP equation. Before that, we make some settings: let \(\gamma=\rho=1\), so that \(g(t)=m(t)\); thus only three variable coefficients are involved in the following forward problems, and they are all free. (Although the exact solution (3.13) is obtained under constraint (3.12), this constraint is not used when solving the forward problem with the VC-PINN method; the coefficients are treated as independent of each other in the neural network.) The initial boundary value data (_I-type_ points and _b-type_ points) required in the forward problem come from discretizing the exact solution (3.13) at the corresponding positions. Of course, the initial boundary value data are now distributed on one initial value surface and 4 boundary surfaces (as described in [52]). The data-driven solutions in the four cases are as follows: * **Case 1:** If \(f(t)=\sin(t),l(t)=\frac{1}{10}\), it follows naturally from (3.12) and \(\gamma=\rho=1\) that \[g(t)=m(t)=e^{-\frac{t}{10}}\sin(t), \tag{3.15}\] where the variable coefficients \(g(t)\) and \(m(t)\), formed by the product of an exponential function and a trigonometric function, both oscillate and decay over time, while the exact solution (3.13) becomes (letting \(p=1\)) \[u_{1}^{(gvKP)}=\frac{12e^{-\frac{t}{10}}e^{x+\sqrt{3}y+\frac{40}{101}e^{-t/10}[10\cos(t)+\sin(t)]}}{\left(1+e^{x+\sqrt{3}y+\frac{40}{101}e^{-t/10}[10\cos(t)+\sin(t)]}\right)^{2}}. \tag{3.16}\] The discrete coefficient values required by the forward problem are obtained directly from the coefficient expressions; the data-driven solution for Case 1 is shown in Fig. 8. From the expression (3.13) of the exact solution, it can be seen that a change of coefficient directly affects the form of the term involving the time variable \(t\), but hardly affects the form of the terms involving only the space variables \(x\) and \(y\). The indefinite integral of the variable coefficients \(g(t)\) and \(m(t)\) is still a product of an exponential function and trigonometric functions, which is why the wave in Fig. 8 displays behavior similar to that of the coefficients \(g(t)\) and \(m(t)\) (a "serpentine" movement that oscillates and decays over time).
Not only the direction of the wave: as \(t\) increases, the amplitude of the wave also gradually decays, so the wave is time-localized on the positive semi-axis of \(t\). In this case, the relative \(L^{2}\) error of the obtained data-driven solution is \(e_{u}^{r}=3.48\times 10^{-4}\), which preliminarily shows that the proposed method also has the expected effect in \((2+1)\) dimensions. * **Case 2:** When both \(f(t)\) and \(l(t)\) take the form of trigonometric functions (i.e. \(f(t)=\sin(t)\cos(t)\), \(l(t)=\sin(t)\)), the other two variable coefficients are \[g(t)=m(t)=e^{\cos(t)}\cos(t)\sin(t),\] (3.17) which are obviously both periodic functions; the exact solution of the gvKP equation is then derived from (3.13) as follows (\(p=1\)): \[u_{2}^{(gvKP)}=\frac{12e^{x+\sqrt{3}y+4e^{\cos(t)}[-1+\cos(t)]+\cos(t)}}{\left(1+e^{x+\sqrt{3}y+4e^{\cos(t)}[-1+\cos(t)]}\right)^{2}}. \tag{3.18}\] The indefinite integral of the variable coefficients \(g(t)\) and \(m(t)\) still maintains the periodic nature, which is consistent with the phenomenon, seen in Fig. 9, that the wave appears periodically along the \(t\) direction. The solution presents a "swallowtail" waveform in each time period, which is quite different from the soliton or breather of the constant coefficient equation. The formation of this waveform is closely related to the form of the indefinite integral of \(g(t)\), which is completely symmetric in each time period, while \(g(t)\) itself is not. The predicted and exact curves on both sides of the \(3D\) graph fit perfectly, which is very clear in Fig. 9, and the numerical results show that the relative \(L^{2}\) error of the data-driven solution is \(e_{u}^{r}=4.83\times 10^{-4}\). The above evidence fully demonstrates that our method predicts the dynamic behavior of \(u_{2}^{(gvKP)}\) with high accuracy. * **Case 3:** When \(f(t)\) and \(l(t)\) are both linear functions (\(f(t)=l(t)=t\)), \(g(t)\) and \(m(t)\) are the product of an exponential function and a polynomial, namely \[g(t)=m(t)=e^{-\frac{t^{2}}{2}}t.\] (3.19) Substituting them into the exact solution (3.13) yields (\(p=1\)) \[u_{3}^{(gvKP)}=\frac{6e^{-\frac{t^{2}}{2}}}{1+\cosh\left(4e^{-\frac{t^{2}}{2}}+x+\sqrt{3}y\right)}.\] (3.20) Figure 8. (Color online) Data-driven solution \(u_{1}^{(gvKP)}\) of the gvKP equation in Case 1: \(3D\) plot of the data-driven solution at 3 fixed \(y\)-axis coordinates. The curves on both sides represent the cross-section of the data-driven solution on the central axis of the \(x\) and \(t\) coordinates. (The blue solid line and the red dashed line correspond to the predicted solution and the exact solution, respectively.) Figure 9. (Color online) Data-driven solution \(u_{2}^{(gvKP)}\) of the gvKP equation in Case 2: \(3D\) plot of the data-driven solution at 3 fixed \(y\)-axis coordinates. The curves on both sides represent the cross-section of the data-driven solution on the central axis of the \(x\) and \(t\) coordinates. (The blue solid line and the red dashed line correspond to the predicted solution and the exact solution, respectively.) Fig. 10 depicts the data-driven solution under Case 3, and the waveform is now very similar to the waveform restricted to a single time period in Case 2. It is an interesting finding that the solutions under linear coefficients and trigonometric periodic coefficients have such similar waveforms.
Going back to the result we are most concerned about: under the proposed method, the relative \(L^{2}\) generalization error of the data-driven solution is \(e_{u}^{r}=2.23\times 10^{-3}\), and the dynamic behavior of the solution of the gvKP equation is successfully recovered again. * **Case 4:** When we reselect the variable coefficient \(f(t)\) in Case 3 as a quadratic function (i.e. \(f(t)=t^{2}\), \(l(t)=t\)), the variable coefficients \(g(t)\) and \(m(t)\) become \[g(t)=m(t)=e^{-\frac{t^{2}}{2}}t^{2}.\] (3.21) The corresponding exact solution also becomes \[u_{4}^{(gvKP)}=\frac{6e^{-\frac{t^{2}}{2}}}{1+\cosh\left[4e^{-\frac{t^{2}}{2}}t+x+\sqrt{3}y-2\sqrt{2\pi}\text{erf}(\frac{t}{\sqrt{2}})\right]},\] (3.22) where \(\text{erf}(\cdot)\) represents the Gaussian error function, defined as \[\text{erf}(t)=\frac{2}{\sqrt{\pi}}\int_{0}^{t}e^{-\eta^{2}}d\eta.\] (3.23) The appearance of this non-elementary function in solution (3.22) comes from the indefinite integral of the variable coefficients in (3.21). Among all the examples of the gvKP equation, only the waveform in this example is close to the shape of a soliton of the constant coefficient equation. But when we focus on the \(3D\) diagram in Fig. 11, we find that the wave is different from a soliton and actually follows the shape of a cubic curve, which is inseparable from the fact that the variable coefficient \(f(t)\) is a quadratic function. Combined with the example of quadratic coefficients in the vSG equation (Fig. 6), variable coefficients of the same form reveal connections even between completely different equations. The relative \(L^{2}\) generalization error in this case is \(e_{u}^{r}=1.30\times 10^{-3}\). In all the above examples, we display the \(3D\) plots at different position coordinates of \(y\) rather than at different times \(t\). This is because the variable coefficients of the discussed equation (3.11) are functions of \(t\) only. If we fix the time \(t\), what we see is a traveling wave in space whose traveling direction and speed change with time, but these changes are difficult to perceive through the \(3D\) plots. In these numerical examples, we fully feel the power of variable coefficients, which make the waveforms ever-changing so as to meet the requirements that natural phenomena impose on mathematical models. At the end of this section, we state the parameter settings of the method. An equidistant discretization of \(101\times 101\times 101\) is used for all examples. The other settings are also uniform across all examples: the trunk network and the branch network are \(NN_{u}\{10,40,4,2\}\) and \(NN_{c}\{8,30,2,2\}\) respectively, the activation function is Tanh, \(5000\) \(Adam\) iterations are performed before using \(L\)-\(BFGS\) (a minimal sketch of this two-stage optimization is given below), and the other parameters are \(\{n_{I}+n_{b},n_{f},n_{c}\}=\{6000,50000,100\}\). More detailed numerical results and parameter settings are in Appendix B.2. In general, the results of the 4 numerical examples prove that for \((2+1)\)-dimensional variable coefficient equations, our proposed method is not inferior and performs satisfactorily. We have reason to believe that it can still perform well for higher-dimensional equations, and this may also be our future work.
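For concreteness, the following is a minimal, illustrative PyTorch sketch of the two-stage Adam/L-BFGS training strategy used throughout the experiments; `model` and `loss_fn` are hypothetical placeholders, not the authors' actual implementation.

```python
import torch

def train(model, loss_fn, adam_iters=5000):
    # Stage 1: Adam iterations to find a good basin of attraction.
    adam = torch.optim.Adam(model.parameters())
    for _ in range(adam_iters):
        adam.zero_grad()
        loss = loss_fn()
        loss.backward()
        adam.step()

    # Stage 2: full-batch L-BFGS for fine convergence.
    lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=50000,
                              line_search_fn="strong_wolfe")
    def closure():
        lbfgs.zero_grad()
        loss = loss_fn()
        loss.backward()
        return loss
    lbfgs.step(closure)
```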
Figure 10. (Color online) Data-driven solution \(u_{3}^{(gvKP)}\) of the gvKP equation in Case 3: \(3D\) plot of the data-driven solution at 3 fixed \(y\)-axis coordinates. The curves on both sides represent the cross-section of the data-driven solution on the central axis of the \(x\) and \(t\) coordinates. (The blue solid line and the red dashed line correspond to the predicted solution and the exact solution, respectively.) ## 4. Numerical experiments on inverse problems This section presents numerical examples of the VC-PINN method on variable coefficient inverse problems. In addition to the most common \((1+1)\)-dimensional equations, we also try inverse problems in high-dimensional situations and inverse problems where multiple variable coefficients coexist. This section involves the previously discussed gvKP equation as well as two new equations: the variable coefficient Korteweg-de Vries equation and the variable coefficient Sawada-Kotera equation. ### Korteweg-de Vries equation with variable coefficient #### 4.1.1. Single variable coefficient The Korteweg-de Vries equation is one of the most important equations in the field of integrable systems. It was first used to describe waves on shallow water surfaces, and it is a completely solvable model (solved by the inverse scattering transformation [24]). The equation we discuss in this section is its variable coefficient version, the variable coefficient Korteweg-de Vries equation (vKdV), first proposed by Grimshaw [26]. The specific form is \[u_{t}+f(t)uu_{x}+g(t)u_{xxx}=0, \tag{4.1}\] where \(f(t)\) and \(g(t)\) are arbitrary analytic functions. In the case of polynomial coefficients, the auto-Backlund transformation, the Painleve property, and similarity reductions of this equation were obtained by techniques such as the WTC method and the classical Lie group method [53, 54]. In addition, Fan also gave a Lax pair, a symmetry, two conservation laws, and an analytical solution of the vKdV equation by means of the homogeneous balance method [22]. This analytical solution is of concern in the numerical practice of this section; it is an important sample for the inverse problem of VC-PINN. Assume that the variable coefficients in equation (4.1) satisfy the constraint \[g(t)=cf(t), \tag{4.2}\] where \(c\) is an arbitrary constant. Under this constraint, the exact solution given in [22] has the following form: \[u(x,t)=3c\alpha^{2}\text{sech}^{2}\left[\frac{1}{2}\alpha(x-c\alpha^{2}\int f(t)dt)\right], \tag{4.3}\] where \(\alpha\) is a free parameter. An obvious fact is that once the variable coefficient \(f(t)\) and the parameters \(c\) and \(\alpha\) are determined, the analytical solution (4.3) is fully determined. Let \(c=\alpha=1\); under this parameter setting, we discuss 3 different forms of \(f(t)\) to test the performance of the proposed method on the inverse problem. As the first attempt of VC-PINN on the inverse problem, our example is also the simplest and most general (neither high-dimensional nor with multiple coexisting variable coefficients). The internal data (\(s\)-\(type\) points) required in the inverse problem are completely derived from the discretization of the exact solution (4.3), and the coefficient information provided in this example only contains the two endpoints (boundary), with no other derivative information (that is, it does not include (2.14)).
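As an illustration of how such \(s\)-\(type\) training data can be produced, the following NumPy sketch samples the exact solution (4.3) at random interior points. The sampling box and the choice \(f(t)=t\) are ours, purely for illustration, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
c, alpha, n_s = 1.0, 1.0, 2000

# Random interior points (x, t); the box [-6, 6] x [-3, 3] is illustrative
# only -- the actual computational region may differ.
x = rng.uniform(-6.0, 6.0, n_s)
t = rng.uniform(-3.0, 3.0, n_s)

# Exact solution (4.3) with f(t) = t, so that int f dt = t**2 / 2.
F = t**2 / 2
u = 3 * c * alpha**2 / np.cosh(0.5 * alpha * (x - c * alpha**2 * F))**2

s_type_data = np.stack([x, t, u], axis=1)  # inputs and targets for Loss_s
```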
The following shows the discovery of function parameters for three variable coefficient forms: * **First-degree polynomial:** When the coefficient \(f(t)\) is linear (i.e. \(f(t)=t\)), the exact solution (4.3) becomes \[u_{1}^{(vKdV)}=3\text{sech}^{2}\left[\frac{1}{4}(t^{2}-2x)\right].\] (4.4) Fig. 12(b) shows a parabolic soliton under the linear coefficient, which has a completely different shape from a line soliton under constant coefficients. The reason for the formation of the parabolic shape is consistent with the example of the vSG equation: the indefinite integral of a linear function is a quadratic polynomial. Figure 11. (Color online) Data-driven solution \(u_{4}^{(gvKP)}\) of the gvKP equation in Case 4: \(3D\) plot of the data-driven solution at 3 fixed \(y\)-axis coordinates. The curves on both sides represent the cross-section of the data-driven solution on the central axis of the \(x\) and \(t\) coordinates. (The blue solid line and the red dashed line correspond to the predicted solution and the exact solution, respectively.) But the difference is that the convex or concave hull in the vSG equation evolves from a kink, whereas here it evolves from a line soliton, so it is localized. The comparison of the exact and predicted values of the coefficient \(f(t)\) in Fig. 12(a) tells us that the proposed method is also successful on the inverse problem, and the relative \(L^{2}\) error of the coefficient \(f(t)\) is \(e_{c}^{r}=2.82\times 10^{-4}\). * **Cubic polynomial:** Suppose the variable coefficient \(f(t)\) is a cubic polynomial, that is, \(f(t)=t^{3}\); the exact solution of equation (4.1) is then \[u_{2}^{(vKdV)}=3\text{sech}^{2}\left[\frac{1}{8}(t^{4}-4x)\right].\] (4.5) The quartic term of \(t\) in solution (4.5) comes from the indefinite integral of the cubic coefficient, which directly makes the trajectory of the soliton in Fig. 13(b) approximate a quartic curve. Compared with the parabolic soliton under the linear coefficient, the trajectory of the soliton in this case is more convex (note that the \(t\)-axis coordinate ranges of Fig. 12 and Fig. 13 are different). Another notable point is that the pointwise error curve increases from the order of \(10^{-3}\) to the order of \(10^{-2}\) when we change the coefficient from a linear to a cubic polynomial (the absolute error grows because the coefficient itself is much larger in magnitude, while the relative error remains comparable), and the error curve fluctuates more than in the case of linear coefficients (with all other network settings held fixed), which shows that the inverse problem in this case is more difficult. Finally, the relative \(L^{2}\) error of the coefficient \(f(t)\) is \(e_{c}^{r}=2.56\times 10^{-4}\). * **Trigonometric functions:** When the coefficient function is a cosine function, that is, \(f(t)=\cos(t)\), the exact solution of the corresponding vKdV equation is \[u_{3}^{(vKdV)}=3\text{sech}^{2}\left[\frac{1}{2}(x-\sin(t))\right].\] (4.6) Figure 12. (Color online) Function parameter discovery for the vKdV equation under the linear coefficient: (a) The real solution (red dotted line), predicted solution (blue solid line) and error curve (black dotted line, real minus predicted) of the function parameter \(f(t)\); the former two follow the left coordinates, the latter follows the right coordinates. (b) Data-driven dynamics of solution \(u_{1}^{(vKdV)}\). Figure 13. (Color online) Function parameter discovery for the vKdV equation under the cubic polynomial coefficient: (a) The real solution (red dotted line), predicted solution (blue solid line) and error curve (black dotted line, real minus predicted) of the function parameter \(f(t)\); the former two follow the left coordinates, the latter follows the right coordinates. (b) Data-driven dynamics of solution \(u_{2}^{(vKdV)}\).
The periodic coefficient function determines that the trajectory of the soliton seen in Fig. 14 is also periodic, and the period length and amplitude of the coefficient directly affect the evolution behavior of the soliton. The error curve in Fig. 14(a) is the most oscillatory among the above examples, which is inseparable from the periodicity of the variable coefficient \(f(t)\). The error curve remains on the order of \(10^{-3}\), and the relative \(L^{2}\) error of the coefficient function, \(e_{c}^{r}=2.26\times 10^{-4}\), suggests that the proposed method successfully inverts the variation of the coefficient. The following settings are applied in all examples: the variable coefficient is equidistantly divided into 500 equal parts, the activation function is Tanh, the trunk network and branch network are \(NN_{u}\{8,40,3,2\}\) and \(NN_{c}\{6,30,2,2\}\) respectively, 5000 \(Adam\) iterations are performed before \(L\)-\(BFGS\) optimization, and the other parameters are \(\{n_{s},n_{f}\}=\{2000,20000\}\). More detailed numerical results and parameter settings are in Appendix B.3. Our method shines in this first attempt at the variable coefficient inverse problem: under different forms of coefficients, it successfully inverts the change of the coefficient over time. In the following sections, we look forward to its performance with multiple variable coefficients and in high-dimensional situations. #### 4.1.2. Multiple variable coefficients In the previous section, we discussed the inverse problem of the vKdV equation in the case of a single variable coefficient. Although there are two variable coefficients in equation (4.1), what was discussed is the solution under constraint (4.2), and constraint (4.2) was substituted into the network, which is why it is only a single variable coefficient problem from the network's point of view. In this section, we no longer impose constraint (4.2) in the network, and rerun the experiments on the 3 examples from the previous section to test the performance of the proposed method under two variable coefficients. All settings including the exact solution and network parameters are kept the same as in Section 4.1.1 (except that \(c\) is changed from 1 to 2), and Fig. 15 shows the numerical results of function parameter discovery for the vKdV equation under two variable coefficients. The results in Fig. 15 show that when the number of function parameters to be discovered in the examples of Section 4.1.1 is increased to 2, the proposed method is still able to invert the changes of all function parameters over time. From the error plots, it can be found that for the linear and cubic polynomials, the main part of the error is distributed near the left and right boundary areas, while for the cosine coefficient, the error still maintains a relatively high-frequency oscillation. Table 1 presents more detailed relative \(L^{2}\) error results. In addition to re-running the examples of Section 4.1.1, we also discuss function parameter discovery for two other coefficient forms. The specific situations are as follows (set \(c=2,\alpha=1\)): Figure 14. (Color online) Function parameter discovery for the vKdV equation under the cosine coefficient: (a) The real solution (red dotted line), predicted solution (blue solid line) and error curve (black dotted line, real minus predicted) of the function parameter \(f(t)\); the former two follow the left coordinates, the latter follows the right coordinates.
(b) Data-driven dynamics of solution \(u_{3}^{(vKdV)}\). * **Case 1:** Assuming that the variable coefficients \(f(t)\) and \(g(t)\) are both fractional, that is, \(g(t)=2f(t)=\frac{1}{t}\), the exact solution is \[u_{4}^{(vKdV)}=6\text{sech}^{2}\left[\frac{1}{2}(x-4\text{ln}(t))\right],\] (4.7) which is not analytic at \(t=0\) (indeed it is undefined there), so in the inverse problem experiments we only discuss it on the positive half of the \(t\)-axis. Limiting the time interval to \([0.5,2]\) avoids the singularity problem. It should be noted that the motion curve of the soliton shown in Fig. 16 follows the logarithmic function, not the reciprocal function (although the two function curves are very similar). The error curves tell us that for the case of fractional coefficients, the error mainly comes from the area near the starting time (\(t=0.5\)). Combined with the error curves in Fig. 15, it can be concluded that the errors in function coefficient discovery are concentrated in regions where the coefficients are large and where they change rapidly (this is very similar to the error distribution of solutions in the forward problem). Finally, the relative \(L^{2}\) errors of \(f(t)\) and \(g(t)\) are \(1.35\times 10^{-3}\) and \(2.86\times 10^{-3}\), respectively. * **Case 2:** Let the variable coefficients \(f(t)\) and \(g(t)\) be products of an exponential function and a cosine function (i.e. \(f(t)=e^{-\frac{t}{2}}\cos(t)\), \(g(t)=2f(t)\)); the exact solution (4.3) becomes \[u_{5}^{(vKdV)}=6\text{sech}^{2}\left[\frac{1}{2}\left(x+\frac{4}{5}e^{-\frac{t}{2}}(\cos(t)-2\sin(t))\right)\right].\] (4.8) Because the indefinite integrals of \(f(t)\) and \(g(t)\) are still products of an exponential function and a trigonometric function, the trajectory of the soliton also presents the oscillation-decay shape consistent with the variable coefficients. In the previous examples we found that oscillation of the coefficients is related to high-frequency fluctuation of the errors. Although the coefficients in this case are not strictly periodic functions, the cosine factor endows the entire coefficient \(f(t)\) (or \(g(t)\)) with high-frequency oscillation properties similar to a periodic function, which explains why the error curve fluctuates so much. In addition, the conclusion already observed in Case 1 is backed up here again: the error of the coefficient \(g(t)\), which has larger absolute values and a faster rate of change, is significantly larger than that of \(f(t)\). The relative \(L^{2}\) errors of the variable coefficients \(f(t)\) and \(g(t)\) are \(1.85\times 10^{-3}\) and \(2.77\times 10^{-3}\). In these two new examples, the proposed method again demonstrates outstanding capabilities on the inverse problem with two variable coefficients. In our experiments, this method handles multiple variable coefficients of various types with ease, and the relative error is on the order of \(10^{-3}\) to \(10^{-4}\). Figure 15. (Color online) Function parameter discovery for the vKdV equation under two variable coefficients (the first row shows the true and predicted curves of \(f(t)\) and \(g(t)\), and the second row shows the error curves): (a) linear coefficients. (b) cubic polynomial coefficients. (c) cosine coefficients. At the end of this section, a unified setting is given: the
grid size of the coefficient is \(500\), the trunk network and the branch network are \(NN_{u}\{11,40,3,3\}\) and \(NN_{c}\{8,30,3,2\}\), the activation function is Tanh, and \(5000\) \(Adam\) iterations are performed before using \(L\)-\(BFGS\). The other parameters are \(\{n_{s},n_{f}\}=\{2000,20000\}\). More detailed numerical results and parameter settings are in Appendix B.4. ### Sawada-Kotera equation with variable coefficient In this section, we continue to increase the number of variable coefficients in the network to examine the capability limit of the proposed method. The importance of the KdV equation is self-evident, and the application of VC-PINN to the inverse problem of the vKdV equation was discussed in Section 4.1. Another important equation, the Sawada-Kotera (SK) equation, was obtained by Sawada and Kotera by extending the KdV equation to fifth order [64]. Their work also gives the \(N\)-soliton solution of the SK equation via the inverse scattering transformation. The SK equation has important applications in the fields of shallow water waves and nonlinear lattices, which will not be repeated here. What we care about in this section is the variable coefficient version of the SK equation, that is, the generalized SK equation with variable coefficients (gvSK), whose specific form is as follows: \[u_{t}+\alpha(t)uu_{xxx}+\beta(t)u_{x}u_{xx}+\gamma(t)u^{2}u_{x}+\rho(t)u_{xxxxx}=0, \tag{4.9}\] where \(\alpha(t),\beta(t),\gamma(t)\) and \(\rho(t)\) are arbitrary analytic functions of \(t\). Model (4.9) is often referred to when describing the interaction between a water wave and a floating ice cover, or gravity-capillary waves in fluid dynamics. The gvSK equation passes the Painleve test [76] and has been proven to possess the following integrability properties: a Lax pair, \(N\)-soliton solutions [84] and conservation laws [55]. In addition, it also includes many important equations, such as the Lax equation, the Kaup-Kupershmidt equation, and the Ito equation. The increase in the order of the derivatives increases the demand for computing power exponentially, but what we are really interested in is the coexistence of multiple variable coefficients rather than high-order derivatives. Based on these facts, to reduce the computational cost as much as possible, we decided to drop the fifth-order term (\(\rho(t)=0\)); the simplified equation is \[u_{t}+\alpha(t)uu_{xxx}+\beta(t)u_{x}u_{xx}+\gamma(t)u^{2}u_{x}=0. \tag{4.10}\] Figure 16. (Color online) Functional coefficient discovery (inverse problem) of the vKdV equation in Case 1: (a) Density plot of the data-driven solution of \(u_{4}^{(vKdV)}\). (b) \(3D\) plot of the data-driven solution of \(u_{4}^{(vKdV)}\). (c) Comparison of predicted and true curves for the two variable coefficients. (d) Error curves for the two coefficients. where \(u=u(x,t)\). Unless otherwise specified, gvSK refers to equation (4.10) instead of equation (4.9) in the remainder of this article. [20] adopts the symmetry method to obtain many new periodic wave solutions and solitary wave solutions of equation (4.9). To test the proposed method, we focus on some of these solutions. First let \[\xi=x-\frac{c_{4}}{c_{2}}\int\alpha(t)dt, \tag{4.11}\] where \(c_{2}\) and \(c_{4}\) are arbitrary constants, and \(\xi\) is a new variable related to the original variables \(x\) and \(t\).
The integrability condition is given as follows: \[\beta(t)=\frac{\beta_{0}}{c_{2}}\alpha(t),\ \gamma(t)=\frac{\gamma_{0}}{c_{2}}\alpha(t), \tag{4.12}\] where \(\beta_{0}\) and \(\gamma_{0}\) are arbitrary constants (constants of integration). Recall an exact solution in [20] given under the integrability condition (4.12); its expression is \[u(x,t)=\frac{1}{\gamma_{0}}\left(B+4c_{2}-6c_{2}\tanh^{2}(\xi)\right), \tag{4.13}\] which satisfies \[B=\sqrt{4c_{2}^{2}+c_{4}\gamma_{0}},\ \beta_{0}=-c_{2}. \tag{4.14}\] In fact, conditions (4.12) and (4.14) restrict some degrees of freedom of solution (4.13). More specifically, the gvSK equation and the solution (4.13) are completely determined by the parameters \(c_{2},c_{4},\gamma_{0}\) and the variable coefficient \(\alpha(t)\). Let \(c_{2}=2,c_{4}=1,\gamma_{0}=4\), and then we vary the form of the variable coefficient \(\alpha(t)\) to test whether the proposed method performs as expected when three variable coefficients coexist. What needs special attention is that the integrability condition (4.12) restricts the variable coefficients \(\alpha(t),\beta(t)\) and \(\gamma(t)\) to be linearly related, so from a purely mathematical point of view there is only one variable coefficient. However, we do not impose the integrability condition on the network; for the network, these three variable coefficients are completely independent, so this can be regarded as a situation where multiple variable coefficients coexist. In addition, given that the increase in the number of variable coefficients may bring greater difficulty, we decided to Figure 17. (Color online) Functional coefficient discovery (inverse problem) of the vKdV equation in Case 2: (a) Density plot of the data-driven solution of \(u_{5}^{(vKdV)}\). (b) \(3D\) plot of the data-driven solution of \(u_{5}^{(vKdV)}\). (c) Comparison of predicted and true curves for the two variable coefficients. (d) Error curves for the two coefficients. give the variable coefficients more boundary information (including the boundary values and the first-order derivative information at the boundary), which is derived from (2.13) and (2.14): \[\begin{split}& c_{1}(T_{0})=\mathcal{C}_{0},\ c_{1}(T_{1})=\mathcal{C}_{1},\\ &\left.\partial_{t}c_{1}\right|_{t=T_{0}}=\mathcal{C}_{0}^{(1)},\ \left.\partial_{t}c_{1}\right|_{t=T_{1}}=\mathcal{C}_{1}^{(1)},\end{split} \tag{4.15}\] where \(c_{1}(t)\) stands for each of the three variable coefficients \(\alpha(t),\beta(t)\) and \(\gamma(t)\). (A minimal sketch of a branch network with three outputs is given after the following examples.) The discretization of the exact solution (4.13) then provides the internal data points (\(s\)-\(type\) points) for the inverse problem, and the results of the inverse problem under polynomial coefficients of three different degrees are as follows: * **Linear coefficient:** Assuming the variable coefficient \(\alpha(t)=t\), then \[\beta(t)=-\alpha(t)=-t,\ \gamma(t)=2\alpha(t)=2t.\] (4.16) The exact solution (4.13) now becomes \[u_{1}^{(gvSK)}=\frac{1}{4}\left[8+2\sqrt{5}-12\tanh^{2}\left(\frac{t^{2}}{4}-x\right)\right].\] (4.17) Fig. 18 shows that for the case of linear coefficients, although the proposed method can invert the overall approximate changes of the three coefficients, the predicted curves and the exact curves do not match well on some intervals. In addition, the error curves exhibit a strange phenomenon: the three error curves seem to intersect at the same point at \(t=0\).
Although this seems to be somehow related to the fact that the three variable coefficients intersect at \((0,0)\), we did not find a plausible explanation. The error level shown in Fig. 18(b) has also reached an unprecedented \(10^{-1}\), but it is gratifying that the relative error can still be maintained at a level of about \(10^{-2}\). Specifically, the relative \(L^{2}\) errors of the variable coefficients \(\alpha(t),\beta(t)\) and \(\gamma(t)\) are \(3.05\times 10^{-2}\), \(4.18\times 10^{-2}\) and \(2.94\times 10^{-2}\), respectively. * **Quadratic polynomial coefficient:** Assuming the variable coefficient \(\alpha(t)=\frac{t^{2}}{4}\), the other two variable coefficients are \[\beta(t)=-\alpha(t)=-\frac{t^{2}}{4},\ \gamma(t)=2\alpha(t)=\frac{t^{2}}{2}.\] (4.18) Under the above variable coefficient setting, the corresponding exact solution is \[u_{2}^{(gvSK)}=\frac{1}{4}\left[8+2\sqrt{5}-12\tanh^{2}\left(\frac{t^{3}}{24}-x\right)\right].\] (4.19) The results presented in Fig. 19 show that the inverse problem under quadratic polynomial coefficients is solved unexpectedly better than under linear coefficients, even though one would naturally expect quadratic coefficients to be harder to invert. After ruling out chance, we conjecture that the reason for this counterintuitive phenomenon is that even though ResNet allows multiple nonlinear layers to approximate linear mappings more effectively, its ability seems to have an upper limit, at least in this example. In this example, the relative \(L^{2}\) errors of the variable coefficients \(\alpha(t),\beta(t)\) and \(\gamma(t)\) are \(1.68\times 10^{-2}\), \(1.49\times 10^{-2}\) and \(1.63\times 10^{-2}\), respectively. Figure 18. (Color online) Functional coefficient discovery (inverse problem) of the gvSK equation under linear coefficients: (a) Comparison of predicted and true values of the three variable coefficients. (b) Error curves for the three variable coefficients. * **Cubic polynomial coefficient:** Assuming that the variable coefficient \(\alpha(t)\) is a cubic polynomial, that is, \(\alpha(t)=\frac{t^{3}}{4}\), the variable coefficients \(\beta(t)\) and \(\gamma(t)\) are \[\beta(t)=-\alpha(t)=-\frac{t^{3}}{4},\ \gamma(t)=2\alpha(t)=\frac{t^{3}}{2},\] (4.20) and thus the exact solution (4.13) becomes \[u_{3}^{(gvSK)}=\frac{1}{4}\left[8+2\sqrt{5}-12\tanh^{2}\left(\frac{t^{4}}{32}-x\right)\right].\] (4.21) All three examples, including this one, display the unexplained phenomenon mentioned earlier: the error curves of the three coefficients have common intersection point(s). The results in this example tell us that there can even be multiple intersection points, and they do not even seem related to the intersection points of the original coefficient curves. This also reflects the "black box" problem of neural networks to a certain extent, and we believe it will be one of the topics we explore in the future. Another thing worth noting is that the error of \(\gamma(t)\) (green line), which has the largest rate of change and coefficient values, is also the largest among the three, which provides further factual support for our earlier empirical conclusion; the first two examples also clearly exhibit this phenomenon. The more quantitative numerical results: the relative \(L^{2}\) errors of the variable coefficients \(\alpha(t),\beta(t)\) and \(\gamma(t)\) are \(1.54\times 10^{-2}\), \(1.93\times 10^{-2}\) and \(1.45\times 10^{-2}\), respectively.
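As referenced above, the following is a minimal, illustrative sketch of a branch network with a three-dimensional output, one component per variable coefficient. The plain fully connected PyTorch network shown here is a simplification of the ResNet-type branch network actually used, and all names are hypothetical:

```python
import torch
import torch.nn as nn

class BranchNet(nn.Module):
    """Maps t -> (alpha(t), beta(t), gamma(t)); the three coefficients
    are treated as independent outputs of a single network."""
    def __init__(self, width=30, depth=3, n_coeffs=3):
        super().__init__()
        layers = [nn.Linear(1, width), nn.Tanh()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.Tanh()]
        layers += [nn.Linear(width, n_coeffs)]
        self.net = nn.Sequential(*layers)

    def forward(self, t):
        return self.net(t)  # shape: (batch, 3)
```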
In order to compare the errors of the variable coefficients under the different coefficient types more directly, we collect the above results in Table 2. Figure 19. (Color online) Functional coefficient discovery (inverse problem) of the gvSK equation under quadratic polynomial coefficients: (a) Comparison of predicted and true values of the three variable coefficients. (b) Error curves for the three variable coefficients. Figure 20. (Color online) Functional coefficient discovery (inverse problem) of the gvSK equation under the cubic polynomial coefficient: (a) Comparison of predicted and true values of the three variable coefficients. (b) Error curves for the three variable coefficients. The results of these numerical examples show that, compared with the single variable coefficient or two coexisting coefficients of the vKdV equation in Section 4.1, the error with three coexisting coefficients in the gvSK equation is significantly larger. However, the relative \(L^{2}\) error can still be maintained at the \(10^{-2}\) level, which is acceptable to us. Combined with the numerical results of the gvKP equation in the next section, it will be seen that the form of the equation may also be an important factor affecting the accuracy. Generally speaking, the proposed method has also withstood the test when three variable coefficients coexist, but how to further push the capability limit of the method is a direction we need to think about. Finally, some uniform settings in the experiment are given: the grid size of the coefficients is \(500\), the trunk network and the branch network are \(NN_{u}\{11,40,3,3\}\) and \(NN_{c}\{8,30,3,2\}\), the activation function is Tanh, \(5000\) \(Adam\) iterations are performed before using \(L\)-\(BFGS\), and the other parameters are \(\{n_{s},n_{f}\}=\{2000,20000\}\). More detailed numerical results and parameter settings are in Appendix B.5. ### Generalized Kadomtsev-Petviashvili equation with variable coefficient In this section, we wish to further complicate the problem from another angle; specifically, we consider increasing the dimensionality of the equation while keeping the number of coexisting variable coefficients at three. The \((2+1)\)-dimensional gvKP equation discussed in Section 3.2 contains three variable coefficients, which fully meets our requirements. Therefore, the exact solutions \(u_{i}^{(gvKP)},i=1,2,3,4\) under the four cases of Section 3.2 serve as the samples for the inverse problem in this section. In the following discussion, the free parameters and variable coefficients in these exact solutions are the same as in Section 3.2; the only difference is the space-time region discussed (in order to better present the changes in the coefficients). Fig. 21 shows the results of the inverse problem in the four cases. Quantified numerical results are more helpful for analysis and comparison: Table 3 shows the more detailed error results of the inverse problem. The order of magnitude of the relative \(L^{2}\) error is basically at the level of \(10^{-2}\) to \(10^{-3}\). This result is better than that of the gvSK equation in Section 4.2, which is very surprising. Although such comparisons were not performed under strict control of variables, some attempts after tuning hyperparameters tell us that this result seems to be general. Therefore, it is reasonable to guess that some properties of the gvSK equation itself affect the generalization of the network.
This further suggests that although the highly general PINN framework and its variants can be applied to most equations almost equally well, it is still necessary to design more targeted networks for equations with specific structures. In addition, observation of the error curves tells us that there are some exceptions to our proposed empirical conclusions in complex scenarios; for example, the largest error in Case 2 is that of the variable coefficient \(f(t)\) (red). Overall, in the inverse problem where multiple variable coefficients coexist in \((2+1)\) dimensions, the proposed method can still invert the variation of the coefficients with acceptable accuracy. Compared with the forward problem, the network parameters in the inverse problem of the gvKP equation are slightly modified, and a unified setting is given: the grid size of the coefficients is \(500\), the trunk network and the branch network are \(NN_{u}\{10,40,4,2\}\) and \(NN_{c}\{8,30,3,2\}\), the activation function is Tanh, \(5000\) \(Adam\) iterations are performed before using \(L\)-\(BFGS\), and the other parameters are \(\{n_{s},n_{f}\}=\{20000,50000\}\). More detailed numerical results and parameter settings are in Appendix B.6. ## 5. Analysis and Discussion In the numerical experiments on the forward and inverse problems in Section 3 and Section 4, the performance of VC-PINN is obvious to all. However, this section makes a further in-depth analysis of the proposed method from the perspectives of both principles and results. It mainly includes the following four aspects: 1. the necessity of ResNet; 2. the relationship between the convexity of variable coefficients and learning; 3. anti-noise analysis; 4. the unity of forward and inverse problems/relationship with standard PINN. \begin{table} \begin{tabular}{l||c|c|c} \hline \hline & Linear coefficients & Quadratic polynomial & Cubic polynomial \\ \hline \hline Error of \(\alpha(t)\) & 3.05\(\times 10^{-2}\) & 1.68\(\times 10^{-2}\) & 1.54\(\times 10^{-2}\) \\ Error of \(\beta(t)\) & 4.18\(\times 10^{-2}\) & 1.49\(\times 10^{-2}\) & 1.93\(\times 10^{-2}\) \\ Error of \(\gamma(t)\) & 2.94\(\times 10^{-2}\) & 1.63\(\times 10^{-2}\) & 1.45\(\times 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 2. \(L^{2}\) relative error of the variable coefficients \(\alpha(t)\), \(\beta(t)\) and \(\gamma(t)\) \begin{table} \begin{tabular}{l||c|c|c|c} \hline \hline & Case 1 & Case 2 & Case 3 & Case 4 \\ \hline \hline Error of \(f(t)\) & 2.98\(\times 10^{-3}\) & 1.95\(\times 10^{-2}\) & 2.45\(\times 10^{-2}\) & 1.29\(\times 10^{-2}\) \\ Error of \(g(t)\) & 2.06\(\times 10^{-3}\) & 4.01\(\times 10^{-3}\) & 1.10\(\times 10^{-2}\) & 3.98\(\times 10^{-3}\) \\ Error of \(l(t)\) & 5.94\(\times 10^{-3}\) & 2.10\(\times 10^{-3}\) & 6.98\(\times 10^{-3}\) & 4.42\(\times 10^{-3}\) \\ \hline \hline \end{tabular} \end{table} Table 3. \(L^{2}\) relative error of the variable coefficients \(f(t)\), \(g(t)\) and \(l(t)\) Figure 21. (Color online) The discovery of the function coefficients of the gvKP equation under four cases (inverse problem); each row corresponds to the results of one case (from top to bottom, Case 1 to Case 4): (a) The predicted values of the three variable coefficients compared with the real values. (b) Error curves for the three coefficients. ### The necessity of ResNet The proposed method adopts the ResNet structure, and this design is mainly based on two considerations.
On the one hand, ResNet itself can alleviate the "vanishing gradient"; on the other hand, in the variable coefficient problem, it unifies linearity and nonlinearity. The following further explains, from these two aspects, why ResNet is a suitable choice in our network. #### 5.1.1. Mitigating the problem of vanishing gradients The "vanishing gradient" is an important issue in the training of deep networks, first formally identified by Hochreiter (1991) in his diploma thesis [30]. The reason for this phenomenon is that small gradient values gradually accumulate during backpropagation and finally decay exponentially as the number of network layers increases. In order to explain at a theoretical level how ResNet alleviates the vanishing gradient, we analyze the propagation of gradients in the network (only the gradient of a single sample is given). In the subsequent derivation, in order to distinguish the trunk network and the branch network, we attach the labels \(u\) and \(c\) as subscripts to \(X^{[i]}\), \(R^{[i]}\), \(W^{[i]}\) and as superscripts to \(D\), \(N_{B}\), \(N_{h}\), \(\mathcal{K}\), where \(u\) and \(c\) denote the corresponding quantities in the trunk network and branch network, respectively. First, similar to the result in [29], the gradient of the loss term \(Loss_{s}(\theta)\) with respect to \(X_{u}^{[j]}\) is given as (for \(iN_{h}^{u}+1\leq j<(i+1)N_{h}^{u}+1\)) \[\frac{\partial Loss_{s}}{\partial X_{u}^{[j]}}=\frac{\partial Loss_{s}}{\partial R_{u}^{[N_{B}^{u}]}}\cdot\prod_{k=j}^{(i+1)N_{h}^{u}}\frac{\partial X_{u}^{[k+1]}}{\partial X_{u}^{[k]}}\cdot\prod_{k=i+1}^{N_{B}^{u}-1}\left[\mathcal{K}^{u}+\frac{\partial}{\partial R_{u}^{[k]}}\left(\mathcal{L}_{k}(R_{u}^{[k]})\right)\right], \tag{5.1}\] where the other definitions involved are consistent with (2.2) and (2.3). In fact, the gradient of \(Loss_{s}\) with respect to \(W_{u}^{[j-1]}\) only requires multiplying (5.1) by \(\frac{\partial X_{u}^{[j]}}{\partial W_{u}^{[j-1]}}\). The gradient formulas of the loss terms \(Loss_{I}(\theta)\), \(Loss_{b}(\theta)\) and \(Loss_{c}(\theta)\) involved in the forward and inverse problems are similar to the above, while the loss term \(Loss_{f}(\theta)\) is different. The gradients of the loss term \(Loss_{f}(\theta)\) with respect to the weights \(W_{u}^{[j]}\) and \(W_{c}^{[j]}\) of the trunk network and the branch network are given as: \[\frac{\partial(\tilde{u})_{t}}{\partial W_{u}^{[j]}}=W_{u}^{[D^{u}-1]}\cdot\frac{\partial R_{u}^{[0]}}{\partial t}\cdot\prod_{\begin{subarray}{c}0\leq k\leq N_{B}^{u}-1\\ k\neq i\end{subarray}}\left[\mathcal{K}^{u}+\frac{\partial}{\partial R_{u}^{[k]}}\left(\mathcal{L}_{k}(R_{u}^{[k]})\right)\right]\cdot\frac{\partial}{\partial W_{u}^{[j]}}\left[\mathcal{K}^{u}+\frac{\partial}{\partial R_{u}^{[i]}}\left(\mathcal{L}_{i}(R_{u}^{[i]})\right)\right], \tag{5.2}\] \[\frac{\partial Loss_{f}}{\partial W_{c}^{[j]}}=\frac{\partial Loss_{f}}{\partial f}\cdot\frac{\partial X_{c}^{[j+1]}}{\partial W_{c}^{[j]}}\cdot\mathcal{N}[\tilde{u}]\cdot W_{c}^{[D^{c}-1]}\cdot\prod_{k=j+1}^{(i+1)N_{h}^{c}}\frac{\partial X_{c}^{[k+1]}}{\partial X_{c}^{[k]}}\cdot\prod_{k=i+1}^{N_{B}^{c}-1}\left[\mathcal{K}^{c}+\frac{\partial}{\partial R_{c}^{[k]}}\left(\mathcal{L}_{k}(R_{c}^{[k]})\right)\right], \tag{5.3}\] where the conditions for (5.2) and (5.3) to hold are \(iN_{h}^{u}+1\leq j<(i+1)N_{h}^{u}+1\) and \(iN_{h}^{c}\leq j<(i+1)N_{h}^{c}\), respectively.
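To make the shortcut term \(\mathcal{K}\) concrete, the following is a minimal, illustrative PyTorch sketch of one residual block (the exact layer arrangement and the handling of mismatched widths in the actual implementation may differ):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """One residual block: X -> X + L(X), i.e. shortcut coefficient K = 1.
    With K = 1, the factors [K + dL/dR] in (5.1)-(5.3) stay close to 1
    even when the nonlinear branch L has small gradients."""
    def __init__(self, width=40, n_h=2):
        super().__init__()
        layers = []
        for _ in range(n_h):  # n_h nonlinear layers inside the block
            layers += [nn.Linear(width, width), nn.Tanh()]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)  # shortcut connection
```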
Returning to the formulas above: it is easy to see that (5.2) is not the complete gradient of \(Loss_{f}(\theta)\) with respect to \(W_{u}^{[j]}\); it is only computed with the \(\tilde{u}_{t}\) term in \(f\). But the other terms in \(f\) are completely analogous, and the final gradient is the sum of the gradients of all terms, so it does no harm to focus on one specific term. The loss terms \(Loss_{s}(\theta)\), \(Loss_{I}(\theta)\), \(Loss_{b}(\theta)\) and \(Loss_{c}(\theta)\) in the forward and inverse problems only contribute to the gradients of the weights in the trunk network, while the loss term \(Loss_{f}(\theta)\) contributes to the gradients of the weights in both the trunk network and the branch network. (5.1), (5.2) and (5.3) are the gradient formulas (or the main parts of the gradient formulas) of the loss function with respect to the weights. The parts involving products (\(\prod\)) are what we care about, because these are the terms most likely to drive the gradient toward zero. It can be inferred from the above formulas that if we adopt the ResNet structure for both the trunk network and the branch network (i.e. \(\mathcal{K}^{u}=\mathcal{K}^{c}=1\)), and ensure that the number of network layers (\(N_{h}^{u}\) and \(N_{h}^{c}\)) contained in each residual block is not too large, then the product parts in formulas (5.1), (5.2) and (5.3) are unlikely to be small quantities, so the "vanishing gradient" is alleviated to a certain extent. It is worth mentioning that if the batch normalization (BN) technique is used on top of the ResNet structure, it may produce an even better effect, but this is not the focus of this article. More specific details of the derivation are presented in Appendix C.1. #### 5.1.2. Unity of linear and nonlinear It is well known that nonlinear phenomena are an important part of natural science. But for a physical model (equation) with variable coefficients, the nonlinearity of the solution and the nonlinearity of the variable coefficients are not the same thing: even linear coefficients can generate nonlinear waves that explain natural phenomena, as we saw in the numerical experiments of Section 3 and Section 4 with parabolic solitons and kinks evolving along parabolic curves. In order to illustrate the importance of linear coefficients, a few specific examples are listed below: * The heat equation describes the diffusion of heat in a region; the following is its variable coefficient version in \(3D\) space [13]: \[u_{t}=\alpha(x,y,z)\Delta u,\] (5.4) where \(\Delta\) represents the Laplace operator (\(\Delta u=u_{xx}+u_{yy}+u_{zz}\)), and the variable coefficient \(\alpha(x,y,z)\) represents the thermal conductivity, which is related to the temperature and the nature of the medium. Therefore, it is entirely possible for the variable coefficient to be a linear function of a certain spatial component (such as \(x\)) in a heterogeneous medium. Of course, since thermal conductivity must remain positive, the coefficient can only stay linear on a certain interval. In addition, the diffusion equation is very similar in form to (5.4), and the diffusion coefficient may also vary linearly in some media with inhomogeneous concentrations. * The wave equation is used to describe the propagation of waves in classical physics, including mechanical waves, electromagnetic waves, and so on.
When there is an external force term, the wave equation with variable coefficients is given as [61] \[u_{tt}=\text{div}(p(\mathbf{x},t)\nabla u)+f(\mathbf{x},t),\] (5.5) where div and \(\nabla\) represent the divergence and gradient operators, respectively, while \(p(\mathbf{x},t)\) and \(f(\mathbf{x},t)\) represent the medium parameters and the external force term, respectively. In different physical backgrounds, \(p(\mathbf{x},t)\) can represent either medium parameters or the wave velocity. The inhomogeneity and dynamics of the medium may cause the wave velocity to vary at different locations and times. For example, when a sound wave propagates in a gas, factors such as the temperature and density of the gas may affect its propagation speed; in the propagation of seismic waves, physical properties such as the density and elastic modulus of the medium may also affect the propagation speed. Therefore, whether \(p(\mathbf{x},t)\) is a medium parameter or a wave velocity, it may well be a linear function of time or of some space variable. Moreover, for the freer external force term, a linear variation is even more likely. Although in the above examples some variable coefficients are linear functions of a spatial variable, from the perspective of our method they are essentially the same as linear functions of the time variable. Overall, these examples tell us that linear variable coefficients are also important and cannot be underestimated. But in the numerical experiments on the inverse problem, we found that increasing the number of network layers in the standard PINN does not seem to improve the accuracy for linear variable coefficients, but is counterproductive, as shown in Fig. 22. Fig. 22(a) and (b) show very clearly that as the number of layers of the branch network representing the variable coefficient increases, the accuracy for the linear variable coefficient gradually decreases; on the contrary, the accuracy for the nonlinear variable coefficient gradually becomes higher (red represents low precision, blue represents high precision). The completely opposite behavior exhibited by the linear and nonlinear coefficients is troublesome: since the linearity of the coefficient cannot be predicted before the network is trained, the choice between a deep and a shallow network is difficult. In addition, comparing the loss and error of the shallow network and the deep network (Fig. 23) shows that the abnormal results for linear coefficients are not caused by overfitting, because both the loss and the error of the deep network are higher than those of the shallow network. This phenomenon is more consistent with the network degradation problem described in [28, 29]. Theoretically, a deep network should not perform worse (whether in terms of accuracy or loss) than a shallow one, since the hypothesis space of the former contains that of the latter, and a deeper network can be obtained by adding identity-mapping layers to a shallow network. The network degradation phenomenon described in [28] reflects the difficulty of approximating identity maps with multiple nonlinear layers, which makes the disappointing performance of deep networks on linear coefficients understandable. The ResNet structure can effectively solve the degradation problem of the network. (c) and (d) in Fig. 22 are the heat maps of the errors under linear and nonlinear coefficients, respectively, as the number of network layers changes after using the ResNet structure.
(c) and (d) in Fig. 22 show that, no matter whether the coefficient is linear or not, as the number of branch network layers increases, the error tends to become smaller. In addition, it is worth noting that after comparing the color bar scales of the four panels in Fig. 22, it is easy to find that the ResNet structure improves the accuracy to a certain extent while solving the degradation problem. Table 4 shows the comparison between using and not using ResNet. Figure 23. (Color online) The change curves of the loss and the error of the deep network and the shallow network during training (only the \(L\)-\(BFGS\) optimization part is shown, and the data are smoothed, which does not affect the result). The data source is the same as in Fig. 22. (a) The curves of the loss function. (b) The error curves of the coefficients. Figure 22. (Color online) Heat maps of the \(L^{2}\) error of the coefficient under different numbers of trunk network and branch network layers. This figure is based on the experiments with solution \(u_{1}^{(vKdV)}\) (linear coefficient) and solution \(u_{2}^{(vKdV)}\) (nonlinear coefficient) of the vKdV equation in Section 4.1. (a) and (b) do not use the ResNet structure; (c) and (d) use the double ResNet structure. The data in the figure come from repeated experiments with 5 sets of random seeds, and more detailed settings and results are shown in Appendix C.2. * Our analysis is mainly based on the inverse problem of the vKdV equation. For the forward problem, because the variable coefficient is known in the discrete sense and the trunk network usually represents a nonlinear wave, the degradation problem does not arise. In addition, in our experiments a double ResNet structure is adopted (the trunk network and the branch network are both ResNet structures). Although ResNet seems to work mainly on the branch network, the test results show that the double ResNet structure unexpectedly performs better than the single ResNet structure. In general, using the ResNet structure improves accuracy while unifying linearity and nonlinearity, and it brings no additional network parameters. ### The relationship between convexity of variable coefficients and learning Understanding the learning process of neural networks is a huge challenge, but there have been some efforts in this direction: deep neural networks usually fit the target function from low frequency to high frequency (the "frequency principle" [77]); in the early stage of training, the input weights (weights and biases) of hidden neurons condense into isolated directions (the "parameter condensation" phenomenon [88]). However, in the context of variable coefficients, the learning of neural networks may lead to disappointing results or even outright failure when faced with more difficult problems (high dimensionality, coexistence of multiple variable coefficients, higher-order derivatives). Although such a conclusion is regrettable, it seems reasonable within our understanding. In the numerical experiments with VC-PINN, we found that in addition to the above situations, some special variable coefficients also make the learning of neural networks extremely difficult. Figure 24. (Color online) The learning process of the vKdV equation under five different variable coefficients. The coefficients from the first row to the fifth row are (1) \(f(t)=t\); (2) \(f(t)=t^{2}\); (3) \(f(t)=t^{3}\); (4) \(f(t)=t^{4}\); (5) \(f(t)=\cos(t)\).
In order to analyze the reasons for the failure of neural network learning, we observe the evolution of the predicted variable coefficients during training. Fig. 24 displays the learning process of the vKdV equation under five different variable coefficients (including both successful and failed examples). The learning processes of the five examples in Fig. 24 share a common pattern: the neural network first learns the two endpoints of the variable coefficient, and then gradually learns the middle region. The soft constraints on the boundary values of the variable coefficient that we add to the loss function reasonably explain this learning behavior. Furthermore, learning under the quadratic and quartic polynomials fails: the neural network stagnates after pre-learning (learning the boundary constraints on the coefficient in the loss function). Large gradients do not seem to be a plausible explanation for the learning failures, since learning with the cubic polynomial coefficient is completely successful. This seems to imply that the learning of neural networks is hindered when facing strongly convex targets. Figure 25. (Color online) The learning process of the vKdV equation under quadratic and quartic polynomials. The coefficients from the first row to the sixth row are (1)-(4) \(f(t)=t^{2}\); (5)-(6) \(f(t)=t^{4}\). In (3) and (5), \(Loss_{c}\) adds the information of the intermediate point (\((0,0)\)). (4) and (6) add the first-order derivative information of the boundary (condition (2.14)). Curvature is a quantitative description of the convexity of a curve, and the curvature of the curve \(f(t)\) is \[K\big|_{t=t_{0}}=\left.\frac{\left|f^{\prime\prime}\right|}{\left(1+f^{\prime 2}\right)^{\frac{3}{2}}}\right|_{t=t_{0}}, \tag{5.6}\] where \(K|_{t=t_{0}}\) is the curvature of the curve \(f(t)\) at the point \(t=t_{0}\). According to the curvature formula, the curvature of the first-degree polynomial coefficient in Fig. 24 is \(0\), and the curvature of the third-degree polynomial coefficient changes sign at the point \(t=0\) (meaning that the concavity-convexity changes). The curvature of the quadratic polynomial is largest at \(t=0\) (\(K|_{t=0}=2\)), and for the quartic polynomial the second derivative \(f^{\prime\prime}=12t^{2}\) never changes sign; in both cases the curvature keeps one sign on the whole interval (the concavity-convexity does not change). This suggests that the conclusion that convexity hinders learning is reasonable. In order to further explore the reasons for the learning failures, we apply different adjustment strategies to the previous examples with quadratic and quartic polynomial coefficients. The details are as follows: (1) reduce the time interval on one side (from \([-4,4]\) to \([-4,2]\)); (2) reduce the time interval on both sides (from \([-4,4]\) to \([-2,2]\)); (3) and (5): \(Loss_{c}\) adds the information of the intermediate point (\((0,0)\)); (4) and (6): add the first-order derivative information of the boundary of the coefficient (condition (2.14)). Both (1) and (2) in Fig. 25 do not change the curve, but only shorten the time range, and the point \(t=0\) is still included in the interval. But such an adjustment enables the coefficients to be successfully learned, which shows that what affects the learning is the convexity over the entire interval rather than the convexity at a certain point.
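One crude way to make this cumulative effect concrete is to measure how far each coefficient curve deviates from the straight line through its two endpoint constraints, which is roughly where the network sits after pre-learning. The following NumPy check is our own illustrative construction, not part of the original experiments; it shows that the deviations for \(t^{2}\) and \(t^{4}\) are large and one-sided, while those for \(t\), \(t^{3}\) and \(\cos(t)\) are zero or two-sided:

```python
import numpy as np

t = np.linspace(-4.0, 4.0, 1001)
coeffs = {"t": t, "t^2": t**2, "t^3": t**3, "t^4": t**4, "cos(t)": np.cos(t)}

for name, f in coeffs.items():
    # Straight line (chord) through the two endpoint values of f.
    chord = f[0] + (f[-1] - f[0]) * (t - t[0]) / (t[-1] - t[0])
    gap = f - chord
    one_sided = gap.min() > -1e-9 or gap.max() < 1e-9
    print(f"{name:7s} max|gap| = {np.abs(gap).max():7.2f}  one-sided: {one_sided}")
```

The two coefficients for which learning fails (\(t^{2}\) and \(t^{4}\)) are exactly those with large one-sided gaps.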
The accumulation of convexity over the entire interval leaves the coefficient curve after pre-learning far from the real coefficient curve, which may lead to learning failure. The two adjustment strategies involved in (3)-(6) of Fig. 25 provide variable coefficient information, so that the pre-learning curve can be closer to the real coefficient curve, thereby avoiding the problems caused by the accumulation of convexity. The information provided by the two strategies is only a small amount; for example, the intermediate-point strategy adds the information of a single point, yet turns learning from failure into success. Of course, this strategy is not mathematically tenable, and it is more suitable for the intermediate-state problems common in industrial applications, described later in Section 5.4.1. The reason we mention such a strategy here is to show that the gap between the pre-learning curve and the true coefficient curve is important and seems to play a decisive role in the success or failure of learning. Learning with quadratic and quartic polynomial coefficients was successful under all tuning strategies (although the learning in Fig. 25(4) is still slightly flawed). By analyzing the learning process under different coefficients and several adjustment strategies, the following empirical conclusions about learning can be drawn: * The convexity of the variable coefficient hinders the learning of the neural network, where convexity refers to the cumulative effect over the entire interval. The cumulative effect of convexity leaves a large gap between the pre-learning curve and the real coefficient curve, which leads to the failure of learning. * Coefficients whose concavity-convexity changes, such as cubic polynomials and cosine functions, are relatively easier to learn, because their pre-learning curves are distributed on both sides of the real coefficient curve, rather than on one side (as for quadratic and quartic polynomial coefficients). * Strategies such as narrowing the interval, adding internal data points, and adding boundary derivative information can promote learning by bringing the pre-training curve and the real coefficient curve closer together. Although this discussion of the learning process of neural networks is not fully theoretical, it explains the inseparable relationship between convexity and learning. We believe that a more in-depth discussion will be part of our future work. Finally, the detailed model setup and results of the numerical experiments in this section are presented in Appendix C.3. ### Anti-noise analysis Data resources are generally expensive, and obtaining clean (noise-free) data is impossible in the real world because measurement errors are always inevitable. Therefore, the proposed method must have a certain anti-noise ability before it can really be applied to practical mathematical physics problems. The main purpose of this section is to test the anti-noise ability of VC-PINN, again taking the vKdV equation of Section 4.1 as the example. In the experiments, the \(\alpha\%\) noise added to the clean data follows a Gaussian distribution with zero mean, whose standard deviation is determined by the standard deviation of the training data and by \(\alpha\). We used 8 sets of data with noise levels ranging from \(0\%\) to \(5\%\), and tested the forward and inverse problems (double ResNet structure) within the VC-PINN framework under linear and nonlinear coefficients.
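A minimal sketch of this noise model (NumPy assumed; the function and variable names are ours):

```python
import numpy as np

def add_noise(u_clean, alpha, seed=0):
    """Add alpha-percent zero-mean Gaussian noise; the standard deviation
    is alpha/100 times the standard deviation of the clean training data."""
    rng = np.random.default_rng(seed)
    sigma = (alpha / 100.0) * np.std(u_clean)
    return u_clean + rng.normal(0.0, sigma, size=u_clean.shape)
```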
The results in Fig. 26 show that as the signal-to-noise ratio decreases, the relative error of the results increases. In addition, the anti-noise ability of the inverse problem is significantly better than that of the forward problem: regardless of whether the coefficient is linear or nonlinear, even at a noise level of \(5\%\), the relative error of the inverse problem can still be maintained at the \(10^{-2}\) level, whereas the forward problem already fails to reach this error level at noise percentages of \(1.5\%\)-\(2\%\). Moreover, for the forward problem, the influence of noise on the linear coefficient seems to be greater than on the nonlinear coefficient. In general, the proposed method has a certain ability to resist noise, but its performance on the forward problem needs to be improved. Finally, it should be noted that the above anti-noise tests are repeated over 10 groups of random seeds. More detailed model settings and experimental results are presented in Appendix C.4.

### The unity of forward and inverse problems/relationship with standard PINN

This section discusses the proposed VC-PINN from a framework perspective, covering the unity of forward and inverse problems and the relationship with standard PINN.

#### 5.4.1 The unity of forward and inverse problems

In the numerical experiments testing the performance of VC-PINN, we find that the proposed framework is uniform in solving forward and inverse problems with variable coefficients, a unity that constant coefficient equations do not possess. We analyze this from the perspective of the loss function. First, reviewing the method introduction in Section 2.3 and Section 2.4, the loss functions for the forward problem and the inverse problem are

\[Forward:\quad Loss(\theta)=Loss_{I}(\theta)+Loss_{b}(\theta)+Loss_{f}(\theta)+Loss_{c}(\theta), \tag{5.7}\]
\[Inverse:\quad Loss(\theta)=Loss_{s}(\theta)+Loss_{f}(\theta)+Loss_{c}(\theta). \tag{5.8}\]

For the initial value loss \(Loss_{I}(\theta)\) and boundary loss \(Loss_{b}(\theta)\) in the forward problem, if we ignore their actual mathematical meaning and consider them only from the perspective of numerical computation, they can be merged into a single loss term, namely

\[Loss_{I}(\theta)+Loss_{b}(\theta)=Loss_{Ib}(\theta), \tag{5.9}\]

where the \(I\)-\(type\) points and \(b\)-\(type\) points involved are also merged and marked as \(Ib\)-\(type\) points. Thus, both \(Loss_{Ib}\) in the forward problem and \(Loss_{s}\) in the inverse problem represent the mean square error between the network output and the true solution \(u\). Therefore, the loss functions of the forward problem and the inverse problem in the variable coefficient setting are completely consistent in form. They differ only in the distribution of the points involved in \(Loss_{Ib}/Loss_{s}\) and \(Loss_{c}\) (\(Loss_{f}\) is exactly the same in both problems). In the forward problem, the \(Ib\)-\(type\) points involved in \(Loss_{Ib}\) are distributed on the initial-boundary value region of \(u\), and the \(c\)-\(type\) points involved in \(Loss_{c}\) are distributed over the entire interval \([T_{0},T_{1}]\). In the inverse problem, the \(s\)-\(type\) points involved in \(Loss_{s}\) are distributed over the entire region of \(u\), and the \(c\)-\(type\) points involved in \(Loss_{c}\) are located at the two endpoints of the interval \([T_{0},T_{1}]\).
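The formal identity of (5.7)-(5.9) can be made concrete with a small PyTorch-style sketch (our own illustration; the networks, point sets, and the residual routine `pde_residual` are hypothetical stand-ins): the forward and inverse losses share the same three ingredients and differ only in where the \(u\)-data and coefficient data live.

```python
import torch

def mse(pred, target):
    return torch.mean((pred - target) ** 2)

def total_loss(u_net, c_net, data_pts, u_data, coef_t, c_data, colloc_pts, pde_residual):
    """Shared loss form for both problems, cf. (5.7)-(5.9).

    Forward problem: (data_pts, u_data) are the Ib-type points on the
    initial-boundary region and coef_t covers the whole interval [T0, T1].
    Inverse problem: (data_pts, u_data) are the s-type points in the whole
    region and coef_t holds only the two endpoints of [T0, T1]."""
    loss_u = mse(u_net(data_pts), u_data)     # Loss_Ib (forward) or Loss_s (inverse)
    loss_c = mse(c_net(coef_t), c_data)       # Loss_c
    loss_f = torch.mean(pde_residual(u_net, c_net, colloc_pts) ** 2)  # Loss_f
    return loss_u + loss_c + loss_f
```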
A general summary is that, compared with the forward problem, the inverse problem provides more information about \(u\) but less information about the coefficient \(c_{1}\). Therefore, at the data level, the forward problem and the inverse problem are unified under the variable coefficient setting, and there is no essential difference in the code implementation. However, such unity does not exist for the constant coefficient equations that PINN usually deals with, because the coefficients there are a set of fixed constants rather than functions. For constants, there are only two states, fully known and completely unknown, so the standard PINN does not even contain a mean square error loss term for the coefficients, let alone the unity of the forward and inverse problems. Referring to the graphical representation in [35], Fig. 27 shows the relationship between the type of problem and the information content of \(u(x,t)\) and \(c_{1}(t)\).

Figure 26. (Color online) Under different proportions of noise, the performance of VC-PINN with the double ResNet structure (including forward and inverse problems). The solid line represents the mean error, and the semi-transparent area represents the error range of repeated experiments. (a) Error comparison under linear coefficients. (b) Error comparison under nonlinear coefficients.

Figure 27. (Color online) The relationship between the type of problem and the information content of \(u(x,t)\) and \(c_{1}(t)\).

Finally, it is worth mentioning that if we do not insist on the strict definition of forward or inverse problems in the mathematical sense, the scenario represented by the middle area in Fig. 27 may be the problem faced in industrial applications. That is, there is some information about \(u\) and some information about the variable coefficient \(c_{1}\), which is an intermediate state between the forward problem and the inverse problem.

#### 5.4.2. Relationship with standard PINN

Before analyzing the relationship between VC-PINN and standard PINN, we first discuss an example where the independent variable of the coefficient is two-dimensional. If the variable coefficient in equation (2.1) is also related to \(x\), that is, \(\mathbf{C}[t]\) becomes a function of \(x\) and \(t\), we can test the performance of VC-PINN in this case. Consider the vSG equation discussed in Section 3.1 and rewrite it as

\[V_{xt}=M(x,t)\sin(V), \tag{5.10}\]

where \(V=V(x,t)\) and \(M(x,t)\) is a variable coefficient. The solution of equation (5.10) is given in [79] using the self-similar method. Specifically, suppose the solution of equation (5.10) satisfies

\[V(x,t)=u(X,T), \tag{5.11}\]

where \(X=X(x)\) and \(T=T(t)\) are two coordinate transformations. If we consider special coefficients of the form \(M(x,t)=\frac{dX}{dx}\frac{dT}{dt}\), then

\[u_{XT}=\sin(u), \tag{5.12}\]

which means that \(u(X,T)\) satisfies the constant coefficient SG equation, and the solution of equation (5.10) can be derived naturally from the solution of the constant coefficient SG equation, as follows:

\[V(x,t)=4\arctan\left(\lambda e^{aX(x)+\frac{T(t)}{a}}\right), \tag{5.13}\]

where \(a\neq 0\) and \(\lambda\) are free parameters. In [79], the variable coefficients are taken as Chebyshev polynomials and a series of exact solutions are obtained, but here we only consider two simple cases and use them to test the performance of VC-PINN on the inverse problem when the coefficients are two-dimensional. Specifically (\(\lambda=a=1\)):

* **Case 1:** Select \(X(x)=\frac{x^{2}}{2}\), \(T(t)=\frac{t^{2}}{2}\); the corresponding \(M(x,t)=xt\).
* **Case 2:** Select \(X(x)=\sin(2x)\), \(T(t)=\sin(2t)\); the corresponding \(M(x,t)=4\cos(2x)\cos(2t)\).

A small symbolic check of this construction is sketched below.
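As a sanity check (our own illustration, not part of the paper's experiments), the following SymPy sketch verifies numerically that (5.13) with \(\lambda=a=1\) and the Case 1 transformations satisfies (5.10).

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

# Case 1 coordinate transformations and the induced coefficient M = X'(x) T'(t).
X = x**2 / 2
T = t**2 / 2
M = sp.diff(X, x) * sp.diff(T, t)          # = x*t

# Self-similar solution (5.13) with lambda = a = 1.
V = 4 * sp.atan(sp.exp(X + T))

# Residual of V_{xt} = M sin(V), evaluated at a few sample points.
residual = sp.diff(V, x, t) - M * sp.sin(V)
res_fn = sp.lambdify((x, t), residual)
print(max(abs(res_fn(a, b)) for a, b in [(0.3, -0.7), (1.2, 0.5), (-2.0, 1.5)]))
# prints a value at machine-precision zero
```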
Fig. 28 shows the dynamic behavior of \(V(x,t)\) and the coefficient \(M(x,t)\) predicted by the network in these two cases. The \(L^{2}\) relative errors \(e_{c}^{r}\) of the coefficients in the two cases are \(6.86\times 10^{-3}\) and \(2.32\times 10^{-2}\), respectively. Therefore, the VC-PINN method is also applicable to the case of high-dimensional coefficients. The specific settings are shown in Appendix C.5.

Next, back to the topic of this section. In addition to the ResNet structure, the difference between VC-PINN and the standard PINN is the addition of a branch network responsible for approximating the variable coefficients. For the above situation, where the coefficient dimension equals the equation dimension, it is clearly also feasible to use a standard PINN with only one network: it suffices to make the output of the network two-dimensional, representing \(V(x,t)\) and \(M(x,t)\) respectively. Therefore, the relationship between VC-PINN and the standard PINN can be summarized as follows:

* A constant coefficient can be regarded as a dimensionality reduction of a variable coefficient, which is what the standard PINN is good at. Of course, VC-PINN also handles constant coefficients well, as in the inverse problem of Case 1 in Section 4.3.
* When the variable coefficient dimension is inconsistent with the equation dimension (such as \(u(x,t)\) and \(c_{1}(t)\)), VC-PINN can show its strength, because the added branch network is equivalent to imposing a hard constraint on the coefficient \(c_{1}(t)\), which ensures that the coefficients at different spatial positions are equal at the same time. It is difficult for the standard PINN to handle such situations, unless the unsatisfactory strategy of soft constraints is used.
* If the dimension of the variable coefficient increases until it equals the dimension of the equation (such as \(u(x,t)\) and \(c_{1}(x,t)\)), then VC-PINN is no longer the only choice: the problem can also be regarded as a coupled system composed of \(u(x,t)\) and \(c_{1}(x,t)\), and it is reasonable to use a standard PINN. However, the use of ResNet in the standard PINN is not as flexible as in VC-PINN (the trunk network and branch network can use different ResNets).

In general, VC-PINN is closely related to the standard PINN, but the proposal of VC-PINN is necessary, and it makes up for the deficiency of PINN when the dimension of the variable coefficient and the dimension of the equation are different.

## 6. Conclusion

Compared with the constant coefficient model, the variable coefficient model is a more realistic description of nature, because it can describe phenomena such as inhomogeneous media, non-constant physical quantities, and variable external forces. However, in the variable coefficient setting, it is difficult for the standard PINN to handle the case where the independent variable dimension of the coefficient function differs from the equation dimension. In view of the importance of the variable coefficient model, this paper proposes a deep learning method, VC-PINN, which specifically deals with the forward and inverse problems of variable coefficient PDEs.
It adds a branch network responsible for approximating the variable coefficients on the basis of the standard PINN, which imposes hard constraints on the variable coefficients and avoids the problem of mismatched dimensions between coefficients and equations. In addition, the ResNet structure without additional parameters is introduced into VC-PINN, which unifies linear and nonlinear coefficients while alleviating gradient vanishing.

In this paper, the proposed VC-PINN is applied to four equations including vSG, gvKP, vKdV, and gvSK. The exact solutions of these four integrable variable coefficient equations can be obtained by generalizing classical integrable methods. This provides exact samples (rather than high-precision numerical samples) for testing the performance of VC-PINN. VC-PINN has achieved success in forward and inverse problems with different forms of variable coefficients (polynomials, trigonometric functions, fractions, oscillation attenuation coefficients), high dimensions, and the coexistence of multiple variable coefficients. It learns the dynamic behavior of the solution \(u\) as well as the full variation of the variable coefficients with satisfactory accuracy.

Figure 28. Prediction results of \(V(x,t)\) and coefficient \(M(x,t)\) under Case 1 and Case 2. (The first row is Case 1, and the second row is Case 2.)

In the numerical experiments, we also draw some empirical conclusions: 1. In the forward problem, large gradients and high wave heights of the solution \(u\) are strongly correlated with the error; 2. In the inverse problem, the error mainly comes from regions where the coefficient is large and regions where the coefficient changes drastically; 3. In the inverse problem, high-frequency oscillation of the coefficient also causes high-frequency oscillation of the error.

In the analysis and discussion, we conducted an in-depth analysis of VC-PINN combining theory and numerical experiments, covering four aspects: the necessity of ResNet; the relationship between the convexity of variable coefficients and learning; anti-noise analysis; and the unity of forward and inverse problems/relationship with standard PINN. When discussing the necessity of ResNet, we derived the backpropagation formula of the gradient in VC-PINN with the ResNet structure, explaining how shortcut connections alleviate gradient vanishing. In addition, we found that in the case of linear coefficients, the accuracy of an ordinary FNN decreases as the network depth increases (the network degradation problem), which is completely opposite to the phenomenon in the nonlinear case. Comparative experiments show that the ResNet structure reverses this phenomenon, thus unifying the linear and nonlinear coefficients. When analyzing the learning process, we found that the cumulative convexity of variable coefficients can seriously hinder the learning of the neural network, and that strategies which bring the pre-learning curve closer to the real coefficient curve promote learning. Anti-noise experiments show that the proposed method has a certain anti-noise ability, but the performance on the forward problem needs to be improved. Then, from the perspective of the loss function and the data, we found that the forward and inverse problems of VC-PINN in the variable coefficient setting are unified, differing only in the amounts of data on \(u\) and \(c_{1}\).
Finally, the close relationship between VC-PINN and the standard PINN is explored through a two-dimensional variable coefficient example (vSG). In general, the proposal of VC-PINN is necessary, and it fills the gap of PINN in variable coefficient problems. Of course, the examples involved in the numerical experiments of this article are relatively standard models and simple cases, and the performance of VC-PINN on more complex coefficients and more complex equations (coupled equations, equations over the complex field) is worth expecting. The capabilities of VC-PINN demonstrated in this paper are just the tip of the iceberg, and transferable modular techniques can be added to VC-PINN in the future. In addition, it would be very meaningful to use neural operator networks to learn the mapping between solutions and variable coefficients, which will be our future work.

## Acknowledgements

The project is supported by the National Natural Science Foundation of China (No. 12175069 and No. 12235007), Science and Technology Commission of Shanghai Municipality (No. 21JC1402500 and No. 22DZ2229014), and Natural Science Foundation of Shanghai (No. 23ZR1418100).

## Appendix A Symbol Description

The following table gives a description of the symbols involved in the appendix.

## Appendix B Model parameters and results of numerical experiments

### Model parameters and results of the forward problem of the vSG equation

The symbols \(p1\), \(p2\), and \(cos\) in the following tables represent the cases of first-degree polynomial, quadratic polynomial, and trigonometric function coefficients in Section 3.1, respectively.

\begin{table} \begin{tabular}{c||c|c} \hline \hline & \([T_{0},T_{1}]\) & \(\Omega\) \\ \hline \hline \(p1\) (\(k_{1}=1\)) & \([-5,5]\) & \([-5,5]\) \\ \(p1\) (\(k_{1}=-1\)) & \([-5,5]\) & \([-5,5]\) \\ \(p2\) & \([-3,3]\) & \([-5,5]\) \\ \(cos\) & \([-3,5]\) & \([-8,8]\) \\ \hline \hline \end{tabular} \end{table} Table 6: Space-time interval for the forward problem of the vSG equation

\begin{table} \begin{tabular}{c||c|c|c|c|c|c} \hline \hline & \(e_{u}^{r}\) & \(e_{c}^{r}\) & \(e_{c}^{a}\) & \(T_{train}\) & \(Iter_{L}\) & \(Loss\) \\ \hline \hline \(p1\) (\(k_{1}=1\)) & \(5.23\times 10^{-4}\) & \(7.66\times 10^{-5}\) & \(1.92\times 10^{-4}\) & 260.06s & 1446 & \(1.61\times 10^{-6}\) \\ \(p1\) (\(k_{1}=-1\)) & \(1.52\times 10^{-4}\) & \(1.66\times 10^{-4}\) & \(4.11\times 10^{-4}\) & 327.59s & 2311 & \(5.37\times 10^{-7}\) \\ \(p2\) & \(9.33\times 10^{-5}\) & \(3.98\times 10^{-5}\) & \(9.55\times 10^{-5}\) & 455.13s & 4993 & \(5.28\times 10^{-7}\) \\ \(cos\) & \(6.20\times 10^{-4}\) & \(2.06\times 10^{-3}\) & \(1.73\times 10^{-3}\) & 548.05s & 6999 & \(9.84\times 10^{-7}\) \\ \hline \hline \end{tabular} \end{table} Table 8: Model results for the forward problem of the vSG equation
### Model parameters and results of the forward problem of the gvKP equation

The notations Case 1 to Case 4 here correspond to the four situations in Section 3.2.

### Model parameters and results for the inverse problem of the vKdV equation (multiple coefficients)

The symbols \(p1\), \(p3\) and \(cos\) in the following tables represent the cases of first-degree polynomial, cubic polynomial, and trigonometric function coefficients in Section 4.1.2, respectively, and Case 1 and Case 2 correspond to the situations in Section 4.1.2.

### Model parameters and results for the inverse problem of the gvKP equation (multiple coefficients)

The notations Case 1 to Case 4 here correspond to the four situations in Section 3.2.

## Appendix C Supplement to analysis and discussion

### Derivation of gradient propagation

First, some readily established formulas needed in the derivation are given, as follows:

\[R^{[i+1]}=\mathcal{L}_{i}(R^{[i]})+\mathcal{K}R^{[i]},\;i=0,1,...,N_{B},\]
\[Loss_{s}(\theta)=\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}|\tilde{u}(\mathbf{x}_{s}^{i},t_{s}^{i};\theta_{u})-u_{s}^{i}|^{2},\]
\[Loss_{f}(\theta)=\frac{1}{n_{f}}\sum_{i=1}^{n_{f}}|f(\mathbf{x}_{f}^{i},t_{f}^{i};\tilde{u}(\mathbf{x},t;\theta_{u}),\tilde{c}(t;\theta_{c}))|^{2}.\]

The gradient of the loss function \(Loss_{s}(\theta)\) with respect to \(X^{[j]},\,j\geq 1\), is computed as follows. (The gradient calculations in this section are based on a single sample.)

* If \(X^{[j]}\) is an output inside a residual block, i.e. \(iN_{h}+1\leq j<(i+1)N_{h}+1\) for some \(i=0,1,...,N_{B}-1\), then we have
\[\frac{\partial Loss_{s}}{\partial X^{[j]}}=\frac{\partial Loss_{s}}{\partial R^{[i+1]}}\cdot\frac{\partial R^{[i+1]}}{\partial X^{[j]}}=\frac{\partial Loss_{s}}{\partial R^{[N_{B}]}}\cdot\left[\prod_{k=i+1}^{N_{B}-1}\frac{\partial R^{[k+1]}}{\partial R^{[k]}}\right]\cdot\frac{\partial R^{[i+1]}}{\partial X^{[j]}}\]
\[=\frac{\partial Loss_{s}}{\partial R^{[N_{B}]}}\cdot\prod_{k=i+1}^{N_{B}-1}\left[\mathcal{K}+\frac{\partial\mathcal{L}_{k}(R^{[k]})}{\partial R^{[k]}}\right]\cdot\frac{\partial X^{[(i+1)N_{h}+1]}}{\partial X^{[j]}}\]
\[=\frac{\partial Loss_{s}}{\partial R^{[N_{B}]}}\cdot\prod_{k=j}^{(i+1)N_{h}}\frac{\partial X^{[k+1]}}{\partial X^{[k]}}\cdot\prod_{k=i+1}^{N_{B}-1}\left[\mathcal{K}+\frac{\partial\mathcal{L}_{k}(R^{[k]})}{\partial R^{[k]}}\right].\]
* In particular, if \(X^{[j]}\) happens to be the output/input of a residual block, say \(j=iN_{h}+1,\,i=0,1,...,N_{B}\), then the above formula simplifies to
\[\frac{\partial Loss_{s}}{\partial X^{[j]}}=\frac{\partial Loss_{s}}{\partial R^{[N_{B}]}}\cdot\prod_{k=i}^{N_{B}-1}\left[\mathcal{K}+\frac{\partial\mathcal{L}_{k}(R^{[k]})}{\partial R^{[k]}}\right].\]
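The residual recursion \(R^{[i+1]}=\mathcal{L}_{i}(R^{[i]})+\mathcal{K}R^{[i]}\) used above can be sketched in code as follows (a minimal PyTorch illustration with hypothetical layer sizes, added here for the reader). The identity shortcut \(\mathcal{K}\) is exactly what contributes the \(\mathcal{K}+\partial\mathcal{L}_{k}/\partial R^{[k]}\) factors in the products above and keeps them away from zero as the depth grows.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """One residual block: R_{i+1} = L_i(R_i) + K R_i with a parameter-free shortcut."""
    def __init__(self, width, n_h):
        super().__init__()
        # L_i: n_h fully connected layers with tanh activations.
        self.body = nn.Sequential(*[m for _ in range(n_h)
                                    for m in (nn.Linear(width, width), nn.Tanh())])

    def forward(self, r):
        return self.body(r) + r   # K is the identity here (no extra parameters)

# A toy trunk network of N_B = 3 blocks; gradients flow through the "+ r" shortcuts.
trunk = nn.Sequential(nn.Linear(2, 40), *[ResBlock(40, 2) for _ in range(3)], nn.Linear(40, 1))
print(trunk(torch.randn(5, 2)).shape)  # torch.Size([5, 1])
```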
\begin{table} \begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline & \(Seed\) & \(N_{Adam}\) & \(nd_{c}\) & \(n_{x}\) & \(n_{y}\) & \(n_{t}\) & \(N_{d}^{u}\) & \(N_{B}^{u}\) & \(N_{h}^{u}\) & \(N_{d}^{c}\) & \(N_{B}^{c}\) & \(N_{h}^{c}\) & \(n_{s}\) & \(n_{f}\) \\ \hline \hline All & 6666 & 5000 & 500 & 100 & 100 & 100 & 40 & 3 & 3 & 30 & 3 & 2 & 20000 & 50000 \\ \hline \hline \end{tabular} \end{table} Table 22. Model parameters for the inverse problem of the gvKP equation (multiple coefficients)

\begin{table} \begin{tabular}{c||c|c|c|c|c|c|c} \hline \hline & \(e_{u}^{r}\) & \(e_{c}^{r}(f(t))\) & \(e_{c}^{r}(g(t))\) & \(e_{c}^{r}(l(t))\) & \(T_{train}\) & \(Iter_{L}\) & \(Loss\) \\ \hline \hline Case 1 & \(3.96\times 10^{-4}\) & \(2.98\times 10^{-3}\) & \(2.06\times 10^{-3}\) & \(5.94\times 10^{-3}\) & 7037.44s & 9254 & \(9.93\times 10^{-7}\) \\ Case 2 & \(3.47\times 10^{-4}\) & \(1.95\times 10^{-3}\) & \(4.01\times 10^{-3}\) & \(2.10\times 10^{-3}\) & 8504.51s & 12091 & \(9.13\times 10^{-7}\) \\ Case 3 & \(1.05\times 10^{-3}\) & \(2.45\times 10^{-2}\) & \(1.10\times 10^{-2}\) & \(6.98\times 10^{-3}\) & 3314.94s & 1700 & \(1.25\times 10^{-6}\) \\ Case 4 & \(7.84\times 10^{-4}\) & \(1.29\times 10^{-2}\) & \(3.98\times 10^{-3}\) & \(4.42\times 10^{-3}\) & 3335.84s & 1737 & \(9.56\times 10^{-7}\) \\ \hline \hline \end{tabular} \end{table} Table 23. Model results for the inverse problem of the gvKP equation (multiple coefficients)

The gradient calculation formulas of the loss terms \(Loss_{I}(\theta)\), \(Loss_{b}(\theta)\) and \(Loss_{c}(\theta)\) involved in the forward and inverse problems are similar to the above, while the loss term \(Loss_{f}(\theta)\) is different. The gradients of the above loss functions only involve the trunk network, but \(Loss_{f}(\theta)\) involves both the trunk network and the branch network. Therefore, in the subsequent derivation, \(X^{[i]}\) and \(R^{[i]}\) are subscripted with \(u\) or \(c\) to distinguish them; in addition, \(D\), \(N_{B}\), \(N_{h}\) and \(\mathcal{K}\) are superscripted accordingly. Suppose the equation has the simplified form

\[u_{t}=c_{1}(t)\mathcal{N}[u].\]

The gradient of \(Loss_{f}(\theta)\) with respect to the weights \(W_{u}^{[j]}\) and \(W_{c}^{[j]}\) of the trunk network and the branch network:

* Take the gradient contribution of the term \((\tilde{u})_{t}\) in \(f\) to the weight \(W_{u}^{[j]}\) as an example (assuming \(iN_{h}^{u}+1\leq j<(i+1)N_{h}^{u}+1\), \(i=0,1,...,N_{B}^{u}-1\)):
\[\frac{\partial Loss_{f}}{\partial W_{u}^{[j]}}=\frac{\partial Loss_{f}}{\partial f}\cdot\left[\frac{\partial f}{\partial(\tilde{u})_{t}}\cdot\frac{\partial(\tilde{u})_{t}}{\partial W_{u}^{[j]}}+c_{1}\frac{\partial f}{\partial\mathcal{N}[\tilde{u}]}\cdot\frac{\partial\mathcal{N}[\tilde{u}]}{\partial W_{u}^{[j]}}\right],\]
\[\frac{\partial[(\tilde{u})_{t}]}{\partial W_{u}^{[j]}}=\frac{\partial}{\partial W_{u}^{[j]}}\left(\frac{\partial X_{u}^{[D^{u}]}}{\partial X_{u}^{[D^{u}-1]}}\cdot\prod_{k=0}^{N_{B}^{u}-1}\frac{\partial R_{u}^{[k+1]}}{\partial R_{u}^{[k]}}\cdot\frac{\partial R_{u}^{[0]}}{\partial t}\right)\]
\[=\frac{\partial X_{u}^{[D^{u}]}}{\partial X_{u}^{[D^{u}-1]}}\cdot\prod_{\begin{subarray}{c}0\leq k\leq N_{B}^{u}-1\\ k\neq i\end{subarray}}\left[\frac{\partial R_{u}^{[k+1]}}{\partial R_{u}^{[k]}}\right]\cdot\frac{\partial R_{u}^{[0]}}{\partial t}\cdot\frac{\partial}{\partial W_{u}^{[j]}}\left(\frac{\partial R_{u}^{[i+1]}}{\partial R_{u}^{[i]}}\right)\]
\[=W_{u}^{[D^{u}-1]}\cdot\frac{\partial R_{u}^{[0]}}{\partial t}\cdot\prod_{\begin{subarray}{c}0\leq k\leq N_{B}^{u}-1\\ k\neq i\end{subarray}}\left[\mathcal{K}^{u}+\frac{\partial\mathcal{L}_{k}(R_{u}^{[k]})}{\partial R_{u}^{[k]}}\right]\cdot\frac{\partial}{\partial W_{u}^{[j]}}\left[\mathcal{K}^{u}+\frac{\partial\mathcal{L}_{i}(R_{u}^{[i]})}{\partial R_{u}^{[i]}}\right].\]
* The gradient of the loss term \(Loss_{f}(\theta)\) with respect to the weight \(W_{c}^{[j]}\) of the branch network (assuming \(iN_{h}^{c}+1\leq j+1<(i+1)N_{h}^{c}+1\), \(i=0,1,...,N_{B}^{c}-1\)):
\[\frac{\partial Loss_{f}}{\partial W_{c}^{[j]}}=\frac{\partial Loss_{f}}{\partial f}\cdot\frac{\partial f}{\partial c_{1}}\cdot\frac{\partial c_{1}}{\partial W_{c}^{[j]}}=\frac{\partial Loss_{f}}{\partial f}\cdot\mathcal{N}[\tilde{u}]\cdot\frac{\partial X_{c}^{[D^{c}]}}{\partial R_{c}^{[N_{B}^{c}]}}\cdot\frac{\partial R_{c}^{[N_{B}^{c}]}}{\partial W_{c}^{[j]}}\]
\[=\frac{\partial Loss_{f}}{\partial f}\cdot\mathcal{N}[\tilde{u}]\cdot W_{c}^{[D^{c}-1]}\cdot\frac{\partial R_{c}^{[N_{B}^{c}]}}{\partial X_{c}^{[j+1]}}\cdot\frac{\partial X_{c}^{[j+1]}}{\partial W_{c}^{[j]}}\]
\[=\frac{\partial Loss_{f}}{\partial f}\cdot\frac{\partial X_{c}^{[j+1]}}{\partial W_{c}^{[j]}}\cdot\mathcal{N}[\tilde{u}]\cdot W_{c}^{[D^{c}-1]}\cdot\prod_{k=j+1}^{(i+1)N_{h}^{c}}\frac{\partial X_{c}^{[k+1]}}{\partial X_{c}^{[k]}}\cdot\prod_{k=i+1}^{N_{B}^{c}-1}\left[\mathcal{K}^{c}+\frac{\partial\mathcal{L}_{k}(R_{c}^{[k]})}{\partial R_{c}^{[k]}}\right].\]
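In practice these chain-rule products are realized by automatic differentiation rather than coded by hand. The following toy sketch (our own illustration; the network sizes and the stand-in operator \(\mathcal{N}[u]=u_{x}\) are assumptions) computes the gradients of \(Loss_{f}\) with respect to both trunk and branch parameters for a residual of the form \(f=\tilde{u}_{t}-\tilde{c}_{1}(t)\mathcal{N}[\tilde{u}]\).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
u_net = nn.Sequential(nn.Linear(2, 40), nn.Tanh(), nn.Linear(40, 1))  # trunk: (x,t) -> u
c_net = nn.Sequential(nn.Linear(1, 30), nn.Tanh(), nn.Linear(30, 1))  # branch: t -> c1

xt = torch.rand(128, 2, requires_grad=True)          # collocation points (x, t)
u = u_net(xt)
grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
u_x, u_t = grads[:, 0:1], grads[:, 1:2]

# Toy residual f = u_t - c1(t) * N[u], with N[u] = u_x as a stand-in operator.
f = u_t - c_net(xt[:, 1:2]) * u_x
loss_f = torch.mean(f ** 2)
loss_f.backward()   # autograd carries out the chain rule derived in this appendix
print(u_net[0].weight.grad.shape, c_net[0].weight.grad.shape)
```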
### Repeated experiments in the unity of linearity and nonlinearity

For the solution \(u_{1}^{(vKdV)}\) (linear/\(p1\)) and the solution \(u_{2}^{(vKdV)}\) (nonlinear/\(p3\)) of the vKdV equation in Section 4.1, the inverse problem is tested both with and without the ResNet structure, including results under different numbers of layers of the trunk network and branch network. Except for the random seeds, all other model settings are identical. The model settings are given below, along with results averaged over 5 different random seeds. The results of this appendix mainly provide data support for the discussion in Section 5.1.2.

\begin{table} \begin{tabular}{c||c|c|c|c|c|c|c|c} \hline \hline & \(N_{Adam}\) & \(nd_{c}\) & \(N_{d}^{u}\) & \(N_{h}^{u}\) & \(N_{d}^{c}\) & \(N_{h}^{c}\) & \(n_{s}\) & \(n_{f}\) \\ \hline \hline All & 5000 & 500 & 40 & 2 & 30 & 2 & 2000 & 20000 \\ \hline \hline \end{tabular} \end{table} Table 25. Model parameters for the inverse problem of the vKdV equation (single coefficient)

### Data support for the relationship between convexity and learning

Notations (1)-(5) or (1)-(6) correspond to the corresponding cases in Fig. 24 and Fig. 25 in Section 5.2, respectively.
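For reference, the repeated-seed protocol used throughout these appendices reduces to the following sketch (our own illustration; we assume the usual definition of the \(L^{2}\) relative error, \(e^{r}=\|\hat{y}-y\|_{2}/\|y\|_{2}\)).

```python
import numpy as np

def l2_relative_error(pred, true):
    """L2 relative error ||pred - true||_2 / ||true||_2 (assumed definition)."""
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

def repeated_experiment(run_once, seeds=(1000, 2000, 3000, 4000, 5000)):
    """run_once(seed) -> (pred, true); returns mean and spread over the seeds."""
    errs = [l2_relative_error(*run_once(s)) for s in seeds]
    return np.mean(errs), np.std(errs)

# Toy stand-in for a full VC-PINN training run.
def run_once(seed):
    rng = np.random.default_rng(seed)
    true = np.sin(np.linspace(-4, 4, 100))
    return true + 1e-3 * rng.normal(size=true.shape), true

print(repeated_experiment(run_once))
```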
\begin{table} \begin{tabular}{c||c|c|c|c|c|c} \hline \hline trunk\(\backslash\)branch & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \hline 7 & \(6.11\times 10^{-3}\) & \(1.32\times 10^{-3}\) & \(1.57\times 10^{-3}\) & \(2.18\times 10^{-3}\) & \(3.10\times 10^{-3}\) & \(2.09\times 10^{-3}\) \\ 8 & \(8.04\times 10^{-4}\) & \(5.26\times 10^{-4}\) & \(1.99\times 10^{-3}\) & \(3.60\times 10^{-3}\) & \(3.33\times 10^{-3}\) & \(2.88\times 10^{-3}\) \\ 9 & \(1.01\times 10^{-3}\) & \(9.67\times 10^{-4}\) & \(2.37\times 10^{-3}\) & \(3.80\times 10^{-3}\) & \(3.39\times 10^{-3}\) & \(2.95\times 10^{-3}\) \\ 10 & \(8.22\times 10^{-4}\) & \(6.55\times 10^{-4}\) & \(3.57\times 10^{-3}\) & \(3.26\times 10^{-3}\) & \(3.69\times 10^{-3}\) & \(3.48\times 10^{-3}\) \\ 11 & \(1.22\times 10^{-3}\) & \(1.15\times 10^{-3}\) & \(2.64\times 10^{-3}\) & \(3.42\times 10^{-3}\) & \(4.04\times 10^{-3}\) & \(3.36\times 10^{-3}\) \\ 12 & \(6.60\times 10^{-4}\) & \(7.29\times 10^{-4}\) & \(3.34\times 10^{-3}\) & \(3.81\times 10^{-3}\) & \(4.21\times 10^{-3}\) & \(4.03\times 10^{-3}\) \\ \hline \hline \end{tabular} \end{table} Table 26. The \(L^{2}\) relative error of linear coefficient (without ResNet)

\begin{table} \begin{tabular}{c||c|c|c|c|c|c} \hline \hline trunk\(\backslash\)branch & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \hline 7 & \(1.10\times 10^{-3}\) & \(6.38\times 10^{-4}\) & \(5.39\times 10^{-4}\) & \(4.57\times 10^{-4}\) & \(3.79\times 10^{-4}\) & \(1.12\times 10^{-3}\) \\ 8 & \(7.87\times 10^{-4}\) & \(5.57\times 10^{-4}\) & \(4.44\times 10^{-4}\) & \(4.84\times 10^{-4}\) & \(3.33\times 10^{-4}\) & \(4.14\times 10^{-4}\) \\ 9 & \(9.70\times 10^{-4}\) & \(9.40\times 10^{-4}\) & \(4.48\times 10^{-4}\) & \(4.23\times 10^{-4}\) & \(6.00\times 10^{-4}\) & \(4.53\times 10^{-4}\) \\ 10 & \(1.02\times 10^{-3}\) & \(8.77\times 10^{-4}\) & \(4.98\times 10^{-4}\) & \(5.19\times 10^{-4}\) & \(3.72\times 10^{-4}\) & \(5.79\times 10^{-4}\) \\ 11 & \(1.04\times 10^{-3}\) & \(1.15\times 10^{-3}\) & \(9.16\times 10^{-4}\) & \(8.95\times 10^{-4}\) & \(4.86\times 10^{-4}\) & \(3.34\times 10^{-4}\) \\ 12 & \(1.16\times 10^{-3}\) & \(1.12\times 10^{-3}\) & \(8.94\times 10^{-4}\) & \(7.11\times 10^{-4}\) & \(5.97\times 10^{-4}\) & \(5.37\times 10^{-4}\) \\ \hline \hline \end{tabular} \end{table} Table 27. The \(L^{2}\) relative error of nonlinear coefficient (without ResNet)

\begin{table} \begin{tabular}{c||c|c|c|c|c|c} \hline \hline trunk\(\backslash\)branch & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \hline 7 & \(1.62\times 10^{-3}\) & \(1.20\times 10^{-3}\) & \(1.17\times 10^{-3}\) & \(8.65\times 10^{-4}\) & \(1.63\times 10^{-3}\) & \(1.80\times 10^{-3}\) \\ 8 & \(1.57\times 10^{-3}\) & \(1.36\times 10^{-3}\) & \(1.12\times 10^{-3}\) & \(1.07\times 10^{-3}\) & \(1.29\times 10^{-3}\) & \(1.17\times 10^{-3}\) \\ 9 & \(1.71\times 10^{-3}\) & \(1.33\times 10^{-3}\) & \(1.23\times 10^{-3}\) & \(8.84\times 10^{-4}\) & \(9.80\times 10^{-4}\) & \(1.16\times 10^{-3}\) \\ 10 & \(1.57\times 10^{-3}\) & \(1.32\times 10^{-3}\) & \(9.54\times 10^{-4}\) & \(7.57\times 10^{-4}\) & \(1.24\times 10^{-3}\) & \(1.23\times 10^{-3}\) \\ 11 & \(1.68\times 10^{-3}\) & \(1.43\times 10^{-3}\) & \(1.01\times 10^{-3}\) & \(1.01\times 10^{-3}\) & \(1.42\times 10^{-3}\) & \(1.05\times 10^{-3}\) \\ 12 & \(2.12\times 10^{-3}\) & \(1.69\times 10^{-3}\) & \(8.20\times 10^{-4}\) & \(7.26\times 10^{-4}\) & \(9.55\times 10^{-4}\) & \(9.75\times 10^{-4}\) \\ \hline \hline \end{tabular} \end{table} Table 29. The \(L^{2}\) relative error of nonlinear coefficient (with ResNet)
### Data support in anti-noise test

The solution \(u_{1}^{(vKdV)}\) (linear/\(p1\)) and the solution \(u_{2}^{(vKdV)}\) (nonlinear/\(p3\)) of the vKdV equation in Section 4.1 are tested on the forward and inverse problems using the ResNet structure. The results of this appendix mainly provide data support for the discussion in Section 5.3.

\begin{table} \begin{tabular}{c||c|c|c|c|c|c} \hline \hline & \(e_{u}^{r}\) & \(e_{c}^{r}\) & \(e_{c}^{a}\) & \(T_{train}\) & \(Iter_{L}\) & \(Loss\) \\ \hline \hline (1) & \(3.78\times 10^{-4}\) & \(1.47\times 10^{-3}\) & \(2.71\times 10^{-3}\) & 494.16s & 1382 & \(9.89\times 10^{-7}\) \\ (2) & \(2.10\times 10^{-1}\) & \(1.62\times 10^{0}\) & \(1.06\times 10^{0}\) & 1549.39s & 14744 & \(8.03\times 10^{-2}\) \\ (3) & \(3.52\times 10^{-4}\) & \(2.77\times 10^{-2}\) & \(1.20\times 10^{-1}\) & 1235.97s & 10268 & \(1.12\times 10^{-6}\) \\ (4) & \(3.20\times 10^{-1}\) & \(2.51\times 10^{0}\) & \(1.28\times 10^{1}\) & 887.64s & 6203 & \(2.66\times 10^{-1}\) \\ (5) & \(2.28\times 10^{-4}\) & \(1.44\times 10^{-3}\) & \(8.48\times 10^{-4}\) & 597.61s & 2750 & \(5.72\times 10^{-7}\) \\ \hline \hline \end{tabular} \end{table} Table 33. Model results for the inverse problem of the vKdV equation (Fig. 24)

\begin{table} \begin{tabular}{c||c|c|c|c|c|c} \hline \hline & \(e_{u}^{r}\) & \(e_{c}^{r}\) & \(e_{c}^{a}\) & \(T_{train}\) & \(Iter_{L}\) & \(Loss\) \\ \hline \hline (1) & \(2.59\times 10^{-4}\) & \(3.80\times 10^{-2}\) & \(1.01\times 10^{-1}\) & 999.60s & 7418 & \(4.67\times 10^{-7}\) \\ (2) & \(1.92\times 10^{-4}\) & \(9.67\times 10^{-4}\) & \(1.46\times 10^{-3}\) & 767.36s & 4646 & \(5.02\times 10^{-7}\) \\ (3) & \(2.28\times 10^{-4}\) & \(1.55\times 10^{-2}\) & \(4.45\times 10^{-2}\) & 828.71s & 5605 & \(5.82\times 10^{-6}\) \\ (4) & \(7.07\times 10^{-4}\) & \(6.32\times 10^{-2}\) & \(2.05\times 10^{-1}\) & 1393.76s & 11443 & \(4.51\times 10^{-6}\) \\ (5) & \(5.43\times 10^{-4}\) & \(4.71\times 10^{-3}\) & \(1.30\times 10^{-2}\) & 1027.98s & 7895 & \(2.04\times 10^{-6}\) \\ (6) & \(1.26\times 10^{-3}\) & \(6.01\times 10^{-3}\) & \(2.19\times 10^{-2}\) & 1675.01s & 15209 & \(1.05\times 10^{-5}\) \\ \hline \hline \end{tabular} \end{table} Table 34. Model results for the inverse problem of the vKdV equation (Fig. 25)
\begin{table} \begin{tabular}{c||c|c|c|c|c|c|c|c} \hline \hline seed\(\backslash\)noise & 0\% & 0.5\% & 1.0\% & 1.5\% & 2.0\% & 2.5\% & 3.0\% & 5.0\% \\ \hline \hline 1000 & \(6.64\times 10^{-4}\) & \(2.37\times 10^{-3}\) & \(4.55\times 10^{-3}\) & \(6.61\times 10^{-3}\) & \(8.78\times 10^{-3}\) & \(1.13\times 10^{-2}\) & \(1.29\times 10^{-2}\) & \(2.13\times 10^{-2}\) \\ 2000 & \(1.14\times 10^{-3}\) & \(1.68\times 10^{-3}\) & \(3.03\times 10^{-3}\) & \(3.95\times 10^{-3}\) & \(4.99\times 10^{-3}\) & \(6.20\times 10^{-3}\) & \(7.51\times 10^{-3}\) & \(1.19\times 10^{-2}\) \\ 3000 & \(2.84\times 10^{-3}\) & \(5.38\times 10^{-3}\) & \(7.49\times 10^{-3}\) & \(1.08\times 10^{-2}\) & \(1.44\times 10^{-2}\) & \(1.67\times 10^{-2}\) & \(2.02\times 10^{-2}\) & \(3.36\times 10^{-2}\) \\ 4000 & \(6.03\times 10^{-4}\) & \(2.77\times 10^{-3}\) & \(5.37\times 10^{-3}\) & \(8.13\times 10^{-3}\) & \(1.15\times 10^{-2}\) & \(1.40\times 10^{-2}\) & \(1.65\times 10^{-2}\) & \(2.67\times 10^{-2}\) \\ 5000 & \(1.02\times 10^{-3}\) & \(1.58\times 10^{-3}\) & \(2.70\times 10^{-3}\) & \(3.92\times 10^{-3}\) & \(5.26\times 10^{-3}\) & \(6.40\times 10^{-3}\) & \(6.61\times 10^{-3}\) & \(1.21\times 10^{-2}\) \\ 6000 & \(6.88\times 10^{-4}\) & \(9.50\times 10^{-4}\) & \(1.63\times 10^{-3}\) & \(2.62\times 10^{-3}\) & \(3.43\times 10^{-3}\) & \(4.30\times 10^{-3}\) & \(5.07\times 10^{-3}\) & \(8.65\times 10^{-3}\) \\ 7000 & \(6.91\times 10^{-4}\) & \(1.78\times 10^{-3}\) & \(2.74\times 10^{-3}\) & \(4.51\times 10^{-3}\) & \(5.51\times 10^{-3}\) & \(6.66\times 10^{-3}\) & \(7.56\times 10^{-3}\) & \(1.30\times 10^{-2}\) \\ 8000 & \(1.01\times 10^{-3}\) & \(4.11\times 10^{-3}\) & \(8.26\times 10^{-3}\) & \(1.23\times 10^{-2}\) & \(1.65\times 10^{-2}\) & \(2.04\times 10^{-2}\) & \(2.42\times 10^{-2}\) & \(3.87\times 10^{-2}\) \\ 9000 & \(9.02\times 10^{-4}\) & \(3.62\times 10^{-3}\) & \(7.14\times 10^{-3}\) & \(1.07\times 10^{-2}\) & \(1.37\times 10^{-2}\) & \(1.75\times 10^{-2}\) & \(1.97\times 10^{-2}\) & \(3.28\times 10^{-2}\) \\ 10000 & \(8.11\times 10^{-4}\) & \(3.96\times 10^{-3}\) & \(8.24\times 10^{-3}\) & \(1.27\times 10^{-2}\) & \(1.74\times 10^{-2}\) & \(2.07\times 10^{-2}\) & \(2.59\times 10^{-2}\) & \(4.34\times 10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 39. Anti-noise test of forward problem with linear coefficients (\(L^{2}\) relative error)
\begin{table} \begin{tabular}{c||c|c|c|c|c|c|c|c} \hline \hline seed\(\backslash\)noise & 0\% & 0.5\% & 1.0\% & 1.5\% & 2.0\% & 2.5\% & 3.0\% & 5.0\% \\ \hline \hline 1000 & \(1.15\times 10^{-3}\) & \(2.64\times 10^{-3}\) & \(4.30\times 10^{-3}\) & \(6.41\times 10^{-3}\) & \(8.56\times 10^{-3}\) & \(1.04\times 10^{-2}\) & \(1.23\times 10^{-2}\) & \(2.11\times 10^{-2}\) \\ 2000 & \(2.50\times 10^{-4}\) & \(1.39\times 10^{-3}\) & \(2.65\times 10^{-3}\) & \(4.08\times 10^{-3}\) & \(5.25\times 10^{-3}\) & \(6.63\times 10^{-3}\) & \(7.70\times 10^{-3}\) & \(1.24\times 10^{-2}\) \\ 3000 & \(4.66\times 10^{-4}\) & \(3.95\times 10^{-3}\) & \(7.87\times 10^{-3}\) & \(1.17\times 10^{-2}\) & \(1.56\times 10^{-2}\) & \(1.89\times 10^{-2}\) & \(2.29\times 10^{-2}\) & \(3.20\times 10^{-2}\) \\ 4000 & \(7.39\times 10^{-4}\) & \(2.68\times 10^{-3}\) & \(5.41\times 10^{-3}\) & \(7.95\times 10^{-3}\) & \(1.05\times 10^{-2}\) & \(1.31\times 10^{-2}\) & \(1.58\times 10^{-2}\) & \(2.51\times 10^{-2}\) \\ 5000 & \(3.65\times 10^{-4}\) & \(1.21\times 10^{-3}\) & \(2.25\times 10^{-3}\) & \(3.45\times 10^{-3}\) & \(4.41\times 10^{-3}\) & \(5.59\times 10^{-3}\) & \(6.66\times 10^{-3}\) & \(1.05\times 10^{-2}\) \\ 6000 & \(6.69\times 10^{-4}\) & \(1.03\times 10^{-3}\) & \(2.41\times 10^{-3}\) & \(3.82\times 10^{-3}\) & \(5.14\times 10^{-3}\) & \(6.45\times 10^{-3}\) & \(7.79\times 10^{-3}\) & \(1.31\times 10^{-2}\) \\ 7000 & \(4.25\times 10^{-4}\) & \(7.64\times 10^{-4}\) & \(1.52\times 10^{-3}\) & \(2.25\times 10^{-3}\) & \(2.91\times 10^{-3}\) & \(3.63\times 10^{-3}\) & \(4.34\times 10^{-3}\) & \(7.06\times 10^{-3}\) \\ 8000 & \(3.36\times 10^{-4}\) & \(5.14\times 10^{-3}\) & \(9.99\times 10^{-3}\) & \(1.46\times 10^{-2}\) & \(1.97\times 10^{-2}\) & \(2.44\times 10^{-2}\) & \(2.94\times 10^{-2}\) & \(4.26\times 10^{-2}\) \\ \hline \hline \end{tabular} \end{table}

### Model parameters and results for the inverse problem of the vSG equation (2D-coefficients)

Notations Case 1 and Case 2 correspond to the two situations in Section 5.4.2.
2306.06237
Beyond Weights: Deep learning in Spiking Neural Networks with pure synaptic-delay training
Biological evidence suggests that adaptation of synaptic delays on short to medium timescales plays an important role in learning in the brain. Inspired by biology, we explore the feasibility and power of using synaptic delays to solve challenging tasks even when the synaptic weights are not trained but kept at randomly chosen fixed values. We show that training ONLY the delays in feed-forward spiking networks using backpropagation can achieve performance comparable to the more conventional weight training. Moreover, further constraining the weights to ternary values does not significantly affect the networks' ability to solve the tasks using only the synaptic delays. We demonstrate the task performance of delay-only training on MNIST and Fashion-MNIST datasets in preliminary experiments. This demonstrates a new paradigm for training spiking neural networks and sets the stage for models that can be more efficient than the ones that use weights for computation.
Edoardo W. Grappolini, Anand Subramoney
2023-06-09T20:14:10Z
http://arxiv.org/abs/2306.06237v5
# Beyond Weights: Deep learning in Spiking Neural Networks with pure synaptic-delay training

###### Abstract.

Biological evidence suggests that adaptation of synaptic delays on short to medium timescales plays an important role in learning in the brain. Inspired by biology, we explore the feasibility and power of using synaptic delays to solve challenging tasks even when the synaptic weights are not trained but kept at randomly chosen fixed values. We show that training ONLY the delays in feed-forward spiking networks using backpropagation can achieve performance comparable to the more conventional weight training. Moreover, further constraining the weights to ternary values does not significantly affect the networks' ability to solve the tasks using only the synaptic delays. We demonstrate the task performance of delay-only training on MNIST and Fashion-MNIST datasets in preliminary experiments. This demonstrates a new paradigm for training spiking neural networks and sets the stage for models that can be more efficient than the ones that use weights for computation.

spiking neural networks, synaptic delays, neuromorphic computing

## 1. Introduction

Spiking neural networks (SNNs) (Grover et al., 2016) are biologically inspired neural network models that have recently become increasingly popular for deep learning use cases due to their potential for extreme energy efficiency on neuromorphic hardware. In these models, the complex electrochemical dynamics of a biological synapse are modelled by the connections between neurons and, typically, weight parameters. The neuroscientific literature classically focuses on the neuron and on synaptic strength as the only factors of learning; this is due to several reasons, the most trivial of which is that electrical activity is a relatively easy measurement of cellular and brain activity. Recent literature (Berg et al., 2016) suggests, however, that other types of cells could contribute to computation and learning in the brain. Glial cells, especially oligodendrocytes, have been shown to be activated through learning processes (Grover et al., 2016; Grover et al., 2016). At a very high level, they work by wrapping the axon in a myelin sheath, which can control the speed of the electrical signal through synapses and, as a consequence, the time at which spikes are received. The role and importance of exact spike times are also considered in many studies (Berg et al., 2016; Grover et al., 2016; Grover et al., 2016), and the hypothesis is also reinforced by simulation studies (Grover et al., 2016). In this work, we explore the potential of computations being a direct consequence of synaptic delays in a network of spiking neurons. Although the idea of computation through delays comes from nature, our techniques come from the current technical literature on spiking neural networks, and we train our models through backpropagation. More precisely, we use an SNN based on SLAYER (Krause et al., 2015) and compare training only the synaptic delays with the conventional procedure of training the weights of the network. We show that training just the synaptic delays can lead to competitive task performance on deep learning benchmarks, specifically MNIST and Fashion-MNIST.

## 2. Related Work

Many previous works explore the possibility of using time coding schemes and precise spike time learning in spiking neural networks, as well as training the delays in an SNN.
Examples include (Grover et al., 2016; Grover et al., 2016; Grover et al., 2016), which combine time coding and backpropagation for different types of neurons (IF, LIF). DL-ReSuMe (Krause et al., 2015), as an extension of ReSuMe (Krause et al., 2015), introduces the concept of delay training to improve performance and reduce weight adjustments, utilizing a supervised learning rule that is not backpropagation based. Hazan et al. (Hazan et al., 2015) is perhaps the work most closely related to ours: only the delays of a weightless spiking neural network are trained, using an unsupervised STDP learning rule, to create latent representations of the MNIST dataset, which are then classified using a linear classifier. SLAYER (Krause et al., 2015) uses pseudo-derivatives to train axonal delays and weights using backpropagation with a spike response model (SRM) of spiking neurons. Our work was heavily influenced by SLAYER, although we train synaptic rather than axonal delays and explore the case where weights are not trained. To our knowledge, we are the first to show that pure delay training using backpropagation with the surrogate gradient method can achieve performance comparable to weight training.

## 3. Methods

### Spike response model (SRM)

We utilize the spike response model (SRM), and all the parameters were trained using surrogate gradients as in (Krause et al., 2015). This includes the synaptic weights and delays. For the convenience of the reader, we describe the notation necessary for the comprehension of our work. Let \(s_{i}(t)=\sum_{f}\delta(t-t_{i}^{(f)})\) be one of a series of spike trains that reach a neuron, where \(t_{i}^{(f)}\) is the time of the \(f^{th}\) spike of the \(i^{th}\) input. Let \(\epsilon_{d}(\cdot)\) be a spike response kernel that also takes the _axonal_ delay into consideration. Then the membrane potential of the neuron under consideration is:

\[u(t)=\sum_{i}w_{i}(\epsilon_{d}*s_{i})(t)+(v*s)(t)=\sum_{i}w_{i}(\epsilon(t-d)*s_{i})(t)+(v*s)(t)=w^{\top}a(t)+(v*s)(t),\]

where \(*\) represents the convolution operation and \(\epsilon(\cdot)\) is a spike response kernel that does not take delays into account. Let \(\vartheta\) be the spiking threshold: an output spike is generated when the membrane potential \(u(t)\) reaches \(\vartheta\); more formally,

\[f_{s}(u):u\to s,\quad s(t):=s(t)+\delta(t-t^{(f+1)}),\]
\[\text{where}\quad t^{(f+1)}=\min\{t:u(t)=\vartheta,\,t>t^{(f)}\}.\]

### Synaptic delays

To achieve pure delay training, axonal delays are not sufficient, since they lack expressivity. Training an axonal delay means in practice that, starting from a neuron in layer \(i\), the delay applied towards all the neurons of the following layer \(i+1\) is exactly the same; that is, we have only a fraction of the trainable parameters that we have when training synaptic delays. Our implementation is prototypical and has not been optimized for CUDA execution. Notice, however, that the number of _trainable_ parameters in a weight-based and a _synaptic delay-based_ network is exactly the same; we therefore expect a lower-level implementation to perform similarly to (Han et al., 2017) in terms of training time. In the simulated environment, inference is adversely affected by the double operation needed in each synapse (application of the delay plus multiplication), but we expect advantages in a hardware context.
Taking the synaptic delays into consideration, the formulation is simply extended as:

\[u(t)=\sum_{i}w_{i}(\epsilon_{d_{i}}*s_{i})(t)+(v*s)(t)=\sum_{i}w_{i}(\epsilon(t-d_{i})*s_{i})(t)+(v*s)(t).\]

In all our experiments, the spike response kernels were:

\[\epsilon(t)=\frac{t}{\tau_{s}}\exp\left(1-\frac{t}{\tau_{s}}\right)\Theta(t),\quad v(t)=2\vartheta\exp\left(1-\frac{t}{\tau_{r}}\right)\Theta(t),\]

where \(\Theta(t)\) is the Heaviside step function, although the formulation is independent of the chosen kernel. Real-valued delays were stored for each synapse during training, but only the quantized values were used during inference. Quantization was obtained by a simple round _down_ to the nearest allowed number: with a simulation timestep of 1 ms, a synaptic delay of, say, \(4.421\,\mathrm{ms}\) means the spike is delayed by \(4\,\mathrm{ms}\). A stochastic rounding (Kraus et al., 2017) method was tried, but it did not lead to improvements and increased the compute time; it was therefore not considered worthwhile at this stage.

### Loss and spike target

For all the experiments, a spike time-based loss was used. For a target spike train \(\hat{s}(t)\) and a time interval \([0,T]\), the loss function was defined as:

\[E=\int_{0}^{T}L(s^{(n_{l})}(t),\hat{s}(t))dt=\frac{1}{2}\int_{0}^{T}\left(e^{(n_{l})}(s^{(n_{l})}(t),\hat{s}(t))\right)^{2}dt,\]

where \(L(s^{(n_{l})}(t),\hat{s}(t))\) is the loss at time instance \(t\) and \(e^{(n_{l})}(s^{(n_{l})},\hat{s}(t))\) is the error signal at the final layer. The error signal was:

\[e^{(n_{l})}(s^{(n_{l})},\hat{s}(t))=\epsilon*(s^{(n_{l})}(t)-\hat{s}(t))=a^{(n_{l})}(t)-\hat{a}(t).\]

For the classification tasks, the target spike train was specified as the neuron corresponding to the correct class spiking for the whole simulation period. The class was inferred by looking at the first neuron that spikes; when more than one neuron spikes, we infer the class of the neuron that, at parity of arrival time, has the most spikes.

### Data encoding

For the image-based classification tasks, we converted the non-temporal deep learning datasets to spiking encodings. Because the trained parameters (the delays) work intrinsically in a temporal domain, we opted for temporal encoding. Although a temporal coding similar to that of (Han et al., 2017) was tried (i.e., for each pixel, one spike placed in time proportionally to the pixel intensity), we found that a simpler encoding was more effective in terms of accuracy for our setup. The strategy we used in practice is one where, for each pixel, if the greyscale value is higher than 127 we emit a spike, and otherwise we do not.
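To make the delayed spike response concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper; the weights and delays are made-up numbers) that combines the thresholding encoding above with the per-synapse delayed kernel sum \(u(t)=\sum_{i}w_{i}(\epsilon(t-d_{i})*s_{i})(t)\); the refractory term \((v*s)(t)\) is omitted for brevity.

```python
import numpy as np

T, tau_s, theta = 10, 1.0, 10.0              # steps (1 ms each), tau_s, threshold

def eps_kernel():
    """Spike response kernel eps(t) = (t/tau_s) exp(1 - t/tau_s) for t >= 0."""
    t = np.arange(T, dtype=float)
    return (t / tau_s) * np.exp(1.0 - t / tau_s)

def encode(pixels):
    """Threshold encoding used in the paper: one spike at t=0 if grey value > 127."""
    s = np.zeros((len(pixels), T))
    s[np.asarray(pixels) > 127, 0] = 1.0
    return s

def membrane_potential(s, w, d):
    """u(t) = sum_i w_i (eps * s_i)(t - d_i); refractory term omitted for brevity."""
    kernel = eps_kernel()
    u = np.zeros(T)
    for s_i, w_i, d_i in zip(s, w, np.floor(d).astype(int)):  # round *down*, as above
        delayed = np.concatenate([np.zeros(d_i), s_i])[:T]    # shift spikes by d_i steps
        u += w_i * np.convolve(delayed, kernel)[:T]
    return u

s = encode([200, 50, 180])                   # three input "pixels"
u = membrane_potential(s, w=[6.0, 6.0, 6.0], d=[0.0, 0.0, 2.4])
print(u, (u >= theta).any())                 # delaying inputs shifts the threshold crossing
```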
### Training delays

Training delays is, semantically, a fundamentally different operation from training weights. Here we attempt to give an intuitive, high-level explanation of what the neural dynamics can be; the explanation applies generally to all neuron models. Spikes cause an increase of the membrane potential \(u(t)\) in the receiving neuron (see Figure 1). As the membrane potential reaches a threshold \(\vartheta\), the receiving neuron emits a spike. Training synaptic weights acts on how much a spike will impact the magnitude of the membrane potential: this is a direct action on the amplitude of the received spike (Bahdan et al., 2017). Training delays, on the other hand, acts on when (or whether) a spike is generated, following different dynamics.

Figure 1. Spiking neuron dynamics. As a neuron receives a spike, a change in the membrane potential is obtained. As the membrane potential reaches a threshold \(\vartheta\), a spike is generated.

Remembering that the generation of a spike is the consequence of a change in amplitude, we consider the latter, as the rest follows naturally. Consider Figure 2: (A) shows a possible dynamics of a neuron receiving two spikes. Let \(\hat{u}\) be the maximum value of the membrane potential \(u(t)\) resulting from the spikes in this case; we will show how we can obtain a lower or higher maximum membrane potential compared to \(\hat{u}\). If we want to use delays to change the amplitude of the membrane potential, in the case depicted, we have two possibilities: delay the first spike, or delay the second. Delaying the second spike leads to the silhouette of the membrane potential never reaching \(\hat{u}\) (Figure 2.B). Delaying the first spike leads to a \(u(t)\) function that surpasses \(\hat{u}\) (Figure 2.A).

### Training procedure

All our experiments were on a fully connected network with one hidden layer of 800 neurons (784-800-10). The optimization was done with the Adam optimizer (Kingma et al., 2014), with an initial learning rate of 0.01 and a batch size of 32. The training set was split into training and validation sets with an 80%/20% split, and we report the test accuracy in the tables. Weights were initialized following normal distributions, \(N(0.0571,0.5458)\) for the first layer and \(N(-0.5244,1.0490)\) for the second layer, based on analysing the weight distributions of weight-trained networks; the weights are then scaled by a factor of 10, following the specifications of SLAYER (Kolmogorov, 2015). For the initialization of the _constrained weights_, the same strategy was used, with the addition that, before applying the multiplicative factor, we apply the following simple rule: let \(w\in\mathbb{R}\) be a weight of the network and let \(\hat{w}\) be its quantized version; then \(\hat{w}=round(w)\), where \(round(\cdot)\) rounds \(w\) to the nearest integer in the set \(\{-1,0,1\}\). For the delay initialisation, we kept the same strategy used in (Kolmogorov, 2015): a uniform random initialisation between 0 and 1. In practice, this means that initially the applied delay will be 0 in the forward pass, but the non-zero initialization avoids gradient problems in the backward pass. In some trials, a random delay initialisation in \([0,n]\) for different values of \(n\in\mathbb{R}\) was also tried, but it did not lead to any performance improvements.

_Parameters of the simulation._ All simulations have a duration of 10 ms, and time is quantized with 1 ms precision. The spiking threshold was set to \(\vartheta=10\,\mathrm{mV}\), the time constant to \(\tau_{s}=1\,\mathrm{ms}\), and the refractory time constant to \(\tau_{r}=1\,\mathrm{ms}\).
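The constrained-weight initialisation can be sketched as follows (a minimal illustration under our assumptions: we treat the second argument of \(N(\cdot,\cdot)\) as the standard deviation, and use the paper's 784-800-10 first-layer shape).

```python
import numpy as np

def ternarize(w):
    """Constrained-weight rule: round each weight to the nearest value in {-1, 0, 1}."""
    return np.clip(np.round(w), -1, 1)

rng = np.random.default_rng(0)
w1 = rng.normal(0.0571, 0.5458, size=(800, 784))   # first-layer init from the paper
w1_constrained = 10.0 * ternarize(w1)              # x10 scaling, as per SLAYER's spec
print(np.unique(w1_constrained))                   # [-10.  0. 10.]
```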
## 4. Results

We test pure delay training on two datasets, the MNIST (Krizhevsky et al., 2014) handwritten digits dataset and the Fashion-MNIST (Krizhevsky et al., 2014) dataset. We compare the results with weight training and with other similar methods in the literature.

### MNIST

We show in Figure 3 the learning curves for our experiments. In the case of MNIST, we can see how the weights baseline fits the training dataset quickly, and our two methods do not achieve the same training accuracy. However, viewing the validation accuracy curve, we can see that, without additional regularizers, weight training overfits, in contrast to our method. With both free and constrained weight initialisation, we achieve an accuracy improvement over (Kolmogorov, 2015), which is the only work that attempts a similar approach to ours (albeit not fully supervised). Our goal here is to demonstrate a proof-of-concept that pure delay training is competitive, rather than to achieve state-of-the-art accuracy.

### Fashion-MNIST

In Figure 4, we present the learning curves for the experiments on the Fashion-MNIST dataset. In this case, the overfitting on the training set for weight training is even more evident than in the previous case: weight training achieves over 97% accuracy on the training set, while delay training stops at 92%. We can see from the validation curves that, as epochs pass, our delay training method is more robust to overfitting; indeed, at the 100th epoch mark the weight training baseline has lower accuracy values than both our delay training methods. We also notice that our methods get very close to the weight training baseline for the best validation model, and are not far behind the non-spiking, basic ANN baseline.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & **Coding** & **Training Method** & **Neuron model** & **Accuracy (\%)** \\ \hline \hline S4NN & Temporal & Backprop & IF & 97.4 \\ Memoryless with delays & Temporal & STDP & LIF & 93.5 \\ \hline ANN baseline & & & & \\ SLAYER weights baseline & Temporal & Backprop & SRM & 96.1\(\pm\)0.1 \\ \hline Delay training & Temporal & Backprop & SRM & \\ \hline \hline \end{tabular} \end{table}

## 5. Discussion

We have demonstrated a proof of concept that training just the delays in a spiking neural network can work as well as training weights. We showed that on both the MNIST and Fashion-MNIST datasets, pure delay training achieves task performance comparable to conventional weight training, with delay training having a slight advantage in not overfitting the dataset. Moreover, we demonstrated that even when the weights are randomly initialised to ternary values ({+x, 0, -x}), the task performance of the networks remains good. This is the first step toward understanding the computational power of delays in spiking neural networks for biology and machine learning. Input and output encodings that use richer temporal information, more precise delay training methods, and using pure delay training for more powerful event-based models such as the EGRU (Kang et al., 2021) are potential ways to extend this in future work. A forward pass in a spiking network using only delays with these ternary weights can also be implemented significantly more efficiently than in a network that uses floating point weights. In software, such a forward pass can use just matrix roll operations combined with addition/subtraction instead of multiply-accumulate.
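As a rough illustration of that claim (our own sketch, with made-up shapes and delay ranges), a delay-plus-ternary-weight layer can accumulate its input drive using only shifts and additions/subtractions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_out = 10, 784, 800
spikes = (rng.random((n_in, T)) < 0.1).astype(np.int32)  # binary input spike trains
w = rng.integers(-1, 2, size=(n_out, n_in))              # fixed ternary weights {-1, 0, +1}
d = rng.integers(0, 4, size=(n_out, n_in))               # trained integer delays (in steps)

# Accumulate the input drive with shifts and additions/subtractions only:
# each presynaptic spike train is rolled by its synaptic delay and then
# added (w = +1) or subtracted (w = -1); w = 0 synapses are skipped.
drive = np.zeros((n_out, T), dtype=np.int32)
for j in range(n_out):
    for i in np.nonzero(w[j])[0]:
        shifted = np.roll(spikes[i], d[j, i])
        shifted[:d[j, i]] = 0                            # do not wrap spikes into the past
        drive[j] += shifted if w[j, i] == 1 else -shifted
print(drive.shape)                                       # (800, 10), no float multiplies
```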
In neuromorphic computing, our work suggests new ways of configuring the hardware to achieve extremely efficient inference. It might especially be relevant to analog and photonic (Krizhevsky et al., 2014) neuromorphic devices, although significant savings can also be realised in other devices. Overall, this work demonstrates a new paradigm for using time in, and for training, spiking neural networks.

###### Acknowledgements.

This work was co-funded by the European Union within the Erasmus+ traineeship program. AS was funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia, Germany during part of this work. We would like to thank Andrew D. Bagdanov, Laurenz Wiskott and the Institute for Neural Computation at Ruhr University Bochum for institutional and infrastructural support.
2306.03373
CiT-Net: Convolutional Neural Networks Hand in Hand with Vision Transformers for Medical Image Segmentation
The hybrid architecture of convolutional neural networks (CNNs) and Transformers is very popular for medical image segmentation. However, it suffers from two challenges. First, although a CNNs branch can capture the local image features using vanilla convolution, it cannot achieve adaptive feature learning. Second, although a Transformer branch can capture the global features, it ignores the channel and cross-dimensional self-attention, resulting in a low segmentation accuracy on complex-content images. To address these challenges, we propose a novel hybrid architecture of convolutional neural networks hand in hand with vision Transformers (CiT-Net) for medical image segmentation. Our network has two advantages. First, we design a dynamic deformable convolution and apply it to the CNNs branch, which overcomes the weak feature extraction ability due to fixed-size convolution kernels and the stiff design of sharing kernel parameters among different inputs. Second, we design a shifted-window adaptive complementary attention module and a compact convolutional projection. We apply them to the Transformer branch to learn the cross-dimensional long-term dependency for medical images. Experimental results show that our CiT-Net provides better medical image segmentation results than popular SOTA methods. Besides, our CiT-Net requires fewer parameters and lower computational costs and does not rely on pre-training. The code is publicly available at https://github.com/SR0920/CiT-Net.
Tao Lei, Rui Sun, Xuan Wang, Yingbo Wang, Xi He, Asoke Nandi
2023-06-06T03:22:22Z
http://arxiv.org/abs/2306.03373v2
CiT-Net: Convolutional Neural Networks Hand in Hand with Vision Transformers for Medical Image Segmentation

###### Abstract

The hybrid architecture of convolutional neural networks (CNNs) and Transformers is very popular for medical image segmentation. However, it suffers from two challenges. First, although a CNNs branch can capture the local image features using vanilla convolution, it cannot achieve adaptive feature learning. Second, although a Transformer branch can capture the global features, it ignores the channel and cross-dimensional self-attention, resulting in a low segmentation accuracy on complex-content images. To address these challenges, we propose a novel hybrid architecture of convolutional neural networks hand in hand with vision Transformers (CiT-Net) for medical image segmentation. Our network has two advantages. First, we design a dynamic deformable convolution and apply it to the CNNs branch, which overcomes the weak feature extraction ability due to fixed-size convolution kernels and the stiff design of sharing kernel parameters among different inputs. Second, we design a shifted-window adaptive complementary attention module and a compact convolutional projection. We apply them to the Transformer branch to learn the cross-dimensional long-term dependency for medical images. Experimental results show that our CiT-Net provides better medical image segmentation results than popular SOTA methods. Besides, our CiT-Net requires fewer parameters and lower computational costs and does not rely on pre-training. The code is publicly available at [https://github.com/SR0920/CiT-Net](https://github.com/SR0920/CiT-Net).

## 1 Introduction

Medical image segmentation refers to dividing a medical image into several specific regions with unique properties. Medical image segmentation results can not only enable the detection of abnormalities in regions of the human body but also be used to guide clinicians. Therefore, accurate medical image segmentation has become a key component of computer-aided diagnosis and treatment, patient condition analysis, image-guided surgery, tissue and organ reconstruction, and treatment planning. Compared with common RGB images, medical images usually suffer from problems such as high-density noise, low contrast, and blurred edges. How to quickly and accurately segment specific human organs and lesions from medical images has therefore always been a huge challenge in the field of smart medicine. In recent years, with the rapid development of computer hardware resources, researchers have continuously developed many new automatic medical image segmentation algorithms based on a large number of experiments. The existing medical image segmentation algorithms can be divided into two categories: those based on convolutional neural networks (CNNs) and those based on Transformer networks.

The early traditional medical image segmentation algorithms were based on manual features designed by medical experts using professional knowledge [23]. These methods have a strong mathematical basis and theoretical support, but they generalize poorly to different organs or lesions of the human body. Later, inspired by fully convolutional networks (FCN) [17] and the encoder-decoder design, Ronneberger et al. designed the U-Net [14] network, which was first applied to medical image segmentation. After the network was proposed, its symmetric U-shaped encoder-decoder structure received widespread attention.
At the same time, due to its small number of parameters and good segmentation effect, the U-Net network enabled a breakthrough of deep learning in medical image segmentation. A series of improved medical image segmentation networks inspired by U-Net followed, such as 2D U-Net++ [15], ResDO-UNet [12], SGU-Net [11], 2.5D Ritual-Net [10], 3D Unet [2], and V-Net [16]. Among them, Alom et al. designed R2U-Net [1] by combining U-Net, ResNet [20], and a recurrent neural network (RCNN) [1]. Gu et al. introduced dynamic convolution [1] into U-Net and proposed CA-Net [14]. Based on U-Net, Yang et al. proposed DCU-Net [21] by drawing on the ideas of residual connections and deformable convolution [11]. Lei et al. [1] proposed ASENet, a network based on adversarial consistency learning and dynamic convolution. The rapid development of CNNs in the field of medical image segmentation is largely due to the scale invariance and inductive bias of the convolution operation. Although this fixed receptive field improves the computational efficiency of CNNs, it limits their ability to capture relationships between distant pixels in medical images and prevents long-range modeling of medical images.

Aiming at the shortcomings of CNNs in obtaining the global features of medical images, scholars have proposed the Transformer architecture. In 2017, Vaswani et al. [21] proposed the first Transformer network. Because of its unique structure, the Transformer can process variable-length inputs, establish long-range dependency modeling, and capture global information. Following the excellent performance of Transformers in NLP, ViT [11] applied the Transformer to the field of image processing for the first time. Chen et al. then put forward TransUNet [3], which opened a new era for Transformers in the field of medical image segmentation. Valanarasu et al. proposed MedT [20] in combination with a gating mechanism. Cao et al. proposed Swin-Unet [14], a pure Transformer network for medical image segmentation, in combination with the shifted-window multi-head self-attention (SW-MSA) of the Swin Transformer [15]. Subsequently, Wang et al. designed the BAT [20] network for dermoscopic image segmentation by incorporating the idea of edge detection [21]. Hatamizadeh et al. proposed the Swin UNETR [16] network for 3D brain tumor segmentation. Wang et al. proposed the UCTransNet [20] network, which combines channel attention with the Transformer. These methods can be roughly divided into those based on a pure Transformer architecture and those based on a hybrid architecture of CNNs and Transformer. A pure Transformer network realizes long-range dependency modeling based on self-attention. However, due to its lack of inductive bias, the Transformer cannot be widely used on small-scale datasets such as medical images [23]. At the same time, the Transformer architecture is prone to ignore detailed local features, which reduces the separability between the background and the foreground for small lesions or objects with large-scale changes in medical images. The hybrid architecture of CNNs and Transformer realizes local and global information modeling of medical images by exploiting the complementary advantages of CNNs and Transformer, thus achieving a better medical image segmentation effect [1]. However, this hybrid architecture still suffers from the following two problems.
First, it ignores the problems of organ deformation and lesion irregularity when modeling local features, resulting in weak local feature expression. Second, it ignores the correlation between the feature map space and the channels when modeling global features, resulting in inadequate expression of self-attention. To address the above problems, our main contributions are as follows:

* A novel dynamic deformable convolution (DDConv) is proposed. Through task-adaptive learning, DDConv can flexibly change its own weight coefficients and deformation offsets. DDConv overcomes the fixed receptive fields and the sharing of convolution kernel parameters, which are common problems of vanilla convolution and its variants, such as atrous convolution and involution, and it improves the ability to perceive tiny lesions and targets with large-scale changes in medical images.
* A new (shifted) window adaptive complementary attention module ((S)W-ACAM) is proposed. (S)W-ACAM realizes cross-dimensional global modeling of medical images through four parallel branches whose weight coefficients are learned adaptively. Compared with currently popular attention mechanisms, such as CBAM and Non-Local, (S)W-ACAM fully makes up for the deficiency of conventional attention mechanisms in modeling the cross-dimensional relationship between space and channels. It can capture the cross-dimensional long-distance correlation features in medical images and enhance the separability between the segmented object and the background.
* A new parallel network structure based on dynamically adaptive CNNs and a cross-dimensional feature fusion Transformer, called CiT-Net, is proposed for medical image segmentation. Compared with the currently popular hybrid architectures of CNNs and Transformer, CiT-Net can maximize the retention of local and global features in medical images. It is worth noting that CiT-Net not only abandons pre-training but also has fewer parameters and lower computational costs, namely 11.58 M and 4.53 GFLOPs, respectively.

Compared with previous vanilla convolution [17], dynamic convolution [3][10], and deformable convolution [1], our DDConv can not only adaptively change the weight coefficients and deformation offsets of the convolution according to the medical image task, but also better adapt to the shapes of organs and small lesions with large-scale changes in medical images; additionally, it improves the local feature expression ability of the segmentation network. Compared with the self-attention mechanisms in existing Transformer architectures [14][20], our (S)W-ACAM requires fewer parameters and lower computational costs while capturing the global cross-dimensional long-range dependency in medical images and improving the global feature expression ability of the segmentation network. Our CiT-Net does not require a large amount of labeled data for pre-training, yet it can maximize the retention of local details and global semantic information in medical images. It achieves the best segmentation performance on both the dermoscopic image and liver datasets.

## 2 Method

### Overall Architecture

The fusion of local and global features is clearly helpful for improving medical image segmentation. CNNs capture local features in medical images through the convolution operation and hierarchical feature representation.
In contrast, the Transformer network extracts global features from medical images through the cascaded self-attention mechanism and matrix operations with context interaction. In order to make full use of the local details and global semantic features in medical images, we design a parallel interactive network architecture, CiT-Net. The overall architecture of the network is shown in Figure 1 (a). CiT-Net fully considers the complementary properties of CNNs and Transformer. During forward propagation, CiT-Net continuously feeds the local details extracted by the CNNs to the decoder of the Transformer branch. Similarly, CiT-Net also feeds the global long-range relationships captured by the Transformer branch to the decoder of the CNNs branch. Obviously, the proposed CiT-Net provides better local and global feature representations than pure CNNs or Transformer networks, and it shows great potential in the field of medical image segmentation.

Specifically, CiT-Net consists of a patch embedding model, a dynamically adaptive CNNs branch, a cross-dimensional fusion Transformer branch, and a feature fusion module. Among them, the dynamically adaptive CNNs branch and the cross-dimensional fusion Transformer branch follow the designs of U-Net and Swin-Unet, respectively. The dynamically adaptive CNNs branch consists of seven main stages. By using DDConv, with its adaptive weight coefficients and deformation offsets, in each stage, the segmentation network can better understand the local semantic features of medical images, better perceive the subtle changes of human organs or lesions, and improve its ability to extract targets with multi-scale changes. Similarly, the cross-dimensional fusion Transformer branch also consists of seven main stages. By using the (S)W-ACAM attention in each stage, as shown in Figure 1 (b), the segmentation network can better understand the global dependency of medical images, capture the positional information between different organs, and improve the separability of the segmented objects and the background.

Although our CiT-Net can effectively improve the feature representation of medical images, its dual-branch structure requires a large amount of training data and many network parameters. The conventional Transformer network contains many MLP layers, which not only aggravate the training burden of the network but also make the number of model parameters rise sharply, resulting in slow training. Inspired by the idea of the Ghost network [14], we redesign the MLP layer in the original Transformer and propose a lightweight perceptron module (LPM). The LPM helps our CiT-Net not only achieve better medical image segmentation results than the MLP but also greatly reduces the parameters and computational complexity of the original Transformer block, so that the Transformer can achieve good results even without training on a large amount of labeled data. It is worth mentioning that the dual-branch structure involves mutually symmetric encoders and decoders, so the parallel interactive network structure can maximize the preservation of local and global features in medical images.

### Dynamic Deformable Convolution

Vanilla convolution has spatial invariance and channel specificity, so it has a limited ability to adapt to different visual modalities when dealing with different spatial locations.
At the same time, due to the limitations of the receptive field, it is difficult for vanilla convolution to extract features of small targets or targets with blurred edges. Therefore, vanilla convolution inevitably has poor adaptability and weak generalization ability for complex medical images. Although the existing deformable convolution [1] and dynamic convolution [1][1] outperform vanilla convolution to a certain extent, they still cannot satisfactorily balance performance and network size when dealing with medical image segmentation.

Figure 1: (a) The architecture of CiT-Net. CiT-Net consists of a dual-branch interaction between dynamically adaptive CNNs and a cross-dimensional feature fusion Transformer. The DDConv in the CNNs branch can adaptively change the weight coefficients and deformation offsets of the convolution itself, which improves the segmentation accuracy of irregular objects in medical images. The (S)W-ACAM in the Transformer branch can capture the cross-dimensional long-range dependency in medical images, improving the separability of segmented objects and backgrounds. The lightweight perceptron module (LPM) greatly reduces the parameters and calculations of the original Transformer network by using the Ghost strategy. (b) Two successive Transformer blocks. W-ACAM and SW-ACAM are cross-dimensional self-attention modules with shifted-window and compact convolutional projection configurations.

In order to solve the shortcomings of current convolution operations, this paper proposes a new convolution strategy, DDConv, as shown in Figure 2. It can be seen that DDConv can adaptively learn the kernel deformation offsets and weight coefficients according to the specific task and data distribution, so as to change both the shape and the values of the convolution kernels. It can effectively deal with the problems of large differences in data distribution and large target deformations in medical image segmentation. Also, DDConv is plug-and-play and can be embedded in any network structure.

The shape change of the convolution kernel in DDConv is based on the network learning the deformation offsets. The segmentation network first samples the input feature map \(X\) using a square convolution kernel \(S\), and then performs a weighted sum with a weight matrix \(M\). The square convolution kernel \(S\) determines the range of the receptive field; e.g., a \(3\times 3\) convolution kernel can be expressed as:

\[S=\{(0,0),(0,1),(0,2),...,(2,1),(2,2)\}, \tag{1}\]

and the output feature map \(Y\) at coordinate \(\varphi_{n}\) can be expressed as:

\[Y\left(\varphi_{n}\right)=\sum_{\varphi_{m}\in S}S\left(\varphi_{m}\right)\cdot X\left(\varphi_{n}+\varphi_{m}\right). \tag{2}\]

When the deformation offsets \(\triangle\varphi_{m}\) \((m=1,2,3,\ldots,N)\) are introduced in the weight matrix \(M\), where \(N\) is the total length of \(S\), Equation (2) becomes:

\[Y\left(\varphi_{n}\right)=\sum_{\varphi_{m}\in S}S\left(\varphi_{m}\right)\cdot X\left(\varphi_{n}+\varphi_{m}+\triangle\varphi_{m}\right). \tag{3}\]

Through network learning, an offset matrix with the same spatial size as the input feature map is finally obtained, and the dimension of the matrix is twice that of the input feature map, since each location carries one offset for each of the two spatial directions.
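As a hedged illustration of the offset-based sampling in Eq. (3), the sketch below uses `torchvision.ops.deform_conv2d`, where a small convolution predicts one \((\Delta y,\Delta x)\) offset per kernel location; the channel sizes are illustrative, and this is not the authors' full DDConv, which additionally learns dynamic kernel weights (Eq. (5)):

```python
import torch
from torchvision.ops import deform_conv2d

x = torch.randn(1, 16, 32, 32)                  # input feature map X
weight = torch.randn(32, 16, 3, 3)              # 3x3 kernel S with weights M
# One (dy, dx) pair per kernel location: 2 * 3 * 3 = 18 offset channels.
offset_pred = torch.nn.Conv2d(16, 2 * 3 * 3, 3, padding=1)  # predicts the offsets
offset = offset_pred(x)                         # (1, 18, 32, 32)
y = deform_conv2d(x, offset, weight, padding=1) # samples X at shifted positions, per Eq. (3)
print(y.shape)                                  # torch.Size([1, 32, 32, 32])
```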
To show that the convolution kernel of DDConv is dynamic, we first present the output feature map of vanilla convolution:

\[y=\sigma(W\cdot x), \tag{4}\]

where \(\sigma\) is the activation function, \(W\) is the convolution kernel weight matrix and \(y\) is the output feature map. In contrast, the output feature map of DDConv is:

\[\hat{y}=\sigma\left((\alpha_{1}\cdot W_{1}+\ldots+\alpha_{n}\cdot W_{n})\cdot x\right), \tag{5}\]

where \(n\) is the number of weight coefficients, \(\alpha_{n}\) are the learnable weight coefficients and \(\hat{y}\) is the output feature map generated by DDConv. DDConv achieves dynamic adjustment of the convolution kernel weights by linearly combining different weight matrices according to the corresponding weight coefficients before performing the convolution operation.

According to the above analysis, we can see that DDConv realizes the dynamic adjustment of both the shape and the weights of the convolution kernel by combining the kernel deformation offsets and the kernel weight coefficients, at a minimal computational cost. Compared with directly increasing the number and size of convolution kernels, DDConv is simpler and more efficient. The proposed DDConv not only solves the problem of the poor adaptive feature extraction ability of fixed-size convolution kernels but also overcomes the defect that different inputs share the same convolution kernel parameters. Consequently, our DDConv can be used to improve the segmentation accuracy of small targets and of large targets with blurred edges in medical images.

Figure 2: The module of the proposed DDConv. Compared with current popular convolution strategies, DDConv can dynamically adjust the weight coefficients and deformation offsets of the convolution itself during training, which is conducive to capturing and extracting features of irregular targets in medical images. \(\alpha\) and \(\beta\) represent the different weight values of DDConv in different states.
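The dynamic-weight half of DDConv in Eq. (5) can be sketched as follows: the module linearly combines \(n\) candidate kernels with learnable coefficients before a single convolution, omitting the deformable offsets of Eq. (3) for brevity (all sizes and names are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicWeightConv(nn.Module):
    """Sketch of Eq. (5): n candidate kernels W_1..W_n are combined with
    learnable coefficients alpha_1..alpha_n, then a single convolution
    is applied with the aggregated kernel."""

    def __init__(self, in_ch, out_ch, k=3, n=4):
        super().__init__()
        self.kernels = nn.Parameter(torch.randn(n, out_ch, in_ch, k, k) * 0.02)
        self.alpha = nn.Parameter(torch.ones(n) / n)  # weight coefficients
        self.pad = k // 2

    def forward(self, x):
        # Aggregate: W = sum_i alpha_i * W_i, then convolve once.
        w = torch.einsum("n,noikl->oikl", self.alpha, self.kernels)
        return F.conv2d(x, w, padding=self.pad)

y = DynamicWeightConv(16, 32)(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 32, 32, 32])
```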
### Shifted Window Adaptive Complementary Attention Module

The self-attention mechanism is the core computing unit of Transformer networks; it captures the long-range dependency of feature maps through matrix operations. However, the self-attention mechanism only considers the dependency in the spatial dimension, not the cross-dimensional dependency between space and channels [1]. Therefore, when dealing with medical image segmentation with low contrast and high-density noise, the self-attention mechanism tends to confuse the segmentation targets with the background, resulting in poor segmentation results. To solve the problems mentioned above, we propose a new cross-dimensional self-attention module called (S)W-ACAM. As shown in Figure 3, (S)W-ACAM has four parallel branches: the top two branches form the conventional dual attention module [1], and the bottom two branches are cross-dimensional attention modules. Compared to popular self-attention modules such as spatial self-attention, channel self-attention, and dual self-attention, our proposed (S)W-ACAM can not only fully extract the long-range dependency of space and channels, but also capture the cross-dimensional long-range dependency between space and channels. These four branches complement each other, provide richer long-range dependency relationships, enhance the separability between foreground and background, and thus improve the segmentation results for medical images.

The standard Transformer architecture [14] uses the global self-attention method to calculate the relationship between one token and all other tokens. This calculation is expensive, especially for high-resolution, dense prediction tasks like medical image segmentation, where the computational cost grows quadratically. In order to improve the calculation efficiency, we use a shifted-window calculation method similar to that in the Swin Transformer [13], which only calculates self-attention within local windows. However, for our four-branch (S)W-ACAM module, using the shifted-window method alone does not reduce the overall computational complexity of the module. Therefore, we also designed the compact convolutional projection: first, we reduce the local size of the medical image through the shifted-window operation, then we compress the channel dimension of the feature maps through the compact convolutional projection, and finally we calculate the self-attention. It is worth mentioning that this method can not only better capture the global high-dimensional information of medical images but also significantly reduce the computational cost of the module. Suppose an image contains \(h\times w\) patches and each window size is \(M\times M\); then the complexities of (S)W-ACAM, the global MSA in the original Transformer, and the (S)W-MSA in the Swin Transformer compare as follows:

\[\Omega\left(MSA\right)=4hwC^{2}+2(hw)^{2}C, \tag{6}\]
\[\Omega\left((S)W\text{-}MSA\right)=4hwC^{2}+2M^{2}hwC, \tag{7}\]
\[\Omega\left((S)W\text{-}ACAM\right)=\frac{hwC^{2}}{4}+M^{2}hwC. \tag{8}\]

The complexity of MSA is quadratic in the number of patches \(hw\), while the complexities of (S)W-MSA and (S)W-ACAM are linear in \(hw\) when \(M\) is fixed (the default is 7), so the computational cost of (S)W-ACAM is smaller than that of MSA and (S)W-MSA. Among the four parallel branches of (S)W-ACAM, two branches are used to capture channel correlation and spatial correlation, respectively, and the remaining two branches capture the correlation between the channel dimension \(C\) and the spatial dimension \(H\), and between the channel dimension \(C\) and the spatial dimension \(W\). After adopting the shifted-window partitioning method, as shown in Figure 1 (b), the calculation process of two successive Transformer blocks is as follows:

\[\hat{T}^{l}=W\text{-}ACAM\left(LN\left(T^{l-1}\right)\right)+T^{l-1}, \tag{9}\]
\[T^{l}=LPM\left(LN\left(\hat{T}^{l}\right)\right)+\hat{T}^{l}, \tag{10}\]
\[\hat{T}^{l+1}=SW\text{-}ACAM\left(LN\left(T^{l}\right)\right)+T^{l}, \tag{11}\]
\[T^{l+1}=LPM\left(LN\left(\hat{T}^{l+1}\right)\right)+\hat{T}^{l+1}, \tag{12}\]

where \(\hat{T}^{l}\) and \(T^{l}\) represent the output features of (S)W-ACAM and LPM, respectively; W-ACAM represents window adaptive complementary attention, SW-ACAM represents shifted-window adaptive complementary attention, and LPM represents the lightweight perceptron module. For the specific attention calculation of each branch, we follow the same principle as in the Swin Transformer:

\[Attention\left(Q,K,V\right)=SoftMax\left(\frac{QK^{T}}{\sqrt{C/8}}+B\right)V, \tag{13}\]

where the relative position bias \(B\in\mathbb{R}^{M^{2}\times M^{2}}\) and \(Q,K,V\in\mathbb{R}^{M^{2}\times\frac{C}{8}}\) are the query, key, and value matrices, respectively; \(\frac{C}{8}\) represents the dimension of the query/key, and \(M^{2}\) represents the number of patches.
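A minimal sketch of the residual block structure in Eqs. (9)-(12) follows; `attn` and `lpm` are placeholders standing in for the paper's W-ACAM/SW-ACAM and LPM modules, so only the pre-LayerNorm and residual wiring comes from the text:

```python
import torch
import torch.nn as nn

class ACAMBlock(nn.Module):
    """Sketch of one Transformer block from Eqs. (9)-(10): pre-LayerNorm,
    attention, and the lightweight perceptron module, each with a residual
    connection. Eqs. (11)-(12) repeat the same wiring with shifted windows."""

    def __init__(self, dim, attn, lpm):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn, self.lpm = attn, lpm

    def forward(self, t):                  # t: (B, num_tokens, dim)
        t = self.attn(self.norm1(t)) + t   # Eq. (9):  T^ = W-ACAM(LN(T)) + T
        t = self.lpm(self.norm2(t)) + t    # Eq. (10): T  = LPM(LN(T^)) + T^
        return t

dim = 96  # D in the architecture variants below
block = ACAMBlock(dim, attn=nn.Linear(dim, dim), lpm=nn.Linear(dim, dim))
print(block(torch.randn(2, 49, dim)).shape)  # torch.Size([2, 49, 96])
```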
After the four parallel attention branches \(Out_{1}\), \(Out_{2}\), \(Out_{3}\) and \(Out_{4}\) are calculated, the final feature fusion output is:

\[Out=\lambda_{1}\cdot Out_{1}+\lambda_{2}\cdot Out_{2}+\lambda_{3}\cdot Out_{3}+\lambda_{4}\cdot Out_{4}, \tag{14}\]

where \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) and \(\lambda_{4}\) are learnable parameters that enable adaptive control of the importance of each attention branch for spatial and channel information in a particular segmentation task through the back-propagation process of the segmentation network. Different from other self-attention mechanisms, the (S)W-ACAM in this paper can fully capture the correlation between space and channels, and reasonably uses the context information of medical images to achieve long-range dependency modeling. Since our (S)W-ACAM effectively overcomes the defect that conventional self-attention focuses only on the spatial self-attention of images and ignores the channel and cross-dimensional self-attention, it achieves better feature representation even when the image suffers from large noise, low contrast, and a complex background.

Figure 3: The module of the proposed (S)W-ACAM. Unlike conventional self-attention, (S)W-ACAM combines the advantages of spatial and channel attention, and can also capture long-distance correlation features between space and channels. Through the shifted-window operation, the spatial resolution of the image is significantly reduced, and through the compact convolutional projection, the channel dimension of the image is also significantly reduced; thus, the overall computational costs and complexity of the network are reduced. \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) and \(\lambda_{4}\) are learnable weight parameters.

### Architecture Variants

We built CiT-Net-T as a base network with a model size of 11.58 M and a computational cost of 4.53 GFLOPs. In addition, we built the CiT-Net-B network to make a fair comparison with the latest networks such as CvT [21] and PVT [22]. The window size is set to 7, and the input image size is \(224\times 224\). The other network parameters are set as follows (collected in the sketch below):

* CiT-Net-T: \(layer\ number=\{2,\ 2,\ 6,\ 2,\ 6,\ 2,\ 2\}\), \(H=\{3,\ 6,\ 12,\ 24,\ 12,\ 6,\ 3\}\), \(D=96\)
* CiT-Net-B: \(layer\ number=\{2,\ 2,\ 18,\ 2,\ 18,\ 2,\ 2\}\), \(H=\{4,\ 8,\ 16,\ 32,\ 16,\ 8,\ 4\}\), \(D=96\)

where \(D\) represents the number of image channels when entering the first layer of the dynamically adaptive CNNs branch and the cross-dimensional fusion Transformer branch, \(layer\ number\) represents the number of Transformer blocks used in each stage, and \(H\) represents the number of heads in the multi-head self-attention.
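For reference, the two variants can be collected into illustrative configuration dictionaries; only the layer counts, head counts, window size, input size, and \(D\) come from the text, while the dictionary keys themselves are assumptions:

```python
# Illustrative configurations for the two CiT-Net variants described above.
CIT_NET_CONFIGS = {
    "CiT-Net-T": {
        "layer_number": [2, 2, 6, 2, 6, 2, 2],     # Transformer blocks per stage
        "heads":        [3, 6, 12, 24, 12, 6, 3],  # self-attention heads per stage
        "D": 96,                                   # channels entering the first layer
        "window_size": 7,
        "input_size": (224, 224),
    },
    "CiT-Net-B": {
        "layer_number": [2, 2, 18, 2, 18, 2, 2],
        "heads":        [4, 8, 16, 32, 16, 8, 4],
        "D": 96,
        "window_size": 7,
        "input_size": (224, 224),
    },
}
```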
## 3 Experiment and Results

### Datasets

We conducted experiments on the skin lesion segmentation dataset ISIC2018 from the International Symposium on Biomedical Imaging (ISBI) and the Liver Tumor Segmentation Challenge dataset (LiTS) from the Medical Image Computing and Computer Assisted Intervention Society (MICCAI). The ISIC2018 dataset contains 2,594 dermoscopic images for training, but the ground truth images of the testing set have not been released; thus we performed five-fold cross-validation on the training set for a fair comparison. The LiTS dataset contains 131 3D CT liver scans, where 100 scans are used for training and the remaining 31 scans are used for testing. In addition, all images are empirically resized to \(224\times 224\) for efficiency.

### Implementation Details

All the networks are implemented on an NVIDIA GeForce RTX 3090 24GB and PyTorch 1.7. We utilize Adam with an initial learning rate of 0.001 to optimize the networks. The learning rate is halved when the loss on the validation set has not dropped for 10 epochs. We used mean squared error (MSE) loss and Dice loss as the loss functions in our experiments.

### Evaluation and Results

In this paper, we selected the mainstream medical image segmentation networks U-Net [14], Attention Unet [13], Swin-Unet [10], PVT [22], CrossForm [22] and the proposed CiT-Net to conduct a comprehensive comparison on two datasets of different modalities, ISIC2018 and LiTS. In the experiment on the ISIC2018 dataset, we made an overall evaluation of the mainstream medical image segmentation networks by using five indicators: Dice (DI), Jaccard (JA), Sensitivity (SE), Accuracy (AC), and Specificity (SP). Table 1 shows the quantitative analysis of the results of the proposed CiT-Net and the current mainstream CNNs and Transformer networks on the ISIC2018 dataset. From the experimental results, we can conclude that our CiT-Net has the minimum number of parameters and the lowest computational costs, and obtains the best segmentation effect on the dermoscopic images without pre-training. Moreover, our CiT-Net-T network has only 11.58 M parameters and 4.53 GFLOPs of computational costs, but still achieves the second-best segmentation effect. Our CiT-Net-B network, BAT, CvT, and CrossForm have similar parameters or computational costs, but on the ISIC2018 dataset the segmentation Dice value of our CiT-Net-B is 1.02%, 3.00%, and 3.79% higher than that of the BAT, CvT, and CrossForm networks, respectively.

\begin{table}
\begin{tabular}{c l c c c c c c c}
\hline \hline
 & **Method** & **DI\(\uparrow\)** & **JA\(\uparrow\)** & **SE\(\uparrow\)** & **AC\(\uparrow\)** & **SP\(\uparrow\)** & **Para. (M)\(\downarrow\)** & **GFLOPs\(\downarrow\)** \\
\hline
\multirow{5}{*}{**CNNs**} & U-Net [14] & 86.54 & 79.31 & 88.56 & 93.16 & 96.44 & 34.52 & 65.39 \\
 & R2UNet [1] & 87.92 & 80.28 & 90.92 & 93.38 & 96.33 & 39.09 & 152.82 \\
 & Attention Unet [13] & 87.16 & 79.55 & 88.52 & 93.17 & 95.62 & 34.88 & 66.57 \\
 & CENet [14] & 87.61 & 81.18 & 90.71 & 94.03 & 96.35 & 29.02 & 11.79 \\
 & CPFNet \(\dagger\) [11] & -- & 82.92 & 91.66 & 94.68 & 96.63 & 30.65 & **9.15** \\
\hline
\multirow{6}{*}{**Transformer**} & Swin-Unet \(\dagger\) [10] & 89.26 & 80.47 & 90.36 & 94.45 & 96.51 & 41.40 & 11.63 \\
 & TransUNet \(\dagger\) [12] & 89.39 & 82.10 & 91.43 & 93.67 & 96.54 & 105.30 & 15.21 \\
 & BAT \(\dagger\) [22] & 90.21 & 83.49 & 91.59 & 94.85 & 96.57 & 45.56 & 13.38 \\
 & CvT \(\dagger\) [22] & 88.23 & 80.21 & 87.60 & 93.68 & 96.28 & 21.51 & 20.53 \\
 & PVT [22] & 87.31 & 79.99 & 87.74 & 93.10 & 96.21 & 28.86 & 14.92 \\
 & CrossForm [22] & 87.44 & 80.06 & 88.25 & 93.39 & 96.40 & 38.66 & 13.57 \\
\hline
 & **CiT-Net-T (our)** & **90.72** & **84.59** & **92.54** & **95.21** & **96.83** & **11.58** & **4.53** \\
 & **CiT-Net-B (our)** & **91.23** & **84.76** & **92.68** & **95.56** & **98.21** & **21.24** & 13.29 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Performance comparison of the proposed method against the SOTA approaches on the ISIC2018 benchmark. **Red** indicates the best result, and **blue** displays the second-best.
In terms of other evaluation indicators, our CiT-Net-B is also significantly better than the other comparison methods. In the experiment on the LiTS-Liver dataset, we conducted an overall evaluation of the mainstream medical image segmentation networks by using five indicators: DI, VOE, RVD, ASD and RMSD. Table 2 shows the quantitative analysis of the results of the proposed CiT-Net and the current mainstream networks on the LiTS-Liver dataset. It can be seen from the experimental results that our CiT-Net has great advantages in medical image segmentation, which further verifies the ability of CiT-Net to preserve local and global features in medical images. It is worth noting that the CiT-Net-B and CiT-Net-T networks achieved the best and second-best medical image segmentation results, with the least number of model parameters and computational costs. The segmentation Dice value of our CiT-Net-B network without pre-training is 1.20%, 1.03%, and 1.01% higher than that of the Swin-Unet, TransUNet, and CvT networks with pre-training. In terms of other evaluation indicators, our CiT-Net-B is also significantly better than the other comparison methods.

### Ablation Study

In order to fully prove the effectiveness of the different modules in our CiT-Net, we conducted a series of ablation experiments on the ISIC2018 dataset. As shown in Table 3, we can see that the dynamic deformable convolution (DDConv) and the (shifted) window adaptive complementary attention module ((S)W-ACAM) proposed in this paper show good performance, and their combination in CiT-Net shows the best medical image segmentation effect. At the same time, the lightweight perceptron module (LPM) can significantly reduce the overall parameters of CiT-Net.

## 4 Conclusion

In this study, we have proposed a new architecture, CiT-Net, that combines dynamically adaptive CNNs and a cross-dimensional fusion Transformer in parallel for medical image segmentation. The proposed CiT-Net integrates the advantages of both CNNs and Transformer, and retains the local details and global semantic features of medical images to the maximum extent through local relationship modeling and long-range dependency modeling. The proposed DDConv overcomes the problems of fixed receptive fields and parameter sharing in vanilla convolution, enhances the ability to express local features, and realizes adaptive extraction of spatial features. The proposed (S)W-ACAM self-attention mechanism can fully capture the cross-dimensional correlation between feature space and channels, and adaptively learns the important information between space and channels through network training. In addition, by using the LPM to replace the MLP in the traditional Transformer, our CiT-Net significantly reduces the number of parameters, gets rid of the dependence of the network on pre-training, and avoids the challenges posed by the lack of labeled medical image data and the network's tendency to over-fit. Compared with popular CNNs and Transformer medical image segmentation networks, our CiT-Net shows significant advantages in terms of operational efficiency and segmentation effect.

\begin{table}
\begin{tabular}{c l c c c c c c c}
\hline \hline
 & **Method** & **DI\(\uparrow\)** & **VOE\(\downarrow\)** & **RVD\(\downarrow\)** & **ASD\(\downarrow\)** & **RMSD\(\downarrow\)** & **Para. (M)\(\downarrow\)** & **GFLOPs\(\downarrow\)** \\
\hline
\multirow{6}{*}{**CNNs**} & U-Net [12] & 93.99\(\pm\)1.23 & 11.13\(\pm\)2.47 & 3.22\(\pm\)0.20 & 5.79\(\pm\)0.53 & 123.57\(\pm\)6.28 & 34.52 & 65.39 \\
 & R2UNet [1] & 94.01\(\pm\)1.18 & 11.12\(\pm\)2.37 & 2.36\(\pm\)0.15 & 5.23\(\pm\)0.45 & 120.36\(\pm\)5.03 & 39.09 & 152.82 \\
 & Attention Unet [1] & 94.08\(\pm\)1.21 & 10.95\(\pm\)2.36 & 3.02\(\pm\)0.18 & 4.95\(\pm\)0.48 & 118.67\(\pm\)5.31 & 34.88 & 66.57 \\
 & CENet [12] & 94.04\(\pm\)1.15 & 11.03\(\pm\)2.31 & 6.19\(\pm\)0.16 & 4.11\(\pm\)0.51 & 115.40\(\pm\)5.82 & 29.02 & **11.79** \\
 & 3D Unet [12] & 94.10\(\pm\)1.06 & 11.13\(\pm\)2.23 & **1.42\(\pm\)0.13** & 2.61\(\pm\)0.45 & 36.43\(\pm\)5.38 & 40.32 & 66.45 \\
 & V-Net [11] & 94.25\(\pm\)1.03 & 10.65\(\pm\)2.17 & 1.92\(\pm\)0.11 & 2.48\(\pm\)0.38 & 38.28\(\pm\)5.05 & 65.17 & 55.35 \\
\hline
\multirow{5}{*}{**Transformer**} & Swin-Unet \(\dagger\) [1] & 95.62\(\pm\)1.32 & 9.73\(\pm\)1.26 & 2.78\(\pm\)0.21 & 2.35\(\pm\)0.35 & 38.85\(\pm\)5.42 & 41.40 & 11.63 \\
 & TransUNet \(\dagger\) [12] & 95.79\(\pm\)1.09 & 9.82\(\pm\)2.10 & 1.98\(\pm\)0.15 & 2.33\(\pm\)0.41 & 37.22\(\pm\)5.23 & 105.30 & 15.21 \\
 & CvT \(\dagger\) [13] & 95.81\(\pm\)1.25 & 9.66\(\pm\)2.31 & 1.77\(\pm\)0.16 & 2.34\(\pm\)0.29 & 36.71\(\pm\)5.09 & 21.51 & 20.53 \\
 & PVT [13] & 94.56\(\pm\)1.15 & 9.75\(\pm\)2.19 & 1.69\(\pm\)0.12 & 2.42\(\pm\)0.34 & 37.35\(\pm\)5.16 & 28.86 & 14.92 \\
 & CrossForm [13] & 94.63\(\pm\)1.24 & 9.72\(\pm\)2.24 & 1.65\(\pm\)0.15 & 2.39\(\pm\)0.31 & 37.21\(\pm\)5.32 & 38.66 & 13.57 \\
\hline
 & **CiT-Net-T (our)** & **96.48\(\pm\)1.05** & **9.53\(\pm\)2.11** & 1.45\(\pm\)0.12 & **2.29\(\pm\)0.33** & **36.21\(\pm\)4.97** & **11.58** & **4.53** \\
 & **CiT-Net-B (our)** & **96.82\(\pm\)1.22** & **9.46\(\pm\)2.33** & **1.38\(\pm\)0.13** & **2.21\(\pm\)0.35** & **36.08\(\pm\)4.88** & **21.24** & 13.29 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Performance comparison of the proposed method against the SOTA approaches on the LiTS-Liver benchmarks. **Red** indicates the best result, and **blue** displays the second-best. \(\dagger\) indicates the model initialized with pre-trained weights on ImageNet21K. "Para." refers to the number of parameters. "GFLOPs" is calculated under the input scale of \(224\times 224\). Compared with the comparison experiment on the ISIC2018 dataset, 3D Unet and V-Net are introduced into the comparison experiment on the LiTS-Liver dataset.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
**Backbone** & **DDConv** & **(S)W-ACAM** & **LPM** & **Para. (M)** & **DI (\%)\(\uparrow\)** \\
\hline
U-Net+Swin-Unet & & & & 46.92 & 87.45 \\
U-Net+Swin-Unet & \(\surd\) & & & 48.25 & 89.15 \\
U-Net+Swin-Unet & & \(\surd\) & & 30.26 & 89.62 \\
U-Net+Swin-Unet & & & \(\surd\) & 15.45 & 88.43 \\
U-Net+Swin-Unet & \(\surd\) & \(\surd\) & & 32.16 & 90.88 \\
U-Net+Swin-Unet & \(\surd\) & & \(\surd\) & 16.93 & 89.12 \\
U-Net+Swin-Unet & & \(\surd\) & \(\surd\) & 9.67 & 89.46 \\
CiT-Net-T (our) & \(\surd\) & \(\surd\) & \(\surd\) & 11.58 & 90.72 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Ablation experiments of DDConv, (S)W-ACAM and LPM in CiT-Net on the ISIC2018 dataset.
## Acknowledgments This work was supported in part by the National Natural Science Foundation of China under Grants 62271296, 62201334 and 62201452, in part by the Natural Science Basic Research Program of Shaanxi under Grant 2021JC-47, and in part by the Key Research and Development Program of Shaanxi under Grants 2022GY-436 and 2021ZDLGY08-07.
2310.09806
Can LSH (Locality-Sensitive Hashing) Be Replaced by Neural Network?
With the rapid development of GPU (Graphics Processing Unit) technologies and neural networks, we can explore more appropriate data structures and algorithms. Recent progress shows that neural networks can partly replace traditional data structures. In this paper, we propose a novel DNN (Deep Neural Network)-based learned locality-sensitive hashing, called LLSH, to efficiently and flexibly map high-dimensional data to low-dimensional space. LLSH replaces the traditional LSH (Locality-Sensitive Hashing) function families with parallel multi-layer neural networks, which reduces the time and memory consumption and guarantees query accuracy simultaneously. The proposed LLSH demonstrates the feasibility of replacing the hash index with learning-based neural networks and opens a new door for developers to design and configure data organization more accurately to improve information-searching performance. Extensive experiments on different types of datasets show the superiority of the proposed method in query accuracy, time consumption, and memory usage.
Renyang Liu, Jun Zhao, Xing Chu, Yu Liang, Wei Zhou, Jing He
2023-10-15T11:41:54Z
http://arxiv.org/abs/2310.09806v1
# Can LSH (Locality-Sensitive Hashing) Be Replaced by Neural Network?

###### Abstract

With the rapid development of GPU (Graphics Processing Unit) technologies and neural networks, we can explore more appropriate data structures and algorithms. Recent progress shows that neural networks can partly replace traditional data structures. In this paper, we propose a novel DNN (Deep Neural Network)-based learned locality-sensitive hashing, called LLSH, to efficiently and flexibly map high-dimensional data to low-dimensional space. LLSH replaces the traditional LSH (Locality-Sensitive Hashing) function families with parallel multi-layer neural networks, which reduces the time and memory consumption and guarantees query accuracy simultaneously. The proposed LLSH demonstrates the feasibility of replacing the hash index with learning-based neural networks and opens a new door for developers to design and configure data organization more accurately to improve information-searching performance. Extensive experiments on different types of datasets show the superiority of the proposed method in query accuracy, time consumption, and memory usage.

learned index, deep learning, locality-sensitive hashing, kNN

## 1 Introduction

Given a set of data points and a query, searching for the nearest data point in a given database is the fundamental problem of NN (Nearest Neighbor) search [2; 4; 26], which is widely used in information retrieval, data mining, multimedia, and scientific databases. Given a query point \(q\) and a dataset \(D\), the NN problem is to find the item \(q_{1}\) in \(D\) whose distance to \(q\) is the smallest. An extension of NN is kNN, which finds the top-\(k\) closest items \(\{q_{1},...,q_{k}\}\) to the query in \(D\). Traditional kNN algorithms are mainly based on spatial partitioning, most widely used in tree algorithms such as the KD-tree [3], R-tree [12], and Ball-tree [1]. Although the query accuracy of tree-based approaches is high, they require a huge amount of memory, sometimes even exceeding the size of the data itself. Besides, the performance of tree-based indexing methods degrades significantly when handling high-dimensional data [11; 27], a phenomenon known as the "curse of dimensionality" [5]. In addition, with the development of current business systems, data dimensions keep increasing, reaching from thousands to millions. This puts a high demand on finding new ways to handle kNN efficiently, because traditional indexing methods struggle with high-dimensional data.

One feasible way to cope with the growing data dimension is to transform the NN and kNN problems into ANN (Approximate Nearest Neighbors) and kANN problems. In ANN search, the index method only needs to return approximately nearest objects \(\{q_{1},...,q_{k}\}\) rather than find the actual nearest ones. In this way, the query efficiency can be significantly improved. ANN methods have many advantages for search tasks in scenarios that do not require high precision, as they reduce time and memory consumption. Among them, LSH [14], whose basic principle is that two adjacent data points in the original data space can be hashed into the same bucket by the same mapping or projection rule, is the most popular one. It is widely used in various search fields, including but not limited to text, audio, image, video, and gene data, due to its unusual locality-sensitive nature and its superiority over the KD-tree [3] and other methods in high-dimensional search.
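For concreteness, the exact brute-force kNN baseline that these approximate methods trade accuracy against can be sketched as follows (the dataset sizes and random data are illustrative):

```python
import numpy as np

def knn(D, q, k):
    """Exact brute-force kNN: return the indices of the k points in D
    closest to the query q under the Euclidean distance."""
    dists = np.linalg.norm(D - q, axis=1)  # distance from q to every point
    return np.argsort(dists)[:k]           # indices of the k smallest

rng = np.random.default_rng(0)
D = rng.standard_normal((10_000, 128))     # 10k points in 128 dimensions
q = rng.standard_normal(128)
print(knn(D, q, k=5))
```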
Traditional LSH, however, is applied on CPUs, in parallel computing and distributed applications, which greatly limits its potential in the face of high-dimensional data. Moreover, due to the rapid development of hardware such as GPUs and TPUs (Tensor Processing Units), the high cost of running neural networks may become negligible in the near future. Therefore, inspired by the pioneering work [17] on developing a learned index, we explore how neural networks can enhance or even replace traditional index structures. In this paper, we design a novel neural-network-based framework, called LLSH, to boost E2LSH (Exact Euclidean Locality Sensitive Hashing) [7] in the task of massive data retrieval. LLSH creatively replaces the hash functions in E2LSH with simple neural networks to improve the search efficiency of hash indexing. Extensive experiments illustrate the proposed framework's feasibility and superiority in query accuracy, time, and memory consumption. The main contributions of this paper are as follows:

\(\triangleright\) We propose a novel DNN-based learned locality-sensitive hashing, called LLSH, which can be applied to the kNN problem for high-dimensional data and avoids the "curse of dimensionality." To the best of our knowledge, it is the first work to use neural networks instead of hash function families. Each neural network is independent and computes in parallel to fully utilize the hardware's advantages and reduce the false-positive and false-negative rates.

\(\triangleright\) We design the framework of LLSH in detail and apply it to replace the traditional E2LSH with two different strategies. The basic one trains the neural network layer supervised by the E2LSH outputs, while the ensemble one goes a step further and fully utilizes the idea of ensemble learning, integrating the outputs of multiple neural networks to improve performance.

\(\triangleright\) We conduct extensive experiments, covering feasibility verification, time and memory consumption, and query accuracy, on eight datasets with different data types and distributions. The empirical results show the viability of the proposed LLSH framework and its superiority in reducing time and memory usage and improving query accuracy.

The rest of the paper is organized as follows. We briefly review the methods relating data structures and machine learning in Sec. 2. In Sec. 3, we provide the preliminaries of LSH and E2LSH. Sec. 4 discusses the details of the proposed LLSH. The experimental results are shown and analyzed in Sec. 5. Finally, the paper is concluded in Sec. 6.

## 2 Related Works

Our work builds on a wide range of previous research. In the following, we summarize several essential interactions between data structures and machine learning. LSH is a hashing algorithm first proposed by Indyk in 1998. In general, a hash algorithm is designed to reduce collisions and to facilitate quick insertions and deletions, but LSH is different: it exploits hash collisions to speed up retrieval and is mainly applied to the fast approximate search of massive high-dimensional data. The approximate search is based on comparing distances or similarities between data points.
According to the method of similarity calculation, LSH can be divided into several categories, including Simhash [24], E2LSH [7], C2LSH [8], Kernel LSH [18], LSB-forest [31], QALSH [13], etc. LSH families have many branches and are widely used in various applications. For example, Simhash maps the original text content to a digital hash signature, where two similar texts correspond to similar digital signatures, so the similarity of two documents can be measured by the Hamming distance between their Simhash values. E2LSH is a randomized implementation of LSH in Euclidean space. The basic principle of E2LSH is to use position-sensitive functions based on p-stable distributions to map high-dimensional data, keeping two neighboring points in the original space close to each other after the mapping operation. LSB-forest builds multiple trees to support the NN search. Sun et al. devised SRS [30] with a small index footprint so that the entire index structure can fit in less memory. Recently, a new LSH scheme named QALSH (query-aware data-dependent LSH) was proposed to improve search accuracy by deciding the bucket boundaries after the query arrives at its position.

However, with the development of AI (Artificial Intelligence) and the explosion of data complexity, machine learning has become a powerful technique for solving computer optimization problems that require more efficient and intelligent computation. Recently, researchers have begun employing machine learning to optimize indexes and hash functions. There is a variety of research on emulating locality-sensitive hash functions to build new ANN indexes, ranging from supervised [6; 21; 28; 32] to unsupervised [9; 10; 15; 16; 20]. These methods incorporate data-driven learning in developing advanced hash functions. Their principle is learning to hash, which means learning information about data distributions or class labels to guide the design of a new learning-based hash function. However, the basic construction of the hash function remains unchanged. Although some methods, like [19; 33], use a neural network to replace a hash function and use images as hash labels to pursue good search performance in image retrieval, this limits the scope of the hash method, and such approaches cannot be used to construct fundamental data structures directly. As far as we know, paper [17] is the pioneering work on developing a learned index, exploring how neural networks can enhance and even replace traditional index structures. It provides a learned index based on a neural network to replace the B-tree index and further discusses the difference between a learned hash mapping and a traditional hash-mapping index. Moreover, our previous work provides an unsupervised learned index named PAVO [34]. We are thus well motivated by these works to propose a novel neural-network-based learned hash index framework that can utilize new techniques, like deep neural networks, and new hardware, like high-performance GPUs, to construct a learning-based hash method for the retrieval of data of massive magnitude and dimensionality.

## 3 Preliminary

LSH is a fast nearest neighbor search algorithm for massive high-dimensional data.
We call a family of hash functions \(H=\{h:S\to U\}\) \((r_{1},r_{2},p_{1},p_{2})\)-sensitive if every function \(h\) in \(H\) satisfies the following two conditions:

\[if\,d(O_{1},O_{2})<r_{1}\,then\,Pr[h(O_{1})=h(O_{2})]\geq p_{1},\]
\[if\,d(O_{1},O_{2})>r_{2}\,then\,Pr[h(O_{1})=h(O_{2})]\leq p_{2},\]

where \(O_{1},O_{2}\in S\) denote two data objects with multi-dimensional attributes and \(d(O_{1},O_{2})\) is a metric function that represents the degree to which the two objects differ. The thresholds satisfy \(r_{1}<r_{2}\) and \(p_{1}>p_{2}\). This means that two high-dimensional data points are mapped to the same hash value with high probability when they are similar enough.

LSH can be divided into different types according to the similarity calculation method. One of the most widely used is the p-stable hash, also called E2LSH, which uses the Euclidean distance to measure data similarity. A p-stable distribution is a distribution defined as follows: for any \(n\) real numbers \(v_{1},v_{2},...,v_{n}\) and \(n\) random variables \(d_{1},d_{2},...,d_{n}\) subject to the distribution \(D\), there is a \(p\geq 0\) such that \(\sum_{i}v_{i}d_{i}\) and \((\sum_{i}|v_{i}|^{p})^{1/p}\,d\) have the same distribution, where \(d\) is a random variable with the p-stable distribution \(D\). For E2LSH, the \(p\) of the p-stable distribution is limited to \(0<p\leq 2\) and defined as follows:

\(\triangleright\) 1-stable: Cauchy distribution

\[c(x)=\frac{1}{\pi}\frac{1}{1+x^{2}}; \tag{1}\]

\(\triangleright\) 2-stable: Gaussian distribution

\[g(x)=\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}. \tag{2}\]

The family of hash functions is proposed as follows [7]:

\[h_{a,b}(v)=\lfloor\frac{av+b}{r}\rfloor, \tag{3}\]

where \(a\) is a vector with entries drawn from a p-stable distribution and with the same dimension as \(v\), \(b\in(0,r)\) is a random number, and \(r\) is the length of a line segment; the hash function family is built from the different choices of \(a\) and \(b\). So, if two points \(v_{1}\) and \(v_{2}\) are to be mapped to the same hash value, \(av_{1}+b\) and \(av_{2}+b\) must fall on the same line segment. Let \(f_{p}(t)\) denote the probability density function of the absolute value of the p-stable distribution. For two vectors \(v_{1},v_{2}\), let \(c=\left\|v_{1}-v_{2}\right\|_{p}\); the collision probability in E2LSH is calculated as follows:

\[p(c)=P_{a,b}[h_{a,b}(v_{1})=h_{a,b}(v_{2})]=\int_{0}^{r}\frac{1}{c}f_{p}(\frac{t}{c})(1-\frac{t}{r})dt. \tag{4}\]

For a fixed parameter \(r\), the probability of collision increases as \(c=\left\|v_{1}-v_{2}\right\|_{p}\) decreases. The family of hash functions is \((r_{1},r_{2},p_{1},p_{2})\)-sensitive with \(p_{1}=p(1)\), \(p_{2}=p(c)\), and \(r_{2}/r_{1}=c\). Therefore, this family of locality-sensitive hash functions can be used to solve the approximate nearest neighbor problem.

Figure 1: The framework of the DNN-based learned index.
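A minimal sketch of the hash family in Eq. (3) with the 2-stable (Gaussian) distribution is given below; the helper name and parameter values are illustrative:

```python
import numpy as np

def make_hash(d, r, rng):
    """Sample one E2LSH hash h_{a,b}(v) = floor((a.v + b) / r), with the
    entries of `a` drawn from the 2-stable (Gaussian) distribution and
    b uniform in (0, r), per Eq. (3)."""
    a = rng.standard_normal(d)
    b = rng.uniform(0.0, r)
    return lambda v: int(np.floor((a @ v + b) / r))

rng = np.random.default_rng(0)
h = make_hash(d=64, r=4.0, rng=rng)
v1 = rng.standard_normal(64)
v2 = v1 + 0.01 * rng.standard_normal(64)  # a near neighbor of v1
far = rng.standard_normal(64)             # an unrelated point
print(h(v1) == h(v2), h(v1) == h(far))    # nearby points usually collide
```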
In order to widen the gap between the collision probability of close points and that of distant points after mapping, E2LSH uses \(k\) position-sensitive functions together to build the function family

\[\mathcal{G}=\{g:S\longrightarrow U^{k}\}, \tag{5}\]

where \(\mathcal{G}\) represents the union of \(k\) position-sensitive functions and \(g(v)=(h_{1}(v),...,h_{k}(v))\); then the dimension of each data point \(v\in\mathbb{R}^{d}\) can be reduced via a function \(g\in\mathcal{G}\) to obtain a \(k\)-dimensional vector \(\vec{a}=(a_{1},a_{2},...,a_{k})\). E2LSH then uses the main hash function \(H_{1}\) and the secondary hash function \(H_{2}\) to hash the vector after dimension reduction, and establishes hash tables to store the data points. The specific forms of \(H_{1}\) and \(H_{2}\) are as follows:

\[H_{1}=((a_{1}*h_{1}+...+a_{k}*h_{k})\ mod\ C)\ mod\ T, \tag{6}\]
\[H_{2}=(b_{1}*h_{1}+...+b_{k}*h_{k})\ mod\ C, \tag{7}\]

where \(a_{i}\) and \(b_{i}\) are randomly selected integers, \(T\) is the length of the hash table (generally set to the total number of data points \(n\)), and \(C\) is a large prime number (which can be set to \(2^{32}-5\) on a 32-bit machine). Data points with the same primary hash value \(H_{1}\) and the same secondary hash value \(H_{2}\) are stored in the same hash bucket, realizing the clustering of data points. For a query point \(q\), E2LSH first uses the locality-sensitive hash functions to obtain a set of hash values, then uses \(H_{1}\) to obtain its location in the hash table, calculates its \(H_{2}\) value, and retrieves the points with the same \(H_{2}\) value as \(q\) by scanning the linked list at that location. Finally, a set of candidate points is recovered by querying all \(L\) tables, and \(K\) (or fewer than \(K\)) neighbor points are obtained by sorting the distances.

Figure 2: The supervised strategy in the neural network stage.
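The bucketing step in Eqs. (6)-(7) can be sketched as follows; the random integer ranges and the stand-in hash values are illustrative:

```python
import numpy as np

C = 2**32 - 5  # large prime from the text

def bucket_hashes(h_vals, a, b, T):
    """Compute the main hash H1 (Eq. (6)) and the secondary hash H2
    (Eq. (7)) from the k locality-sensitive hash values h_vals, using
    random integer vectors a and b and table length T."""
    H1 = (int(np.dot(a, h_vals)) % C) % T
    H2 = int(np.dot(b, h_vals)) % C
    return H1, H2

k, T = 8, 10_000
rng = np.random.default_rng(0)
a = rng.integers(1, 2**16, size=k)
b = rng.integers(1, 2**16, size=k)
h_vals = rng.integers(-5, 5, size=k)  # stand-in for (h_1(v), ..., h_k(v))
H1, H2 = bucket_hashes(h_vals, a, b, T)
# Points sharing both H1 and H2 land in the same bucket of table slot H1.
```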
Taking image data as an example, each piece of data goes through the following four stages: 1) Feature extraction, where SIFT or GIST is generally used to extract features; 2) Dimensionality reduction, in which the extracted features are further reduced in dimension by the autoencoder; 3) Hash value generation, in which the dimension-reduced feature vector is fed into the neural network to generate the corresponding hash value; 4) Hash index, in which the nearest neighbor search is performed on the generated hash values. Because the model is trained in a supervised manner, similar data are guaranteed to generate similar hash values. #### 4.1.1 Input Stage The input stage covers all kinds of data for which LSH mapping results are needed in industrial or other scenarios, including various images, audio and text. Some simple data, such as latitude-and-longitude data, can be input into the neural network directly, whereas other, more complex data, such as image and audio data, need to be preprocessed (e.g., by feature extraction) before being input into LLSH. For example, image data can use GIST [29] or SIFT [23], audio data can use MFCC [22], and text data can use word2vec [25] to extract features. #### 4.1.2 Autoencoder Stage Although the raw data has been processed by a traditional feature-extraction method, its correlation information is not yet expressed adequately, and the dimension of the extracted features is still too large, which inflates the number of parameters and the computing cost in the neural network part. LLSH therefore first builds and trains an autoencoder model on a large amount of data so that it performs well and further reduces the dimension of the extracted features while preserving their semantics. #### 4.1.3 Neural Network Stage The Neural Network Stage is the most critical part of the LLSH algorithm. In this stage, the network is composed of multiple DNN models whose purpose is to encode the processed data. In this paper, we use \(L\) neural networks to simulate \(L\) locality-sensitive hash function families, each of which outputs \(k\) hash values, just as a traditional locality-sensitive hash function does at query time. In this way, we only need a set of neural networks that return the same result for similar data. Each neural network acts as a family of hash functions, and its number of layers and nodes is determined according to the original hash structure. During training, we concatenate each neural network's output as the final output of the whole Neural Network Stage, compute the loss against the given label, and update the networks' parameters. Training finishes quickly because the parameters of the networks are updated in parallel and do not affect each other. Besides, for NN search, we do not need each network's output to match the label exactly. #### 4.1.4 Hash Index Stage Finally, after the entire framework is well trained, each neural network's output is used as a hash index value to build multiple hash tables. Empirically, multiple hash tables can significantly reduce the false-positive and false-negative rates [7]. At query time, if the network outputs are the same for two input data, LLSH regards them as similar and maps them into the same storage address (bucket).
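A rough sketch of the Neural Network Stage, i.e., \(L\) parallel two-layer units whose concatenated outputs are trained against the E2LSH labels, might look as follows (the layer sizes are illustrative, and this is not the authors' code):

```python
import tensorflow as tf

d, L, k, m3 = 32, 30, 10, 64                  # illustrative sizes
inp = tf.keras.Input(shape=(d,))              # autoencoder output (v2)
units = []
for i in range(L):                            # L independent small units
    h = tf.keras.layers.Dense(m3, activation="relu", name=f"nn{i}_fc1")(inp)
    units.append(tf.keras.layers.Dense(k, name=f"nn{i}_fc2")(h))
out = tf.keras.layers.Concatenate()(units)    # final L*k hash values
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")   # fit against E2LSH labels
# model.fit(v2_train, e2lsh_labels, epochs=50, batch_size=32)
```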
To make the index easier to look up and to reduce computation when building the hash table, we use two extra hash functions, \(H_{1},H_{2}\), to transform the output of the previous stage. \(H_{1}\) and \(H_{2}\) are given by: \[H_{1}(x_{1},...,x_{k})=((\sum_{i=1}^{k}r_{i}x_{i})\ mod\ C)\ mod\ T, \tag{8}\] \[H_{2}(x_{1},...,x_{k})=(\sum_{i=1}^{k}r^{\prime}_{i}x_{i})\ mod\ C, \tag{9}\] where \(r_{i},r^{\prime}_{i}\) are random integers and \(C=2^{32}-5\) is a large prime number. The result of \(H_{2}\) is a data fingerprint, and the result of \(H_{1}\) is the index of the hash-table slot in which the fingerprint resides. ### Autoencoder and Neural Network Design In this subsection, we introduce the autoencoder and neural network of Sec. 4.1 in detail, including the architecture and parameter design; Fig. 3 shows the details. To reduce the number of parameters in LLSH, we design a relatively small autoencoder that includes only an input layer, one hidden layer, and an output layer. In this paper, the autoencoder is used as a feature extractor to reduce the dimension of the data. To train it efficiently, we first pre-train it with large-scale data, and when faced with a different dataset, we fine-tune it via transfer learning. Once trained, the autoencoder outputs a feature vector of lower dimension. Similar to the autoencoder, we implement each small neural network unit with two fully connected (FC) layers. The whole model contains \(L\) small units, named NN 1, ..., NN \(L\), respectively. The first layer of each small unit contains \(m_{3}\) neurons and the last layer contains \(k\) neurons. The outputs of the units are concatenated into the final hash values of the whole model. Note that \(L\), \(m_{3}\), and \(k\) can be adjusted flexibly to keep good performance with respect to the data size. The number of LLSH parameters is therefore \(p_{1}=d*m_{1}+m_{1}*m_{2}+(m_{2}*m_{3}+m_{3}*k)*L\), while that of the traditional E2LSH algorithm is \(p_{2}=d*k*L\). In the actual implementation, we make \(p_{1}<<p_{2}\) without loss in query performance. Figure 3: The details of the autoencoder and neural network. ### Model training and prediction In this subsection, we describe the training and prediction of LLSH in detail. The first part is training the model so that it builds the hash index well; once the model is well trained, the second part is calculating the hash value of the query data by model prediction. Fig. 2 shows the overall framework of the first part. The specific steps are as follows: * Step 1: Extract features from the different kinds of data, such as images, audio, and text, to obtain the corresponding feature vectors (v1), and then feed the extracted vectors into the autoencoder mentioned above to obtain more condensed vectors (v2) of lower dimension; * Step 2: Input the vectors (v2) obtained in Step 1 into the traditional E2LSH to obtain the \(L*k\) hash values and concatenate them into a matrix as the label; * Step 3: Train the neural networks with the vectors (v2) and their corresponding labels obtained in Step 2 until the model converges. * Step 4: Input the query item into the well-trained model to predict its hash value. **Loss function:** The purpose of training the neural networks is to make their output match the E2LSH output by iteratively updating the networks' parameters, and we expect the predicted results to be as close as possible to the hash values generated by E2LSH.
We therefore choose the mean squared error (MSE) loss as the objective function: \[Loss=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}, \tag{10}\] where \(y_{i}\) represents the neural network's output, \(\hat{y}_{i}\) refers to the label, and \(N\) is the output dimension. We use Adam as the optimizer and ReLU as the activation function in the training process. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Dataset & Type & Dimension & Mean & Std & Dataset & Type & Dimension & Mean & Std \\ \hline Uniform & Random & 100 & 0.5 & 0.29 & Tiny Images & GIST & 384 & 0.11 & 0.07 \\ \hline Normal & Random & 100 & 0 & 1 & Ann SIFT & SIFT & 128 & 27.05 & 35.89 \\ \hline Lognormal & Random & 100 & 1.65 & 2.16 & Nytimes & word2vec & 250 & 0 & 0.06 \\ \hline Exponential & Random & 100 & 1 & 1 & Glove & word2vec & 200 & 0 & 0.45 \\ \hline \hline \end{tabular} \end{table} Table 1: The datasets used in the experiments: four are randomly generated, and four are from the real world. Figure 4: The fitting rate on different datasets. ## 5 Experiment In this section, we discuss the experimental details. All experiments were conducted on a GPU server equipped with 128GB memory, two 2.1 GHz Intel(R) E5 processors, and two GTX 1080Ti GPU cards with 11GB dedicated memory each; the operating system is CentOS 7. We used Python 3.6 and TensorFlow 1.13.1 to implement all the code. We repeat each experiment ten times and use the median or average of the ten results as the final performance. ### Setup **Datasets:** The datasets used in our experiments come from two different sources, synthetic data and real data, so as to cover the different distributions met in practical applications. Specifically, there are four synthetic datasets sampled from the uniform, exponential, normal and lognormal distributions, respectively. The other four datasets are Tiny Images, Ann SIFT, Nytimes and Glove, which involve images and word vectors. The details of these datasets, given in Table 1, cover different types, scales and dimensionalities. **Metrics:** To evaluate how well the neural-network-based LLSH simulates the traditional E2LSH method, we use the fitting rate as the evaluation metric, which refers to the rate at which LLSH correctly reproduces E2LSH. It is defined as follows: \[F_{rate}=\frac{M}{N}\times 100\%, \tag{11}\] where \(M\) represents the number of outputs on which the neural networks and E2LSH agree, and \(N\) is the output dimension. A higher fitting rate indicates that LLSH fits E2LSH more accurately. **Parameters:** For all of our experiments, we set the E2LSH parameters to \(K=10,L=30\), and \(r=4\) (the width of projection), and set \(M=2\), \(L=30\) and \(K=10\) for the proposed LLSH. ### Ablation Study A suitable combination of the parameters \(L,k,r\) significantly impacts the performance of traditional E2LSH. In our framework, however, the most critical parameters are \(M,L,k\), where \(M\) is the number of neural network layers. Therefore, in this subsection we study how these parameters can be combined to boost LLSH. In general, more neural network layers mean better learning performance; however, our experiments show this is not exactly true for this work. The results in Fig. 5 and Fig. 6 illustrate the query accuracy for various \(L\) and \(K\), respectively, on three random datasets drawn from the uniform (a), normal (b) and lognormal (c) distributions, and on a real image dataset, Tiny Images (d).
The query accuracy decreases as the number of layers \(M\) grows, suggesting that a smaller \(M\) gives a better query effect; the query accuracy peaks at \(M=2\) in both Fig. 5 and Fig. 6. From Fig. 5 we also observe that \(L\) has a vital influence on query accuracy and that \(L=30\) leads to the highest query accuracy. Moreover, the results in Fig. 6 suggest that the query accuracy decreases as \(K\) increases, with the best results obtained at \(K=10\) in all cases. Therefore, in the following experiments we set the key parameters of the proposed LLSH to \(M=2\), \(L=30\) and \(K=10\) to pursue optimal performance. ### Feasibility verification In practical applications, data often have intricate distributions. So, to verify the feasibility of replacing the traditional E2LSH with the proposed LLSH, we conduct experiments on eight datasets with different distributions (detailed in Table 1) to verify whether LLSH can effectively fit the input-output mapping of E2LSH. The results are shown in Fig. 4(a) and 4(b) for the four synthetic datasets and the four real-world datasets, respectively. The results in Fig. 4 show that the neural-network-based LLSH achieves excellent performance when fitting E2LSH. On the randomly generated datasets, the fitting rates reach 96.42%, 93.35%, 95.68% and 94.56% for the uniform, exponential, normal and lognormal distributions, respectively, which means it is feasible for LLSH to replace E2LSH with such a high fitting rate. Surprisingly, the experiments on the four real datasets show that the fitting rates grow to 96.42%, 94.57%, 97.01%, and 95.49% on Tiny Images, Ann SIFT, Nytimes, and Glove, respectively. The average fitting rate of LLSH on the real datasets (95.87%) is higher than that on the synthetic datasets (95.59%), which shows that the LLSH framework can be deployed in physical scenarios. ### Evaluation of basic LLSH In this subsection, we design experiments on memory and time consumption to evaluate LLSH's advantage in computing hash values. We use data of different magnitudes and dimensions and compare the performance of traditional E2LSH, its matrix-accelerated version (E2LSH(numpy)), and LLSH running on different hardware (LLSH(CPU) and LLSH(GPU)). For the experiments with different magnitudes, we draw data from a uniform distribution with magnitudes ranging from \(1\times 10^{4}\) to \(4\times 10^{5}\) as the validation dataset. Figure 5: The fitting rate for various numbers of neural networks \(L\). As the results in Fig. 7(a) show, LLSH has an absolute advantage in time consumption, being nearly 300 times faster than E2LSH across data magnitudes, and the benefit keeps improving as the data magnitude increases. Compared with the matrix-accelerated E2LSH (E2LSH(numpy)), LLSH still offers a nearly 50% boost, and the advantage keeps growing with the magnitude. At the same time, LLSH can be deployed on a GPU to take full advantage of new hardware, which is a great merit for large-scale data. As shown in Fig. 7(b), LLSH has an overwhelming advantage in memory consumption across data magnitudes: the traditional E2LSH consumes about 40 times more memory than LLSH, and the matrix-accelerated E2LSH still consumes about 1.7 times more than the proposed LLSH. Compared with the CPU version, LLSH costs more memory on the GPU because part of the memory is traded for a high computation speed.
For the experiments with different dimensions, we draw data from a uniform distribution with dimensions ranging from 50 to 500 and keep the data magnitude at \(1\times 10^{5}\) to formulate the validation dataset. Figure 6: The fitting rate for various numbers of nodes in the last layer \(K\). Fig. 8 illustrates that LLSH is far beyond the traditional E2LSH. As shown in Fig. 8, the algorithms are insensitive to dimension, and the time consumption increases only slowly as the dimension grows. The LLSH algorithm maintains significant advantages in every dimension, and LLSH deployed on GPUs shows even greater advantages. Moreover, as shown in Fig. 8, the advantage in memory consumption is more obvious: LLSH consumes about 45 times less memory than the traditional E2LSH and about 2.8 times less than the matrix-accelerated E2LSH. According to the empirical results above, we find that LLSH has evident merits under various data magnitudes and dimensions. Benefiting from the fast inference ability of neural networks, LLSH shows strong performance in time and memory consumption, which makes it faster than E2LSH when calculating hash values. Figure 7: Time & memory consumption vs. data of different magnitudes. Figure 8: Time & memory consumption vs. data of different dimensions. Moreover, LLSH's advantage is more pronounced on advanced computing devices (GPUs) with their parallel computing manner: its time consumption hardly increases as the data dimension grows. ### Evaluation of ensemble-based LLSH LLSH aims to improve accuracy and reduce time and memory consumption, and ensemble learning is known to improve model accuracy. To take the proposed LLSH a step forward, we introduce an ensemble strategy, which differs from basic LLSH in that the labels used for training are generated by multiple hash algorithms. We compare it with four traditional NN-search methods: brute-force search, KD-tree, Ball-tree and E2LSH. In this experiment, the magnitude of the dataset is set from \(1\times 10^{4}\) to \(5\times 10^{4}\), and the dimension of each dataset is set to 20. Figure 9: Query accuracy & time consumption vs. data of different magnitudes. Figure 10: Query accuracy & time consumption vs. data of different dimensions. The results in Fig. 9(a) show that the ensemble-based LLSH obtains higher accuracy than the other baselines, exceeding even the traditional tree-based algorithms by 2% on average; compared with E2LSH the gain is even more obvious, nearly 10%. Regarding time consumption, as Fig. 9(b) shows, the ensemble-based LLSH has extremely low time consumption; unlike the traditional tree algorithms, whose time consumption increases exponentially with the amount of data, the ensemble-based LLSH is nearly a hundred times faster than these tree-based algorithms, and the merit becomes more evident at larger data magnitudes. Compared with E2LSH, the improvement is nearly double. We also compared the ensemble-based LLSH and these four baselines in query accuracy and time consumption across data dimensions, where the dimension is set from 10 to 50 and the magnitude to \(10^{4}\). As shown in Fig. 10(a), the accuracy of all algorithms decreases as the dimension increases, but the ensemble-based LLSH still performs best. In terms of time consumption, the results in Fig.
10(b) show that the time consumption of the traditional tree algorithms grows as the dimension increases, the so-called "curse of dimensionality", so tree-based algorithms are unsuitable for high-dimensional data. Besides, compared with E2LSH, the ensemble-based LLSH again shows its superiority in time consumption. As discussed above, compared with the traditional hash algorithm, the ensemble-based LLSH improves accuracy while reducing time consumption. In addition, it has broader practical application value because it does not fall into the "curse of dimensionality". ## 6 Conclusions In this paper, we investigated LSH-based hash algorithms against the background of the booming development of machine learning and high-performance hardware. Traditional LSH-based hashing, however, struggles to cope with the increasing dimensionality and magnitude of massive data. To bridge this gap, we propose a novel learning-based hash framework that uses multiple parallel neural networks to simulate traditional hash functions and thereby boost hashing performance in time consumption, memory consumption, and query accuracy. Extensive empirical results illustrate the feasibility of the proposed framework and further show its effectiveness and efficiency on the NN-search task with two implementations, i.e., the basic and the ensemble-based. ## Compliance with Ethical Standards * **Funding** This work was partly supported by the National Natural Science Foundation of China under Grant 62162067, the Yunnan Province Science Foundation under Grants No. 202005AC160007 and No. 202001B050076, the Open Foundation of Key Laboratory in Software Engineering of Yunnan Province under Grant No. 2020SE310, and the Open Foundation of Engineering Research Center of Cyberspace under Grant No. KJAQ202112013. * **Competing interests** The authors declare that they have no competing interests. * **Ethics approval** This article does not contain any studies with human participants performed by any of the authors. * **Informed consent** Not applicable. * **Consent to participate** Not applicable. * **Consent for publication** Not applicable. * **Data availability** The datasets used in this paper are publicly available online. * **Code availability** Not applicable. * **Authors' contributions** All authors contributed equally, and all authors have read and agreed to the manuscript.
2304.14923
Deep sound-field denoiser: optically-measured sound-field denoising using deep neural network
This paper proposes a deep sound-field denoiser, a deep neural network (DNN) based denoising of optically measured sound-field images. Sound-field imaging using optical methods has gained considerable attention due to its ability to achieve high-spatial-resolution imaging of acoustic phenomena that conventional acoustic sensors cannot accomplish. However, the optically measured sound-field images are often heavily contaminated by noise because of the low sensitivity of optical interferometric measurements to airborne sound. Here, we propose a DNN-based sound-field denoising method. Time-varying sound-field image sequences are decomposed into harmonic complex-amplitude images by using a time-directional Fourier transform. The complex images are converted into two-channel images consisting of real and imaginary parts and denoised by a nonlinear-activation-free network. The network is trained on a sound-field dataset obtained from numerical acoustic simulations with randomized parameters. We compared the method with conventional ones, such as image filters, a spatiotemporal filter, and other DNN architectures, on numerical and experimental data. The experimental data were measured by parallel phase-shifting interferometry and holographic speckle interferometry. The proposed deep sound-field denoiser significantly outperformed the conventional methods on both the numerical and experimental data. Code is available on GitHub: https://github.com/nttcslab/deep-sound-field-denoiser.
Kenji Ishikawa, Daiki Takeuchi, Noboru Harada, Takehiro Moriya
2023-04-27T11:12:26Z
http://arxiv.org/abs/2304.14923v2
# Deep sound-field denoiser: optically-measured sound-field denoising using deep neural network ###### Abstract This paper proposes a deep sound-field denoiser, a deep neural network (DNN) based denoising of optically measured sound-field images. Sound-field imaging using optical methods has gained considerable attention due to its ability to achieve high-spatial-resolution imaging of acoustic phenomena that conventional acoustic sensors cannot accomplish. However, the optically measured sound-field images are often heavily contaminated by noise because of the low sensitivity of optical interferometric measurements to airborne sound. Here, we propose a DNN-based sound-field denoising method. Time-varying sound-field image sequences are decomposed into harmonic complex-amplitude images by using a time-directional Fourier transform. The complex images are converted into two-channel images consisting of real and imaginary parts and denoised by a nonlinear-activation-free network. The network is trained on a sound-field dataset obtained from numerical acoustic simulations with randomized parameters. We compared the method with conventional ones, such as image filters and a spatiotemporal filter, on numerical and experimental data. The experimental data were measured by parallel phase-shifting interferometry and holographic speckle interferometry. The proposed deep sound-field denoiser significantly outperformed the conventional methods on both the numerical and experimental data. ## 1 Introduction Optical imaging has recently been used for high-spatial-resolution imaging of acoustic phenomena in airborne sound fields that conventional acoustic sensors cannot accomplish [1]. The acousto-optic effect [2], in which the refractive index of a medium is changed by sound, allows sound to be measured from the optical phase variation. Various optical methods have been used, for example, laser Doppler vibrometry (LDV) [2, 3], parallel phase-shifting interferometry (PPSI) [1], and digital holography [4, 5, 6, 7]. The applications include investigation of acoustic phenomena [8, 9, 10] and sound-field back-projection [2, 11, 12, 13]. Owing to their significant advantages, optical technologies are considered promising as a next-generation acoustic sensing modality. The sound field measured by a high-speed camera or scanning laser beam can be represented as an image sequence. Each pixel value is proportional to the line integral of the sound pressure along the corresponding optical path, with superimposed noise. Because the phase fluctuation of light caused by audible sound is tiny owing to its physical origin, noise reduction of sound-field images is a fundamental concern. Whereas sound-field image denoising is typically conducted using filters, as discussed in section 2.1, no machine-learning-based sound-field image denoising method has been proposed so far. In this paper, we propose a denoising method for sound-field images that is based on a deep neural network (DNN) (Fig. 1). A sound-field image sequence is Fourier transformed along the time direction at each pixel, which yields complex-amplitude sound-field images corresponding to the frequency bins of the discrete Fourier transform (FT). Then, each complex-amplitude image is converted into a two-channel image consisting of real and imaginary parts and denoised by using a trained DNN. To train the network, we generated training datasets by performing acoustic simulations with white and/or speckle noises.
Randomizing the simulation parameters ensured variety in the training data. Numerical experiments confirmed that the proposed DNN-based method performs better than conventional methods. We also applied the method to data measured by PPSI and holographic speckle interferometry (HSI). It outperformed conventional methods on these experimental data without a priori knowledge of the sound field. ## 2 Related works ### Sound-field denoising The physical properties of sound are commonly utilized for designing noise-reduction filters. These filters can be categorized into time-domain processing, spatial-domain processing, and spatiotemporal-frequency-domain processing. Time-domain processing is typically the first choice. Because sound pressure varies over time, a high-pass filter with a very low cutoff frequency can eliminate static optical phase components and low-frequency fluctuations caused by air fluctuation and seismic vibration. Taking the difference between successive frames of an image sequence, which is a simple high-pass filter, has been used as an easy denoising method for sound-field image sequences [8]. When the frequencies of a measured sound field are known, the noise-reduction performance can be improved by designing an appropriate temporal filter [14]. Spatial-domain processing can be applied independently of the time-domain processing. A spatial filter is applied to the sound-field image at each frame. Since sound is a spatially smooth variation and steep edges are usually absent, typical image-processing filters, such as Gaussian and median filters, are effective [15]. Figure 1: Overview of the deep sound-field denoiser. (a) Training process. A sound-field dataset is generated in a 2D acoustic simulation with randomized parameters. Each datum is a complex-amplitude sound-field image of a harmonic frequency \(\omega\). A nonlinear activation-free network (NAFNet) is trained using the clean and noisy pairs of the simulated sound fields. (b) Inference process. The time-sequential sound-field images are transformed into complex-amplitude images, and each image is denoised by the trained network. Spatiotemporal-frequency-domain processing utilizes the fact that sound satisfies the equation \(k=\omega/c\), where \(k\) is the acoustic wavenumber, \(\omega\) is the acoustic angular frequency, and \(c\) is the speed of sound. If we consider a two-dimensional space, this equation forms a cone in \(k-\omega\) space [15]. Since all of the spatiotemporal components in the recorded images that do not exist on the cone are noise, they can be eliminated by filtering [15, 16]. The methods used so far are all classical filters. We developed a DNN-based sound-field denoising method and confirmed that it outperforms these conventional methods. ### Natural image denoising by DNNs DNNs have been extensively applied to image-denoising tasks and have outperformed classical methods. Convolutional neural networks (CNN) [17, 18, 19, 20, 21, 22] and transformers [23, 24, 25, 26] have been widely used. Among the numerous DNNs, the nonlinear activation free network (NAFNet) [22] has a simple and efficient structure and has achieved a peak signal-to-noise ratio (PSNR) of 40.30 dB on a smartphone image-denoising dataset [27]. We chose this architecture for our sound-field denoiser. ### DNNs for optical metrology DNNs have been increasingly used in optical metrology [28].
DNNs have been used in many processes, including pre-processing (e.g., fringe denoising [29] and enhancement [30]), analysis (e.g., phase retrieval [31] and phase unwrapping [32]), and post-processing (e.g., phase denoising [33], error compensation [34], and digital refocusing [35]). Several DNN-based methods have shown high performance in denoising fringe patterns and optical phase maps. For fringe denoising, a deep CNN consisting of 20 layers was proposed by Yan _et al._, where the training dataset was generated from Zernike polynomials and additive white Gaussian noise. Several methods have also applied DNNs to fringes corrupted by speckle noise [36, 37, 38, 39, 40]. Similar ideas have been used to denoise optical phase maps [41, 42, 43, 44, 33]. However, no research has used DNNs to denoise sound-field images measured by optical methods. Since the spatial and temporal features of sound-field images differ from those of interference fringes and typical optical phase maps, the previous methods may not be optimal for sound-field denoising. Our contribution is that we developed DNN-based sound-field denoising methods and a training dataset that considers the physical nature of sound. ## 3 Methods ### Acousto-optic measurement data Here, let us briefly review the principle of acousto-optic measurement [2]. The acousto-optic effect is the change in the refractive index of a medium caused by sound. If light propagates along the \(z\)-axis, the phase shift of the light propagating through a sound field in air is given by \[\phi_{s}(x,y,t)=k_{L}\frac{n_{0}-1}{\gamma P_{0}}\int_{z_{1}}^{z_{2}}p(x,y,z,t)dz, \tag{1}\] where \(k_{L}\) is the wavenumber of light, \(\gamma\) is the specific heat ratio, \(n_{0}\) and \(P_{0}\) are the refractive index and pressure of air in a static condition, respectively, and \(p\) is the sound pressure. The phase shift of light is proportional to the sound pressure along the laser path. When sound-field imaging is performed based on this principle, the observed data can be written as a three-dimensional array \(\Phi_{\rm noisy}\) whose elements are of the form \(\phi_{s}(x_{i},y_{j},t_{m})\), where \((i,j)\) is the pixel index and \(m\) is the time index. Any processing method that can extract \(\phi_{s}\) from noisy data can be applied. ### DNN-based sound-field denoising The overview of the inference process is shown in Fig. 1(b). First, a time-domain FT is performed on all pixels of \(\Phi_{\text{noisy}}\) to obtain the complex-amplitude sound field, that is, \(\Psi_{\text{noisy}}=\mathcal{F}_{t}[\Phi_{\text{noisy}}]\), where \(\mathcal{F}_{t}\) denotes a 1D FT along the temporal axis. \(\Psi_{\text{noisy}}\) is the complex amplitude at the corresponding spatial position and the Fourier frequency. Then, for each Fourier frequency, the 2D complex amplitude is converted into a two-channel image with real and imaginary parts. The two-channel complex-amplitude image is normalized and inputted to the neural network. The network is trained to output a clean complex-amplitude image from the input noisy complex-amplitude image. The output image is multiplied by the reciprocal of the normalization factor to maintain the magnitude of the sound field. After processing all frequencies independently with the same DNN, the denoised complex amplitude, \(\Psi_{\text{denoise}}\), is inverse Fourier transformed, and the denoised sound field \(\Phi_{\text{denoise}}=\mathcal{F}_{t}^{-1}[\Psi_{\text{denoise}}]\) is obtained. 
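The whole inference pipeline of Fig. 1(b) fits in a few lines of Python. The sketch below is illustrative: `denoise` stands for any trained image-to-image network (e.g., NAFNet), and the max-based normalization is an assumption about the unspecified normalization factor.

```python
import numpy as np

def denoise_sound_field(phi_noisy, denoise):
    """phi_noisy: (H, W, M) real array of optical phase frames.
    denoise: trained net mapping an (H, W, 2) array to an (H, W, 2) array."""
    psi = np.fft.rfft(phi_noisy, axis=-1)       # temporal FT at each pixel
    psi_dn = np.empty_like(psi)
    for f in range(psi.shape[-1]):              # each frequency bin
        img = np.stack([psi[..., f].real, psi[..., f].imag], axis=-1)
        s = np.abs(img).max() or 1.0            # normalization factor
        out = denoise(img / s) * s              # undo normalization
        psi_dn[..., f] = out[..., 0] + 1j * out[..., 1]
    return np.fft.irfft(psi_dn, n=phi_noisy.shape[-1], axis=-1)
```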
Since the proposed method uses DNNs to denoise two-channel input images, any network that can perform image denoising can be used with it. Unet-based networks are often used in optical metrology [28]. In particular, we chose NAFNet [22], which has excellent performance and can run with relatively small memory and training time. A discussion of the optimal network structure and parameters for denoising the sound field may be a subject of future work. ### Training data Although optical sound measurements have been actively studied in recent years, no dataset exists for training a neural network on them, and it is difficult to collect sound-field data under various conditions through experiments. Therefore, this study used acoustic numerical simulation to create a training dataset. A 2D sound-field simulation with randomized parameters was used. Figure 2(a) shows a schematic illustration of the simulation. The inner rectangle is the measurement area, outside of which is the sound source area where point sources are randomly placed. To generate sound fields with diverse spatial characteristics, from simple to complex, the number of point sources was varied from 1 to 5, and the position and relative amplitude of each source were randomly assigned. Each true sound field is a superposition of the sound waves generated by these point sources and can be calculated as \[p\left(\mathbf{r},k\right)=A\sum_{i=1}^{N}a_{i}\frac{j}{4}H_{0}^{(2)}\left(k|\mathbf{r_{i}}-\mathbf{r}|\right), \tag{2}\] where \(\mathbf{r}=(x,y)\), \(k\) is the magnitude of the acoustic wavenumber, \(A\) is a constant determining the overall magnitude of the sound field, \(N\) is the number of sound sources, \(a_{i}\) and \(\mathbf{r_{i}}=(x_{i},y_{i})\) are the relative amplitude and position of the \(i\)th sound source, respectively, and \(H_{0}^{(2)}\) is a Hankel function of the second kind of order zero. The term inside the summation is the product of the relative amplitude of the \(i\)th source and the Green's function of the 2D Helmholtz equation. Figure 2: (a) Sound-field data generation. Point sources are randomly generated within the sound source area, and the 2D true sound fields in the center area are generated using the Green's function of the 2D Helmholtz equation. (b) Examples of the generated sound-field data. \(N\) represents the number of sound sources. Two examples are shown for each \(N\). The true sound fields were created by randomly selecting \(k\), \(a_{i}\), and \(\mathbf{r}_{i}\) from uniform distributions. The measurement area was a square of side length 1, and the sound source area was ten times larger than that. The random parameters were generated from uniform distributions of \(0\leq a_{i}\leq 0.1\), \(1.26\leq k\leq 40.2\), \(0.5\leq|x_{i}|\leq 10\), and \(0.5\leq|y_{i}|\leq 10\). The amplitude of the entire sound field was set to \(A=0.1\). These parameters were determined based on the authors' experience with typical experimental conditions of this measurement technology. \(a_{1}\) was set to 1 regardless of the number of sources to avoid all sources having small amplitudes. The simulated data were calculated by discretizing the measurement area into \(128\times 128\) pixels. The top row of Fig. 2(b) shows examples of the generated sound fields. It can be seen that the generated sound fields have different complexities, wavelengths, and directions of arrival.
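Generating one clean field by Eq. (2) amounts to a few lines of Python; the sketch below uses scipy's `hankel2` for \(H_0^{(2)}\) and repeats the parameter ranges stated above (it is an illustration, not the authors' simulation code):

```python
import numpy as np
from scipy.special import hankel2

rng = np.random.default_rng(0)

def random_sound_field(n_src, n_pix=128, A=0.1):
    """Draw one clean field of Eq. (2) on the unit-square measurement area."""
    x = np.linspace(-0.5, 0.5, n_pix)
    X, Y = np.meshgrid(x, x)
    kmag = rng.uniform(1.26, 40.2)
    p = np.zeros((n_pix, n_pix), dtype=complex)
    for i in range(n_src):
        a = 1.0 if i == 0 else rng.uniform(0.0, 0.1)  # a_1 = 1 by convention
        xs = rng.uniform(0.5, 10.0) * rng.choice([-1, 1])
        ys = rng.uniform(0.5, 10.0) * rng.choice([-1, 1])
        R = np.hypot(X - xs, Y - ys)
        p += a * (1j / 4) * hankel2(0, kmag * R)
    return A * p
```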
Two types of noise were added to the training data: additive white Gaussian noise and speckle noise. The amplitudes of the white noise were randomly selected from a uniform distribution between 0 and 0.1. The method of generating the speckle-noise data is shown in the Appendix. Examples of the noisy training data are shown in Fig. 2(b); data with different amounts of noise were generated. Although the differences between white and speckle noise may be difficult to recognize, spatially correlated random patterns appear in the speckle-noise images. Such speckle noise can occur, for example, in sound-field observations using electronic speckle pattern interferometry or a holographic interferometer equipped with Fresnel lenses [46]. ### Implementation details This study used almost the same network as the original NAFNet article [22], except for the number of image channels. The network consisted of 32 blocks with widths of 32, two image channels (real and imaginary), and a 128 \(\times\) 128 pixel image size. The root mean square error was used as the loss, Adam was used as the optimizer, and the learning rate was set to 0.001. A total of 2,000 training data were created, 400 for each number of sources. The training batch size was 32, and the number of epochs was 50. The network trained on the white-noise dataset is denoted by Ours (W), and the one trained on the speckle-noise dataset is denoted by Ours (W+S). The evaluation data consisted of 500 sound fields (100 for each number of sound sources) generated by simulation under the same conditions as the training data. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were used as evaluation metrics. ### Conventional methods Three conventional denoising methods were used for comparison with the proposed method: a 2D Gaussian filter, a 2D median filter, and a spatiotemporal band-pass filter (ST BPF). The kernel sizes of the Gaussian and median filters were set to 7 pixels, and each filter was applied to the real and imaginary parts of the complex-amplitude image. The ST BPF is a spatial-frequency filter based on the wave equation [15]. In the wavenumber spectrum, sound components lie on the circumference of \(k=(k_{x}^{2}+k_{y}^{2})^{1/2}=\omega/c\). Therefore, noise can be reduced by removing the spatial frequencies that do not satisfy this equation. First, if the input signal is a time series of images, a 2D complex sound field for each frequency is obtained by taking a 1D FT. Next, a fourth-order Butterworth band-pass image filter is created for and applied to each 2D complex-amplitude image. The lower cutoff frequency was set to \(0.5k\), and the higher cutoff frequency to \(1.2k\), where \(k\) is determined by the center frequency of each Fourier frequency bin. Note that since the image resolution was not very high, the bandwidth of the band-pass filter was set wide to avoid removing components broadened by the low-resolution 2D FT. The lower cutoff frequency was chosen carefully to avoid erasing too many components near the origin of the wavenumber spectrum. ## 4 Numerical results ### Denoising of white noise data Table 1 and Fig. 3 show the evaluation metrics and denoised sound-field images of the conventional and proposed methods on the white-noise data. The table shows that Ours (W) scored the highest PSNR and SSIM for all \(N\). Among the conventional methods, the Gaussian filter had the highest overall PSNR, and the ST BPF had the highest SSIM.
\begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{PSNR [dB]} & \multicolumn{6}{c}{SSIM} \\ \cline{2-13} \(N\) & Noisy & Gaussian & Median & ST BPF & Ours (W) & Ours (W+S) & Noisy & Gaussian & Median & ST BPF & Ours (W) & Ours (W+S) \\ \hline 1 & 3.78 & 18.1 & 17.9 & 16.7 & **30.7** & 11.5 & 0.158 & 0.627 & 0.554 & 0.774 & **0.961** & 0.433 \\ 2 & 2.38 & 19.8 & 16.5 & 17.1 & **27.7** & 13.6 & 0.122 & 0.681 & 0.441 & 0.770 & **0.930** & 0.566 \\ 3 & 0.74 & 19.2 & 14.9 & 17.9 & **24.1** & 13.4 & 0.131 & 0.633 & 0.386 & 0.759 & **0.844** & 0.539 \\ 4 & -0.67 & 20.3 & 13.8 & 17.7 & **25.1** & 14.7 & 0.098 & 0.672 & 0.327 & 0.753 & **0.847** & 0.586 \\ 5 & -1.92 & 19.4 & 12.5 & 17.1 & **24.0** & 15.2 & 0.092 & 0.640 & 0.299 & 0.712 & **0.796** & 0.571 \\ \hline All & 0.86 & 19.4 & 15.1 & 17.3 & **26.3** & 13.6 & 0.120 & 0.651 & 0.401 & 0.754 & **0.876** & 0.539 \\ \hline \hline \end{tabular} \end{table} Table 1: PSNR and SSIM of denoising results for white-noise data. Figure 3: Examples of denoised images for white-noise data. Two examples are shown for each \(N\). Figure 3 shows that the Gaussian filter smoothed the noisy wavefront but blurred short-wavelength sound waves. The median filter performed worse than the other methods. The ST BPF showed good overall results, but wavefront distortion remained when the noise amplitude was large. Ours (W) produced better noise-reduction results than the conventional methods regardless of the sound-field parameters, such as the number of sound sources and the acoustic wavelength, and the amount of noise. Ours (W+S) seemed to restore the wavefronts properly; nevertheless, its scores were significantly lower than those of Ours (W), because the overall amplitude of the denoised sound field increased, as can be seen in the images. Figure 4 plots the scores of each denoising method on the 500 evaluation data as a function of the wavenumber to investigate the dependence of the denoising performance on the wavenumber. The Gaussian filter performed well at low wavenumbers, but its performance deteriorated as the wavenumber increased. The ST BPF had low scores at very low wavenumbers because the spatial-frequency band-pass filter unintentionally eliminated the very low wavenumber components. The proposed method outperformed the conventional methods over the entire wavenumber range, although its performance decreased slightly as the wavenumber increased. The performance of Ours (W+S) leveled off, probably due to the increase in the overall amplitude. Figure 4: (a) PSNR and (b) SSIM plotted as a function of acoustic wavenumber for white-noise data. The size of the circular markers represents the amplitude of the white noise in four levels. ### Denoising of speckle noise data Table 2 and Fig. 5 show the evaluation metrics and denoised sound-field images for the speckle-noise data. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{PSNR [dB]} & \multicolumn{6}{c}{SSIM} \\ \cline{2-13} \(N\) & Noisy & Gaussian & Median & ST BPF & Ours (W) & Ours (W+S) & Noisy & Gaussian & Median & ST BPF & Ours (W) & Ours (W+S) \\ \hline 1 & 1.70 & 12.0 & 11.0 & 12.2 & 14.0 & **25.7** & 0.322 & 0.379 & 0.336 & 0.477 & 0.589 & **0.916** \\ 2 & 0.49 & 13.9 & 11.2 & 13.6 & 15.4 & **23.0** & 0.242 & 0.459 & 0.290 & 0.528 & 0.631 & **0.850** \\ 3 & -0.67 & 14.6 & 10.8 & 14.6 & 15.8 & **20.1** & 0.215 & 0.453 & 0.264 & 0.542 & 0.559 & **0.749** \\ 4 & -1.72 & 15.0 & 10.4 & 14.6 & 16.3 & **21.4** & 0.172 & 0.492 & 0.242 & 0.557 & 0.606 & **0.760** \\ 5 & -2.84 & 14.9 & 9.59 & 14.4 & 16.2 & **20.4** & 0.186 & 0.469 & 0.223 & 0.527 & 0.566 & **0.675** \\ \hline All & -0.61 & 14.1 & 10.6 & 13.9 & 15.5 & **22.1** & 0.227 & 0.451 & 0.271 & 0.526 & 0.590 & **0.790** \\ \hline \hline \end{tabular} \end{table} Table 2: PSNR and SSIM of denoising results for data with white and speckle noises. Figure 5: Examples of denoised images for the data with white and speckle noises. Two examples are shown for each \(N\). All methods except Ours (W+S) scored lower than on the white-noise data. Figure 5 indicates that the spatial correlations caused by speckles remained in the images of the conventional methods and Ours (W), resulting in hazy denoised images. By contrast, Ours (W+S) removed the local amplitude distortions in the noisy data. The scatter plots of the scores are shown in Fig. 6. The PSNRs of the conventional filters and Ours (W) leveled off around 20 dB for almost all wavenumbers. Ours (W+S) scored higher than the conventional filters and Ours (W) regardless of the wavenumber. These results confirm that the network properly learned the nonlinear transformation caused by speckle noise from the created training dataset. Figure 6: (a) PSNR and (b) SSIM plotted as a function of acoustic wavenumber for data with white and speckle noises. The size of the circular markers represents the amplitude of the white noise in four levels.
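For reference, the ST BPF used above as the physics-based baseline reduces to a radial band-pass in the wavenumber plane. The following hedged sketch applies a fourth-order Butterworth band-pass, built as the product of high- and low-pass magnitude responses (an assumption about the exact implementation), with the cutoffs \(0.5k\) and \(1.2k\) from the text:

```python
import numpy as np

def st_bpf(psi, k, dx, order=4):
    """Band-pass one 2D complex amplitude psi around the acoustic wavenumber k."""
    n = psi.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    K = np.hypot(KX, KY)                        # radial wavenumber
    lo, hi = 0.5 * k, 1.2 * k
    H = 1.0 / (1.0 + (lo / np.maximum(K, 1e-12)) ** (2 * order))  # high-pass
    H *= 1.0 / (1.0 + (K / hi) ** (2 * order))                    # low-pass
    return np.fft.ifft2(np.fft.fft2(psi) * H)
```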
## 5 Experiments We denoised experimental data measured by two optical systems: PPSI [1], in which the primary noise source was white noise, and HSI using Fresnel lenses [46], in which speckle noise was superimposed. ### Parallel phase-shifting interferometry PPSI is a system that combines a Fizeau interferometer and a polarized high-speed camera, as shown in Fig. 7(a). It measures four phase-shifted interference fringe images simultaneously, which enables instantaneous and quantitative observation of sound fields. For details of the measurement technique, see, for example, [1]. In this experiment, a 12-kHz burst wave generated from a loudspeaker (FOSTEX FT48D) was observed. The sound measured by a microphone placed 20 cm from the loudspeaker is shown in Fig. 7(b). The generated sound was a three-cycle 12 kHz burst wave with a peak sound pressure of 13 Pa at the microphone position. The frame rate of the high-speed camera was set to 50 kfps, the number of frames was 1000, the image resolution was 128 \(\times\) 128, and the imaging area size was 80 mm \(\times\) 80 mm. Figure 7: (a) Schematic diagram of the PPSI measurement system. A three-cycle burst wave with a center frequency of 12 kHz was emitted from the loudspeaker. (b) Sound pressure waveform measured by the microphone placed 20 cm from the loudspeaker’s diaphragm. (c) Denoising results of transient sound fields measured by PPSI. The optical phase map at each frame was calculated using a typical arctangent operation, followed by 1D unwrapping along the time direction for each pixel. Subsequently, a time-directional high-pass filter with a cutoff frequency of 500 Hz was applied to remove low-frequency noise components. We call this data the noisy data. The denoising was performed on the noisy data using the same conventional filters and trained DNNs as in the previous section. Figure 7(c) shows the time-series sound-field images of the noisy and denoised data. In the noisy data, random noise and oblique noise patterns appeared in addition to sound waves propagating from the left outside of the image to the right. These oblique patterns should be phase-shift errors caused by imperfections in the optical system. The Gaussian filter and ST BPF produced smooth wavefronts for the peak wavefront of the burst wave, but noisy components remained in the low-amplitude parts before and after the peak wavefront, as can be seen in the right half of the 60 \(\mu\)s image and the left half of the 240 \(\mu\)s image.
In contrast, Ours (W) restored both the peak and low-amplitude wavefronts smoothly. The wavefronts buried in the noisy data were also visualized in the results of Ours (W). For Ours (W+S), the restored sound-wave amplitudes increased, which is consistent with the numerical results in Fig. 3. ### Holographic speckle interferometry with Fresnel lens An overview of the measurement using HSI is shown in Fig. 8(a). This experiment used a measurement system with Fresnel lenses, as proposed in [46], which aims to establish a lightweight and inexpensive large-aperture sound-field imaging system. However, the measured sound-field images show significant spatial distortion due to speckle noise. In the original paper, narrow spatial band-pass filters were used for noise reduction, but such narrow filters may not be so useful for practical applications. Here, we investigated the effectiveness of the proposed DNN-based denoising method. Sinusoidal signals of 5, 10, and 15 kHz were radiated from the same loudspeaker used in the PPSI experiment. The amplitudes were adjusted so that the sound pressure level at the microphone located 20 cm in front of the loudspeaker diaphragm was 110 dB at all frequencies. The frame rate of the high-speed camera was 50 kfps, the number of frames was 1000, the image resolution was \(128\times 128\), and the size of the captured area was 100 mm \(\times\) 100 mm. Figure 8: (a) Schematic diagram of the HSI measurement system. The sound field between the two Fresnel lenses is measured. Sinusoidal waves of 5, 10, and 15 kHz are emitted from the loudspeaker. (b) Denoising results of harmonic sound fields measured by HSI. The phase maps of the speckle interference fringes were estimated using the 2D FT method [47], and a complex sound field at each frequency was extracted via a 1D FT along the time direction. Figure 8(b) shows the real parts of the noisy and denoised complex amplitudes. The noisy data show that the wavefronts were significantly distorted. The conventional methods and Ours (W) did not eliminate the distortions at any frequency. In contrast, in the case of Ours (W+S), whose training data included speckle noise, smooth wavefronts were restored. Since the same loudspeaker as in the PPSI experiment was used, the harmonic wavefronts should be smooth. Therefore, Ours (W+S) showed good noise-reduction and wavefront-restoration performance in speckle sound-field imaging. ## 6 Conclusions We developed a DNN-based sound-field denoising method in which time-varying sound-field data are decomposed into 2D complex-amplitude images and each individual image is denoised by the trained network. A 2D sound-field simulation with random parameters was used to generate the training dataset. By taking the measurement process of the optical system into account, the network was successfully trained to remove not only white Gaussian noise but also speckle noise. We confirmed that the proposed method was effective on experimental data and that it outperformed conventional denoising methods. There are questions to be tackled in future work. First, in this study, we employed only one architecture, NAFNet, with a fixed network size.
Therefore, the effect of the choice of the optimal network architecture and its size should be investigated. Second, the simulation method and the number of training data should also be investigated. The generalization abilities with respect to the wavenumber range, the complexity of the sound fields, and the amount and types of noise must depend on the training dataset. Last but not least, it is important to extend the proposed method to different measurement situations, such as spatial 3D data, randomly sampled data, and data with occlusions, to provide a versatile denoiser for optically measured sound-field data. Disclosures. The authors declare no conflicts of interest. Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2301.06535
Case-Base Neural Networks: survival analysis with time-varying, higher-order interactions
In the context of survival analysis, data-driven neural network-based methods have been developed to model complex covariate effects. While these methods may provide better predictive performance than regression-based approaches, not all can model time-varying interactions and complex baseline hazards. To address this, we propose Case-Base Neural Networks (CBNNs) as a new approach that combines the case-base sampling framework with flexible neural network architectures. Using a novel sampling scheme and data augmentation to naturally account for censoring, we construct a feed-forward neural network that includes time as an input. CBNNs predict the probability of an event occurring at a given moment to estimate the full hazard function. We compare the performance of CBNNs to regression and neural network-based survival methods in a simulation and three case studies using two time-dependent metrics. First, we examine performance on a simulation involving a complex baseline hazard and time-varying interactions to assess all methods, with CBNN outperforming competitors. Then, we apply all methods to three real data applications, with CBNNs outperforming the competing models in two studies and showing similar performance in the third. Our results highlight the benefit of combining case-base sampling with deep learning to provide a simple and flexible framework for data-driven modeling of single event survival outcomes that estimates time-varying effects and a complex baseline hazard by design. An R package is available at https://github.com/Jesse-Islam/cbnn.
Jesse Islam, Maxime Turgeon, Robert Sladek, Sahir Bhatnagar
2023-01-16T17:44:16Z
http://arxiv.org/abs/2301.06535v4
# Case-Base Neural Networks: survival analysis with time-varying, higher-order interactions ###### Abstract In the context of survival analysis, data-driven neural network-based methods have been developed to model complex covariate effects. While these methods may provide better predictive performance than regression-based approaches, not all can model time-varying interactions and complex baseline hazards. To address this, we propose Case-Base Neural Networks (CBNNs) as a new approach that combines the case-base sampling framework with flexible neural network architectures. Using a novel sampling scheme and data augmentation to naturally account for censoring, we construct a feed-forward neural network that includes time as an input. CBNNs predict the probability of an event occurring at a given moment to estimate the full hazard function. We compare the performance of CBNNs to regression and neural network-based survival methods in a simulation and three case studies using two time-dependent metrics. First, we examine performance on a simulation involving a complex baseline hazard and time-varying interactions to assess all methods, with CBNN outperforming competitors. Then, we apply all methods to three real data applications, with CBNNs outperforming the competing models in two studies and showing similar performance in the third. Our results highlight the benefit of combining case-base sampling with deep learning to provide a simple and flexible framework for data-driven modeling of single event survival outcomes that estimates time-varying effects and a complex baseline hazard by design. An R package is available at [https://github.com/Jesse-Islam/cbnn](https://github.com/Jesse-Islam/cbnn). _Keywords:_ survival analysis, machine learning, case-base, neural network ## 1 Introduction Smooth-in-time accelerated failure time (AFT) models can estimate absolute risks by modeling the hazard directly through a user-specified baseline hazard distribution (Kleinbaum & Klein, 2012). Cox proportional hazards models are used more often than AFT models, causing analyses to be based on hazard ratios and relative risks rather than on survival curves and absolute risks (Hanley & Miettinen, 2009). Identifying an appropriate baseline hazard distribution may be difficult in an AFT model for common diseases that have many interacting risk factors, or in a Cox model where the disease pathogenesis may change with age, making it difficult to maintain the proportional hazards assumption. For example, previous studies of breast cancer incidence have discovered time-varying interactions with covariates of interest, such as tumor size (Coradini et al., 2000). One approach to provide flexibility in the baseline hazard involves using a basis of splines on time in the model (Royston and Parmar, 2002). However, regression-based models are limited in that they require prior knowledge about potential time-varying interactions and their quantitative effects. Neural networks provide a data-driven approach to approximating interaction terms. For example, DeepSurv is a neural network-based proportional hazards model that implements the Cox partial log-likelihood as a custom loss function (Katzman et al., 2018), resulting in a stepwise absolute risk curve that cannot accommodate time-varying interactions. Compared to Cox regression, DeepSurv shows better performance on the Study to Understand Prognoses Preferences and Risk Treatments (SUPPORT) dataset (Knaus et al., 1995). To handle non-proportional hazards, a modification to the loss function was proposed (Faraggi and Simon, 1995). To remove the need for tuning the number of layers and nodes, the concept of extreme learning machines has been applied to a Cox neural network model (Wang and Li, 2019). As an alternative method that assumes a baseline hazard distribution, DeepHit specifies each survival time of interest in the model and directly estimates survival curves, rather than deriving a hazard function (Lee et al., 2018). It assumes an inverse Gaussian distribution as the baseline hazard, and it outperforms DeepSurv on the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) dataset (Curtis et al., 2012). Providing further flexibility, Deep Survival Machines (DSM) is a parametric survival model using neural networks with a mixture of distributions as the baseline hazard (Nagpal et al., 2021). On both the SUPPORT and METABRIC datasets, DSM outperforms DeepSurv and DeepHit (Nagpal et al., 2021). However, like DeepHit and DeepSurv, DSM cannot model time-varying interactions. We note that these alternative neural network approaches all require custom loss functions (Katzman et al., 2018) (Lee et al., 2018) (Nagpal et al., 2021). DeepHit introduces a hyperparameter weighing its two loss functions (negative log-likelihood and ranking losses), while DSM requires a two-phase learning process and user implementations for distributions beyond log-normal or Weibull (Lee et al., 2018) (Nagpal et al., 2021).
Regression-based approaches require prior specification of all interaction terms, which makes it challenging to model covariate effects that change over time. The current neural network models provide flexibility at the cost of opacity, while regression models provide clarity at the cost of flexibility. In this article, we propose Case-Base Neural Networks (CBNN) as a method that models time-varying interactions and a flexible baseline hazard using commonly available neural network components. Our approach to modeling the full hazard uses case-base sampling (Hanley and Miettinen, 2009). This sampling technique allows probabilistic models to predict survival outcomes. As part of the case-base framework, we use transformations of time as a feature (covariate) to specify different baseline hazards. For example, by including splines of time as covariates, we can approximate the Royston-Parmar flexible baseline hazard model (Royston and Parmar, 2002); however, this still requires explicit specification of time-varying interactions, whereas CBNN can model both without extra tuning parameters. In Section 2, we describe how case-base sampling and neural networks are combined, both conceptually and algebraically, along with our hyperparameter choices and software implementation. Section 3 describes our metrics and compares the performance of CBNN, DeepSurv, DeepHit, DSM, Cox regression and case-base sampling with logistic regression (CBLR) on simulated data. Section 4 describes the real-data analysis, while Section 5 explores the implications of our results and contextualizes them within neural-network survival analysis in a single-event setting. ## 2 Case-base neural network methodology, metrics and software In this section, we define case-base sampling, which converts the total survival time into discrete person-specific moments (person-moments). Then, we detail how neural networks can be used within this framework, explicitly incorporating time as a feature while adjusting for the sampling bias. Finally, we report the software versions used. An R package is available for use at [https://github.com/Jesse-Islam/cbnn](https://github.com/Jesse-Islam/cbnn). The entire code base to reproduce the figures and empirical results in this paper is available at [https://github.com/Jesse-Islam/cbnnManuscript](https://github.com/Jesse-Islam/cbnnManuscript). ### Case-base sampling Case-base sampling is an alternative framework for survival analysis (Hanley and Miettinen, 2009). In case-base sampling, we sample from the continuous survival time of each person in our dataset to create a _base series_ of _person-moments_. This _base series_ complements the _case series_, which contains all person-moments at which the event of interest occurs. For each person-moment sampled, let \(X_{i}\) be the corresponding covariate profile \((x_{i1},x_{i2},...,x_{ip})\), \(T_{i}\) be the time of the person-moment and \(Y_{i}\) be the indicator variable for whether the event of interest occurred at time \(T_{i}\). We estimate the hazard function \(h(t\mid X_{i})\) using the sampled person-moments. Recall that \(h(t\mid X_{i})\) is the instantaneous potential of experiencing the event at time \(t\) for a given set of covariates \(X_{i}\), assuming \(T_{i}\geq t\). Now, let \(b\) be the (user-defined) size of the _base series_ and let \(B\) be the sum of all follow-up times for the individuals in the study. If we sample the base series uniformly across the study base, then the hazard function of the sampling process is equal to \(b/B\).
Therefore, we have the following equality 1:

Footnote 1: We are abusing notation here, conflating hazards with probabilities. For a rigorous treatment, see Saarela and Hanley (2015), Section 3.

\[\frac{P\left(Y_{i}=1\mid X_{i},T_{i}\right)}{P\left(Y_{i}=0\mid X_{i},T_{i}\right)}=\frac{h\left(T_{i}\mid X_{i}\right)}{b/B}. \tag{1}\]

To provide some intuition for this equation, the hazard is the rate at which the event (\(Y_{i}=1\)) occurs at a given time, conditional on survival up to the follow-up time of interest. As we are looking at person-moments, we may retrieve the rate at which an event does not occur (\(Y_{i}=0\)) at a given time by taking the number of samples in \(b\) and dividing it by the total number of moments in the study base \(B\) at the current follow-up time. By taking the ratio of these two rates, we end up with the odds on the left-hand side: the odds of a person-moment being part of the _case series_ is the ratio of the hazard \(h(T_{i}\mid X_{i})\) and the uniform rate \(b/B\). Though an intuitive approach is used to describe this equation, a complete derivation can be found in Saarela and Hanley (2015). Sampling techniques are known to result in a loss of information. To address this concern, the original case-base methodology paper uses a measure for relative information when comparing two averages, based on the size of our base series and case series. If we let \(b=100c\), then the variances and covariances are expected to be proportional to \(\frac{1}{c}+\frac{1}{100c}\) rather than \(\frac{1}{c}+\frac{1}{\infty}\), about 1 percent larger. If there is a concern with a 1 percent inflation of these estimates, a larger ratio than 100 may be used. As there are concerns about dependency due to multiple person-moments coming from each individual, the information loss was studied by Saarela and Hanley (2015), who found that a ratio of 100:1 base series to case series gives an efficiency comparable to Mantel-Haenszel and conditional logistic regression methods.

\[\log\left(h\left(t\mid X_{i}\right)\right)=\log\left(\frac{P\left(Y_{i}=1\mid X_{i},t\right)}{P\left(Y_{i}=0\mid X_{i},t\right)}\right)+\log\left(\frac{b}{B}\right). \tag{2}\]

To estimate the correct hazard function, we adjust for the bias introduced when sampling a fraction of the study base \(B\) by adding an offset term \(\log\left(\frac{B}{b}\right)\), as reflected in (2). Next, we propose using neural networks to model the odds.

### Neural networks to model the hazard function

After case-base sampling, we pass all features, including time, into any user-defined feed-forward neural network, to which an offset term is added; the result is then passed through a sigmoid activation function (Figure 1). We use the sigmoid activation function as it is the inverse of the log-odds and is commonly available in popular neural network packages such as Keras and PyTorch (Allaire and Chollet, 2021; Paszke et al., 2019). The resulting probability can then be converted to a hazard.
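Concretely, the sampling step itself can be written in a few lines. The Python sketch below is our own minimal illustration (not the **casebase** package implementation); the column names and the helper `case_base_sample` are hypothetical, and it assumes one row per individual with a follow-up time, an event indicator and covariates:

```python
import numpy as np
import pandas as pd

def case_base_sample(df, time_col="time", event_col="event", ratio=100, seed=None):
    # Case series: the person-moments at which the event of interest occurs.
    cases = df[df[event_col] == 1].copy()
    cases["y"] = 1
    rng = np.random.default_rng(seed)
    B = df[time_col].sum()        # total person-time in the study base
    b = ratio * len(cases)        # base-series size, e.g. 100 moments per case
    # Uniform sampling over the study base: pick an individual with probability
    # proportional to follow-up time, then a uniform moment within that follow-up.
    idx = rng.choice(df.index, size=b, p=(df[time_col] / B).to_numpy())
    base = df.loc[idx].copy()
    base[time_col] = rng.uniform(0.0, base[time_col].to_numpy())
    base["y"] = 0
    out = pd.concat([cases, base], ignore_index=True)
    out["offset"] = np.log(B / b)  # the offset term log(B/b) described above
    return out
```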
The general form for the neural network using CBNN is:

\[P\left(Y=1\mid X,T\right)=\mathrm{sigmoid}\left(f_{\theta}(X,T)+\log\left(\frac{B}{b}\right)\right), \tag{3}\]

where \(T\) is a random variable representing the event time, \(X\) is the random variable for a covariate profile, \(f_{\theta}(X,T)\) represents any feed-forward neural network architecture, \(\log\left(\frac{B}{b}\right)\) is the offset term set by case-base sampling, \(\theta\) is the set of parameters learned by the neural network and \(\mathrm{sigmoid}(x)=\frac{1}{1+e^{-x}}\). By approximating a higher-order polynomial of time using a neural network, the baseline hazard specification is now data-driven, where user-defined hyperparameters such as regularization, number of layers and nodes control the flexibility of the hazard function. We provide a detailed description of the choices we made in the next sub-section. The following derivation shows how our probability estimate is converted to the hazard:

\[\begin{aligned}\log\left(h(t\mid X)\right)&=\log\left(\frac{\operatorname{sigmoid}\left(f_{\theta}(X,T)+\log\left(\frac{B}{b}\right)\right)}{1-\operatorname{sigmoid}\left(f_{\theta}(X,T)+\log\left(\frac{B}{b}\right)\right)}\right)+\log\left(\frac{b}{B}\right)\\ &=\log\left(\frac{\frac{\exp\left(f_{\theta}(X,T)+\log\left(\frac{B}{b}\right)\right)}{\exp\left(f_{\theta}(X,T)+\log\left(\frac{B}{b}\right)\right)+1}}{1-\frac{\exp\left(f_{\theta}(X,T)+\log\left(\frac{B}{b}\right)\right)}{\exp\left(f_{\theta}(X,T)+\log\left(\frac{B}{b}\right)\right)+1}}\right)+\log\left(\frac{b}{B}\right)\\ &=\log\left(\exp\left(f_{\theta}(X,T)+\log\left(\frac{B}{b}\right)\right)\right)+\log\left(\frac{b}{B}\right)\\ &=f_{\theta}(X,T)+\log\left(\frac{B}{b}\right)+\log\left(\frac{b}{B}\right)\\ &=f_{\theta}(X,T).\end{aligned}\]

We use binary cross-entropy as our loss function (Gulli and Pal, 2017):

\[L(\theta)=-\frac{1}{N}\sum_{i=1}^{N}\left[y_{i}\cdot\log(\hat{f}_{\theta}(x_{i},t_{i}))+(1-y_{i})\cdot\log(1-\hat{f}_{\theta}(x_{i},t_{i}))\right],\]

where \(\hat{f}_{\theta}(x_{i},t_{i})\) is our probability estimate for a given covariate profile and time, \(y_{i}\) is our target value specifying whether an event occurred and \(N\) represents the number of person-moments in our training set. Backpropagation with an appropriate minimization algorithm (e.g. Adam, RMSProp, stochastic gradient descent) is used to optimize the parameters in the model (Gulli and Pal, 2017). For our analysis, we use Adam as implemented in Keras (Gulli and Pal, 2017). Note that the size of the _case series_ is fixed as the number of events, but we can make the _base series_ as large as we want. A ratio of 100:1 _base series_ to _case series_ is sufficient (Hanley and Miettinen, 2009). We pass the output of our feed-forward neural network through a sigmoid activation function (Figure 1). Finally, we can convert this model output to a hazard. When using our model for predictions, we manually set the offset term to 0 in the new data, as we account for the bias during the fitting process. Since we are directly modeling the hazard, we can readily estimate the risk function (\(F\)) at time \(t\) for a covariate profile \(X\), viz.

\[F\left(t\mid X\right)=1-\exp\left(-\int\limits_{0}^{t}h(u\mid X)\,\mathrm{d}u\right). \tag{4}\]

We use a finite Riemann sum (Hughes-Hallett, Gleason, and McCallum, 2020) to approximate the integral in (4).
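As a concrete sketch of (3) and (4), a Keras implementation might look as follows. This is our own minimal illustration rather than the **cbnn** package code; the layer sizes follow Section 2.3, while the ReLU activations, the input layout (covariates concatenated with time) and the helper names are assumptions:

```python
import numpy as np
from tensorflow import keras

def build_cbnn(n_features):
    # f_theta(X, T) plus the case-base offset, passed through a sigmoid as in (3).
    x_t = keras.Input(shape=(n_features,), name="covariates_and_time")
    offset = keras.Input(shape=(1,), name="offset")  # log(B/b); set to 0 at prediction time
    h = x_t
    for units in (50, 50, 25, 25):                   # hidden layers from Section 2.3
        h = keras.layers.Dense(units, activation="relu")(h)
        h = keras.layers.Dropout(0.5)(h)
    f_theta = keras.layers.Dense(1)(h)               # f_theta(X, T)
    prob = keras.layers.Activation("sigmoid")(keras.layers.Add()([f_theta, offset]))
    model = keras.Model([x_t, offset], prob)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy")        # the loss L(theta) above
    return model

def predict_risk(model, x, t_grid):
    # F(t|X) = 1 - exp(-int h) via a finite Riemann sum, as in (4).
    # With the offset set to 0, the output p satisfies h = p / (1 - p).
    X = np.column_stack([np.tile(x, (len(t_grid), 1)), t_grid])
    p = model.predict([X, np.zeros((len(t_grid), 1))], verbose=0).ravel()
    hazard = p / (1 - p)
    dt = np.diff(t_grid, prepend=0.0)
    return 1 - np.exp(-np.cumsum(hazard * dt))
```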
Figure 1: Steps involved in CBNN from case-base sampling to the model framework we use for training. The first step is case-base sampling, completed before training begins. Next, we pass this sampled data through a feed-forward neural network. We add the offset and pass that through a sigmoid activation function, whose output is a probability. Once the neural network model completes its training, we can convert the probability output to a hazard, using it for our survival outcomes of interest.

### Hyperparameter selection

Neural networks are flexible when defining the architecture and optimization parameters. These hyperparameter decisions can affect the estimated parameters and were chosen during a set of initial simulations to determine if CBNN can learn complex interactions in practice. We set the maximum number of epochs to \(2000\), the batch size to \(512\), the learning rate to \(10^{-3}\), the decay to \(10^{-7}\), the patience to \(10\) epochs, \(\{50,50,25,25\}\) nodes in each hidden layer with \(50\%\) dropout at each layer, a minimum delta loss on the validation set of \(10^{-7}\) over 10 epochs and Adam [12] as our optimizer. These choices may permit the model to approximate higher-order interactions while preventing over-fitting [13]. We fix a train-validation-test split that allows us to update the weights with a subset of the data (training), assess performance at each epoch (validation) and gauge performance of the final model (test) for each method. We select the best weights after training based on validation loss [12].

### Software implementation

R [14] and Python [15] are used to evaluate methods from both languages. We fit the Cox model using the **survival** package [13], the CBLR model using the **casebase** package [16], the DeepSurv model using the **survivalmodels** package [17], the DeepHit model using **pycox** [18] and the DSM model using **DeepSurvivalMachines** [19]. We made the components of CBNN using the **casebase** package for the sampling step and the **keras** [1] package for our neural network architecture. The **simsurv** package [16] is used for our simulation studies, while **flexsurv** [1] is used to fit a flexible baseline hazard using splines for our complex simulation. We use the implementation of \(C_{IPCW}\) from the Python package **sksurv** [12]. The **riskRegression** package [16] is used to get the Index of Prediction Accuracy (IPA) metric. Both metrics are described in detail in the following section. We modify the **riskRegression** package to be used with any user-supplied risk function \(F\). To ensure that both R and Python-based models are running in unison on the same data through our simulations and bootstrap, we use the **reticulate** package [14].

## 3 Simulation studies

In this section, we use simulated data to evaluate the performance of CBNN and compare our approach with existing regression-based (Cox, CBLR) and neural network-based (DeepHit, DeepSurv, DSM) methods. We specify a linear combination of the covariates as the linear predictor in regression-based approaches (Cox, CBLR), which contrasts with neural network approaches that allow for non-linear interactions. We simulate data under a simple exponential model and a complex baseline hazard with time-varying interactions, each with 2000 individuals and 10% random censoring.
For both settings, we simulate three covariates:

\[z_{1}\sim\mathrm{Bernoulli}(0.5)\qquad z_{2}\sim\begin{cases}N(0,0.5)&\text{if }z_{1}=0\\ N(10,0.5)&\text{if }z_{1}=1\end{cases}\qquad z_{3}\sim\begin{cases}N(8,0.5)&\text{if }z_{1}=0\\ N(-3,0.5)&\text{if }z_{1}=1.\end{cases}\]

The DeepHit-specific hyperparameter alpha is set to \(0.5\) (equal weight between its negative log-likelihood and ranking losses [18]). We modify the **DeepSurvivalMachines** [19] package to include dropout and a minimum delta loss during the fitting process. For DSM, we define a mixture model of six Weibull distributions for the baseline hazard. All other hyperparameters are held constant across all neural network methods in both the simulation studies and real data applications. Besides the methods mentioned above, we include the Optimal model in our comparisons using CBLR. That is, we include the exact functional form of the covariates in a CBLR model (referred to as Optimal for simplicity). We calculate \(t\)-based 95% confidence intervals using 100 replications of the simulated data. For all analyses, we use 80% for training and 20% for the test set. 20% of the training set is kept for validation at each epoch. We predict risk functions \(F\) using (4) for individuals in the test set, which are used to calculate our \(C_{IPCW}\) and IPA scores.

### Performance metrics

We use two metrics to assess the performance of the different methods of interest: 1) the Index of Prediction Accuracy (IPA) [16] and 2) the inverse probability censoring weights-adjusted concordance index (\(C_{IPCW}\)) [10], which we define below. We may be interested in an absolute risk a specific number of years in the future. For the sake of comparison, we plot each metric over time \(t\) so that the performance is transparent. As we go further along follow-up time, we expect the performance of all models to drop as there are fewer individuals left in the study by then. For \(C_{IPCW}\), the score demonstrates performance up to time \(t\) [10], while IPA is performance at time \(t\).

#### 3.1.1 Index of prediction accuracy (IPA)

The IPA is a function of the Brier score (\(BS(t)\)) [11], which is defined as

\[BS(t)=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\left(1-\widehat{F}(t\mid X_{i})\right)^{2}\cdot I(T_{i}\leq t,\delta_{i}=1)}{\widehat{G}(T_{i})}+\frac{\widehat{F}(t\mid X_{i})^{2}\cdot I(T_{i}>t)}{\widehat{G}(t)}\right), \tag{5}\]

where \(\delta_{i}=1\) indicates individuals who have experienced the event, \(N\) represents the number of samples in our dataset over which we calculate \(BS(t)\), \(\widehat{G}(t)=P[c>t]\) is a non-parametric estimate of the censoring distribution, \(c\) is the censoring time and \(T_{i}\) is an individual's survival or censoring time. The Brier score accounts for the information loss due to censoring. There are three categories of individuals that may appear within the dataset once we fix our \(t\) of interest. Individuals who experienced the event before \(t\) are present in the first term of the equation. The second term of the equation includes individuals who experience the event or are censored after \(t\). Those censored before \(t\) are the third category of people. The inverse probability censoring weights (IPCW) adjustment (\(\widehat{G}(\cdot)\)) accounts for these category-three individuals whose information is missing.
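For intuition, (5) can be computed directly once \(\widehat{F}\) and \(\widehat{G}\) are available. The following NumPy sketch is our own illustration (not the **riskRegression** implementation); `G_hat` is assumed to be a callable Kaplan-Meier estimate of the censoring distribution:

```python
import numpy as np

def brier_score(t, F_hat, times, events, G_hat):
    # F_hat: predicted risks F(t | X_i); times, events: observed T_i and delta_i.
    # First term: individuals who experienced the event by time t.
    term1 = (1 - F_hat) ** 2 * ((times <= t) & (events == 1)) / G_hat(times)
    # Second term: individuals still at risk (event or censoring after t).
    term2 = F_hat ** 2 * (times > t) / G_hat(t)
    return np.mean(term1 + term2)
```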
The IPA score as a function of time is given by

\[\mathrm{IPA}(t)=1-\frac{BS_{model}(t)}{BS_{null}(t)},\]

where \(BS_{model}(t)\) represents the Brier score over time \(t\) for the model of interest and \(BS_{null}(t)\) represents the Brier score if we use an unadjusted Kaplan-Meier (KM) curve as the prediction for all observations [10]. Note that IPA has an upper bound of one, where positive values show an increase in performance over the null model and negative values show that the null model performs better. These scores show how performance changes over follow-up time. A potential artifact of IPA is that the score is unstable at earlier and later survival times. This is because the Brier scores of each model and the null model are nearly equivalent there. At small values, a difference of \(0.1\) creates a larger fold change than at larger values; as the Brier score is potentially small at the start and at the end of the curves, the IPA score may be unstable at the same locations.

#### 3.1.2 Inverse probability censoring weights-adjusted concordance index

The \(C_{IPCW}\) is a non-proper, rank-based metric that does not depend on the censoring times in the test data [12]. The \(C_{IPCW}\) is given by

\[C_{IPCW}(t)=\frac{\sum_{i=1}^{N}\sum_{j=1}^{N}\delta_{i}\left\{\widehat{G}(T_{i})\right\}^{-2}I(T_{i}<T_{j},T_{i}<t)I\left(\widehat{F}(t\mid X_{i})>\widehat{F}(t\mid X_{j})\right)}{\sum_{i=1}^{N}\sum_{j=1}^{N}\delta_{i}\left\{\widehat{G}(T_{i})\right\}^{-2}I(T_{i}<T_{j},T_{i}<t)}, \tag{6}\]

where the follow-up period of interest is \((0,t)\), \(I(\cdot)\) is an indicator function and \(\widehat{F}(t\mid X_{i})\) is the risk function estimated for everyone in the study at time \(t\). \(C_{IPCW}\) can compare the performance of different models, where a higher score is better. Note that the \(C_{IPCW}\) may produce misleading performance, as it ranks based on survival times, not event status [1]. This metric is considered an unbiased population concordance measure because of the IPCW adjustment [12].

### Simple simulation: constant baseline hazard

We simulate data from a simple model that primarily depends on a constant baseline hazard:

\[\log h(t\mid X_{i})=\beta_{1}z_{1}+\beta_{2}z_{2}+\beta_{3}z_{3},\]

where the covariate effects are given by \(\beta_{1}=0.1,\beta_{2}=0.1,\beta_{3}=0.1\). Once we simulate survival times, we introduce 10% random censoring.

#### 3.2.1 Performance comparison in simple simulation

Figure 2 A, B and Table 1 A show the results for the simple simulation. The regression-based methods (CBLR, Cox, Optimal) outperform the neural network ones in the simple simulation setting. Among the neural network approaches, CBNN outperforms all other methods in terms of both IPA and \(C_{IPCW}\) (Figure 2 A, B). Specifically, we see CBNN is consistent across time with smaller confidence intervals compared to DeepHit, DeepSurv and DSM. In this simple setting, the regression models are much closer to the Optimal model, while the neural network models perform worse than the KM null model. The wide confidence bands in \(C_{IPCW}\) suggest the neural network models may be over-parameterized.

### Complex simulation: flexible baseline hazard, time-varying interactions

This simulation demonstrates performance in the presence of a complex baseline hazard and a time-varying interaction.
The breast cancer dataset available in the **flexsurv** R package (Jackson, 2016), originally used to demonstrate the spline-based hazard model proposed by Royston and Parmar (2002), provides a complex hazard from which we simulate. To increase the complexity of our data-generating mechanism for this simulation, we design the model as follows:

\[\log h(t\mid X_{i})=\sum_{i=1}^{5}(\gamma_{i}\cdot\psi_{i})+\beta_{1}(z_{1})+\beta_{2}(z_{2})+\beta_{3}(z_{3})+\tau_{1}(z_{1}\cdot z_{2}\cdot t)+\tau_{2}(z_{1}\cdot z_{3})+\tau_{3}(z_{2}\cdot z_{3}),\]

where \(\gamma_{1}=3.9,\gamma_{2}=3,\gamma_{3}=-0.43,\gamma_{4}=1.33,\gamma_{5}=-0.86,\beta_{1}=1,\beta_{2}=1,\beta_{3}=1,\tau_{1}=10,\tau_{2}=2,\tau_{3}=2\) and the \(\psi_{i}\) are basis splines. The \(\gamma\) coefficients are obtained from an intercept-only cubic splines model with three knots using the _flexsurvspline_ function from the **flexsurv** package (Jackson, 2016). Note that we fix these values for the analysis. The \(\beta\) coefficients represent direct effects, \(\tau_{2}\) and \(\tau_{3}\) represent interactions and \(\tau_{1}\) is a time-varying interaction.

#### 3.3.1 Performance comparison in complex simulation

Figure 2 C, D and Table 1 B show the performance over time on a test set in the complex simulation. Apart from the Optimal regression model, CBNN outperforms the competing models when examining IPA and, for \(C_{IPCW}\), up to the 75th percentile of follow-up time. We expected the Optimal model to perform best in both metrics. However, this was not the case for \(C_{IPCW}\), which may be due to an artifact of concordance-based metrics, where a misspecified model may perform better than a correctly specified one (Blanche et al., 2019). We attribute the performance of CBNN to its flexibility in modeling time-varying interactions and the baseline hazard, flexibility the other neural network models do not have.

Figure 2: Summarizes the simple simulation (A, B), complex simulation (C, D), SUPPORT case study (E, F) and METABRIC case study (G, H) results. The first row shows the IPA score for each model in each study over follow-up time. Negative values mean our model performs worse than the null model and positive values mean the model performs better. The second row demonstrates the \(C_{IPCW}\) score for each model in each study over follow-up time. A score of 1 is the maximum performance for either metric. Each model-specific metric in each study shows a 95-percent confidence interval over 100 iterations. The models of interest are case-base with logistic regression (CBLR), Case-Base Neural Networks (CBNN), Cox, DeepHit, DeepSurv, Deep Survival Machines (DSM), Optimal (a CBLR model with the exact interaction terms and baseline hazard specified) and Kaplan-Meier (to serve as a baseline, predicting the average for all individuals). Note that Cox and CBLR are superimposed, due to very similar prediction outcomes.
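Like the Brier score, the estimator in (6) admits a direct, if naive, \(O(N^{2})\) implementation. The sketch below is our own illustration under the same assumptions as the Brier score sketch above, not the **sksurv** implementation:

```python
import numpy as np

def c_ipcw(t, F_hat, times, events, G_hat):
    # Direct translation of (6): loop over comparable pairs (T_i < T_j, T_i < t).
    num = den = 0.0
    for i in range(len(times)):
        if events[i] == 1 and times[i] < t:   # delta_i = 1 and T_i < t
            w = G_hat(times[i]) ** -2         # IPCW weight {G(T_i)}^{-2}
            for j in range(len(times)):
                if times[i] < times[j]:
                    den += w
                    num += w * (F_hat[i] > F_hat[j])
    return num / den
```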
\begin{table}
\begin{tabular}{c|c c c c|c c c c}
\hline \hline
\multicolumn{1}{c|}{**A: Simple**} & \multicolumn{4}{c|}{IPA} & \multicolumn{4}{c}{\(C_{IPCW}\)} \\
\hline
Method & 25 & 50 & 75 & 100 & 25 & 50 & 75 & 100 \\
\hline
Cox & **0.05 (0.05,0.06)** & **0.07 (0.07,0.08)** & **0.08 (0.07,0.08)** & **0.07 (0.06,0.08)** & **0.0 (0.39,0.6)** & **0.0 (0.39,0.6)** & **0.0 (0.39,0.6)** & **0.0 (0.59,0.6)** \\
CBLR & **0.05 (0.05,0.06)** & **0.07 (0.07,0.08)** & **0.08 (0.07,0.08)** & **0.07 (0.06,0.08)** & **0.0 (0.59,0.6)** & **0.0 (0.59,0.6)** & **0.0 (0.59,0.6)** & **0.0 (0.59,0.6)** \\
DeepSurv & -0.26 (0.27,0.25) & -0.36 (0.38,0.34) & -0.22 (0.24,0.21) & 0 (0.01,0) & 0.51 (0.50,0.52) & 0.51 (0.50,0.52) & 0.51 (0.50,0.52) & 0.51 (0.50,0.52) \\
DeepHit & -0.03 (0.04,0.02) & -0.05 (0.07,0.04) & -0.03 (0.04,0.02) & -0.01 (0.01,0) & 0.52 (0.51,0.52) & 0.52 (0.51,0.52) & 0.52 (0.51,0.53) & 0.50 (0.49,0.5) \\
CBNN & -0.03 (0.04,0.03) & -0.04 (0.05,0.04) & -0.04 (0.05,0.03) & -0.03 (0.04,0.03) & 0.54 (0.54,0.55) & 0.54 (0.54,0.55) & 0.54 (0.54,0.55) & 0.54 (0.54,0.55) \\
Optimal & **0.05 (0.05,0.06)** & **0.07 (0.07,0.08)** & **0.08 (0.07,0.08)** & **0.07 (0.06,0.08)** & **0.0 (0.59,0.6)** & **0.0 (0.59,0.6)** & **0.0 (0.59,0.6)** & **0.0 (0.59,0.6)** \\
DSM & -0.28 (0.3,0.27) & -0.4 (0.43,0.37) & -0.28 (0.3,0.25) & 0.01 (0.01,0.02) & 0.55 (0.54,0.56) & 0.55 (0.54,0.56) & 0.55 (0.55,0.56) \\
\hline \hline
\multicolumn{1}{c|}{**B: Complex**} & \multicolumn{4}{c|}{IPA} & \multicolumn{4}{c}{\(C_{IPCW}\)} \\
\hline
Method & 25 & 50 & 75 & 100 & 25 & 50 & 75 & 100 \\
\hline
Cox & 0.55 (0.54,0.56) & 0.45 (0.44,0.46) & -0.19 (0.21,0.17) & -0.51 (0.54,0.48) & 0.77 (0.76,0.79) & 0.74 (0.72,0.76) & 0.56 (0.54,0.58) & 0.54 (0.52,0.56) \\
CBLR & 0.53 (0.52,0.54) & 0.45 (0.44,0.46) & -0.15 (0.16,0.13) & 0.46 (0.48,0.43) & 0.79 (0.78,0.79) & 0.79 (0.78,0.79) & 0.79 (0.78,0.79) & **0.78 (0.76,0.79)** \\
DeepSurv & 0.03 (0.01,0.05) & 0.06 (0.05,0.07) & -0.27 (0.29,0.25) & -0.04 (0.07,0.02) & 0.68 (0.56,0.7) & 0.68 (0.56,0.7) & 0.68 (0.66,0.7) & 0.68 (0.66,0.7) \\
DeepHit & 0.26 (0.22,0.31) & 0.07 (0.06,0.00) & -0.06 (0.08,0.04) & 0.05 (0.04,0.06) & 0.72 (0.70,7.04) & 0.65 (0.62,0.67) & 0.37 (0.33,0.4) & 0.35 (0.33,0.30) \\
CBNN & 0.83 (0.81,0.84) & 0.69 (0.67,0.71) & 0.49 (0.45,0.52) & 0.56 (0.53,0.59) & **0.86 (0.85,0.86)** & **0.87 (0.87,0.88)** & **0.8 (0.79,0.81)** & 0.67 (0.65,0.68) \\
Optimal & **0.94 (0.94,0.95)** & **0.9 (0.9,0.91)** & **0.84 (0.84,0.85)** & **0.85 (0.84,0.85)** & 0.74 (0.74,0.75) & 0.82 (0.82,0.82) & 0.63 (0.63,0.63) & 0.57 (0.56,0.57) \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Four tables representing performance at certain follow-up times for the simple simulation, the complex simulation, SUPPORT and METABRIC. Each table shows performance for each method in each study at \(25\%\), \(50\%\), \(75\%\) and \(100\%\) of follow-up time. The bold elements show the best model for each study at each follow-up time of interest. These tables are included to provide exact measures at certain intervals. The models of interest are: Cox, case-base with logistic regression (CBLR), DeepSurv, DeepHit, Case-Base Neural Network (CBNN), Optimal and Deep Survival Machines (DSM).

## 4 Application to SUPPORT and METABRIC Data

Our complex simulation demonstrates the superior performance of CBNN in ideal conditions with clean data.
To obtain a more realistic performance assessment, we compared models using two real datasets with a time-to-event outcome. The first case study examines the SUPPORT dataset (Knaus et al., 1995). The second case study examines the METABRIC dataset (Curtis et al., 2012). We use the same hyperparameters as in the simulation studies. As we do not know the true model for the real data, we exclude the Optimal model. We split the datasets keeping 20% of the observations as a test set. 20% of the training set is kept aside for validation at each epoch. We predict risk functions for everyone in the test set, which are used to calculate our metrics. We conduct 100 bootstrap re-samples for the real data applications to obtain confidence intervals.

### Performance evaluation using the SUPPORT dataset

The SUPPORT dataset tracks the time until death for seriously ill patients at five American hospitals (Knaus et al., 1995). We use a pre-processed version of the dataset made available in the DeepSurv package (Katzman et al., 2018). This dataset contains 9104 samples and 14 covariates (age, sex, race, number of comorbidities, presence of diabetes, presence of dementia, presence of cancer, mean arterial blood pressure, heart rate, respiration rate, temperature, white blood cell count, serum sodium and serum creatinine) (Katzman et al., 2018). Patients with missing features were excluded and 68.10% of the patients died during the 5.56-year study period (Katzman et al., 2018). Figure 2 E, F and Table 1 C demonstrate the performance over time on a test set. The regression models (CBLR, Cox) perform best considering IPA, followed by CBNN from the \(25^{th}\) to \(100^{th}\) percentile of follow-up time. We note a drop in performance for CBNN before the \(25^{th}\) percentile of follow-up. For \(C_{IPCW}\), CBNN outperforms the competing models consistently over follow-up time. Note that performance is similar for all models, aside from DeepSurv, whose \(C_{IPCW}\) is lower than the rest (Figure 2 E, F and Table 1 C).

### Performance evaluation using the METABRIC dataset

METABRIC is a 30-year-long study aiming to discover the molecular drivers of breast tumors, following 2000 individuals with breast cancer until death (Curtis et al., 2012). They described these growths as primary invasive breast carcinomas, with the goal of discovering both genetic and clinical risk factors for breast cancer survival (Curtis et al., 2012). We used the processed dataset made available through DeepSurv (Katzman et al., 2018), which includes 1980 patients, of which 57.72% die due to breast cancer within a median 10 years of follow-up (Katzman et al., 2018). There are 10 covariates in total: time, 4 RNA-Seq gene expressions (MKI67, EGFR, PGR and ERBB2) and 5 clinical features (hormone treatment indicator, radiotherapy indicator, chemotherapy indicator, ER-positive indicator and age at diagnosis) (Katzman et al., 2018). Figure 2 G, H and Table 1 D show the performance on a test set over time. The IPA scores suggest that regression models outperform competing models on this dataset, as all the neural network models are equal to or perform worse than KM over follow-up time. Our CBNN model is comparable to KM until around the 50th percentile of follow-up time, after which CBNN, DeepSurv and DSM drop in performance. The \(C_{IPCW}\) produces a different ranking.
Our CBNN model outperforms the other models up to the \(25^{th}\) percentile of follow-up time, whereas DeepHit performs best from the \(50^{th}\) to \(75^{th}\) percentile of follow-up. With this metric, CBNN and DeepHit outperform the regression models. This disagreement between IPA and \(C_{IPCW}\) may be due to the misspecification issue of concordance-based metrics (Blanche et al., 2019). The neural network models may be over-parameterized, as shown by the wide confidence bands in \(C_{IPCW}\).

## 5 Discussion

CBNN models survival outcomes by using neural networks on case-base sampled data. We incorporate follow-up time as a feature, providing a data-driven estimate of a flexible baseline hazard and time-varying interactions in our hazard function. The three competing neural network models we evaluated cannot model time-varying interactions by design (Nagpal et al., 2021; Katzman et al., 2018; Lee et al., 2018). DSM requires a mixture of component distributions to fit a flexible baseline hazard (Nagpal et al., 2021). As our goal is to fix the design and compare performance, we did not change any shared hyperparameters. With our choice of shared hyperparameters and model design, DSM did not converge in the complex simulation and the SUPPORT case study. Despite this limitation, we include this method whenever it did converge, as it can fit flexible baseline hazards. Compared to CBNN, the remaining two models also have limitations. DeepSurv is a proportional hazards model and does not estimate the baseline hazard (Katzman et al., 2018). DeepHit requires an alpha hyperparameter, is restricted to a single distribution for the baseline hazard and models the survival function directly (Lee et al., 2018). The alternative neural network methods match on time, while CBNN models time directly. To assess performance among these models, we use both the IPA and \(C_{IPCW}\) metrics. Concordance-based measures are commonly used in survival analysis to compare models and we opt to keep them in our analyses. However, \(C_{IPCW}\) is a non-proper metric and may cause misspecified models to appear better than they should (Blanche et al., 2019). Therefore, we contextualize our \(C_{IPCW}\) results in relation to IPA, a proper scoring rule. The model rankings between \(C_{IPCW}\) and IPA differed for both the complex simulation and the METABRIC application. Wider \(C_{IPCW}\) confidence intervals for DeepHit and DeepSurv show potential misspecification of the models (Figure 2 D, H). With this interpretation of \(C_{IPCW}\) in mind, we assess two simulations with minimal noise and two case studies in real scenarios. The simple simulation demonstrates potential pitfalls associated with neural network models, particularly overparameterization. All neural network-based approaches performed worse than the null KM model (IPA score), while the regression-based approaches performed better. We attribute this to potential overparameterization in the neural network models, as the wide confidence intervals suggest over-fitting, even with dropout. Both DeepSurv and DSM are affected, while DeepHit and CBNN are less so. This is a limitation of our strategy for evaluating the methods, which uses a fixed study design across all assessments. From this baseline benchmark, we move on to a complex simulation that requires a method that can learn time-varying interactions and has a flexible baseline hazard. Here, CBNN demonstrates a distinct advantage over all other methods.
The regression models show improved performance over the null KM model, while the competing neural network models perform worse. Based on our complex simulation results (Figure 2 C, D and Table 1 B), CBNN outperforms the competitors when time-varying interactions and a complex baseline hazard are present. This simulation shows how CBNN can perform under ideal conditions, while the following two analyses on real data serve to assess its performance in realistic conditions. In the SUPPORT and METABRIC case studies, flexibility in both interaction modeling and the baseline hazard did not improve CBNN's relative performance, suggesting that this flexibility does not aid prediction in either case study. From a biological perspective, the 30-year follow-up time in the METABRIC study may contain competing causes of death. The causes at the start of the study may not match the causes towards the end, potentially explaining the drop in performance as we reach later survival times with fewer individuals. The baseline hazard is unlikely to cause this drop, as DSM is the most flexible competing model in our comparison. Over-fitting is also unlikely given the tight confidence intervals. Further research is required to determine the cause of the drop in performance seen in the METABRIC case study. In both case studies, CBNN outperforms the competing neural network methods. The way neural networks are used in this paper is not directly interpretable. This is by design, as we only wish to compare to competitors based on predictive ability. All feed-forward neural network models can make use of different architectures to improve performance or interpretability. While a deep neural network as used in this paper is not directly interpretable, techniques such as Local Interpretable Model-Agnostic Explanations (LIME) (Ribeiro, Singh, and Guestrin, 2016) may be used to interpret the relevance of each feature to a given prediction. An example of architectural interpretability is a convolutional neural network for images, which provides positional information about the relevant pixels in an image (Allaire and Chollet, 2021). Note that neither of these is specific to CBNN. As CBNN is concerned with learning time-varying interactions, a concern may be interpreting which time-varying interactions have been learned. Other secondary methods applied after the fitting process may be used to retrieve said information ([https://ui.adsabs.harvard.edu/abs/2017arXiv1705049777/abstract](https://ui.adsabs.harvard.edu/abs/2017arXiv1705049777/abstract)).

## 6 Conclusions

CBNN outperforms all competitors in the complex simulation, demonstrating its value in survival settings that may involve time-varying interactions and a complex baseline hazard. Once we perform case-base sampling and adjust for the sampling bias, we can use a sigmoid activation function to predict our hazard function. Our approach simplifies the incorporation of censored individuals, allowing survival outcomes to be treated as binary ones. Forgoing the requirement of custom loss functions, CBNN only requires the use of standard components in machine learning libraries (specifically, an add layer to adjust for sampling bias and the sigmoid activation function). Due to the simplicity of its implementation, and by extension the user experience, CBNN is both a user-friendly approach to data-driven survival analysis and easily extendable to any feed-forward neural network framework.
### Data and code availability statement

The pre-processed data for the SUPPORT case study can be found at [https://github.com/jaredleekatzman/DeepSurv/tree/master/experiments/data/support](https://github.com/jaredleekatzman/DeepSurv/tree/master/experiments/data/support). The pre-processed data for the METABRIC case study can be found at [https://github.com/jaredleekatzman/DeepSurv/tree/master/experiments/data/metabric](https://github.com/jaredleekatzman/DeepSurv/tree/master/experiments/data/metabric). The data is accessed using [https://github.com/havakv/pycox](https://github.com/havakv/pycox). The code for this manuscript and its analyses can be found at [https://github.com/Jesse-Islam/cbnnManuscript](https://github.com/Jesse-Islam/cbnnManuscript). The software package making CBNN easier to use can be found at [https://github.com/Jesse-Islam/cbnn](https://github.com/Jesse-Islam/cbnn).

## Acknowledgements

We would like to thank Dr. James Meigs, the project leader of these awards, for his support and helpful discussions. The work was also supported as part of the Congressionally Directed Medical Research Programs (CDMRP) award W81XWH-17-1-0347. We would also like to thank Dr. James Hanley for his support and discussions while extending the case-base methodology.

## Financial disclosure

This work was supported by subcontracts from UM1DK078616 and R01HL151855 to Dr. Rob Sladek.

## Conflict of interest

The authors declare no potential conflict of interest.
2310.02774
Graph Neural Networks and Time Series as Directed Graphs for Quality Recognition
Graph Neural Networks (GNNs) are becoming central in the study of time series, coupled with existing algorithms such as Temporal Convolutional Networks and Recurrent Neural Networks. In this paper, we see time series themselves as directed graphs, so that their topology encodes time dependencies, and we start to explore the effectiveness of GNN architectures on them. We develop two distinct Geometric Deep Learning models, a supervised classifier and an autoencoder-like model for signal reconstruction. We apply these models on a quality recognition problem.
Angelica Simonetti, Ferdinando Zanchetta
2023-10-04T12:43:38Z
http://arxiv.org/abs/2310.02774v1
# Graph Neural Networks and Time Series as Directed Graphs for Quality Recognition

###### Abstract

Graph Neural Networks (GNNs) are becoming central in the study of time series, coupled with existing algorithms such as Temporal Convolutional Networks and Recurrent Neural Networks. In this paper, we see time series themselves as directed graphs, so that their topology encodes time dependencies, and we start to explore the effectiveness of GNN architectures on them. We develop two distinct Geometric Deep Learning models, a supervised classifier and an autoencoder-like model for signal reconstruction. We apply these models on a quality recognition problem.

## 1 Temporal Convolutional Networks

Convolutional neural networks (CNNs, see [14], [15], [16], [12], [13]) are deep learning algorithms employing so called _convolutional layers_: these are layers that are meant to be applied on grid-like data, e.g. images. For data organized in sequences, 1d CNNs were developed ([10], [11]) and, more recently, TCNs have become popular in the study of time series (see [1] and the references therein). Throughout the paper, \(\mathrm{TS}(r,m)\) will denote the set of multivariate time series with \(m\) channels and length \(r\) in the temporal dimension. Given \(\mathbf{x}\in\mathrm{TS}(r,m)\) we will denote as \(\mathbf{x}(i)_{j}\) (or simply as \(\mathbf{x}_{ij}\) when no confusion is possible), for \(i=1,...,r\) and \(j=1,...,m\), the \(j\)-th coordinate of the vector \(\mathbf{x}(i)\in\mathbb{R}^{m}\). For a given natural number \(n\), we shall denote as \([n]\) the ordered set \((1,...,n)\). Now, recall that given a filter \(K\in\mathbb{R}^{f}\), we can define a one-channel, one-dimensional (1D) convolution as an operator

\[\mathrm{conv1D}:\mathrm{TS}(r,1)\rightarrow\mathrm{TS}(l,1)\]
\[\mathrm{conv1D}(\mathbf{x})_{j}=\sum_{i=1}^{f}K_{i}\mathbf{x}_{\alpha(j,i)}+b_{j}\]

where \(\alpha(j,-):[f]\rightarrow\mathbb{Z}\) are injective index functions, \(\mathbf{x}_{i}:=0\) if \(i\notin[r]\) and \(b\in\mathbb{R}^{l}\) is a bias vector. The numbers \(K_{i}\) are called the _parameters_ or _weights_ of the convolution. The most commonly used index functions are of the form \(\alpha(j,i)=(n+d\cdot i)+j\) for some integers \(n,d\). As a consequence, from now on we shall assume that the one-dimensional convolutions we consider have this form. If \(\alpha(j,i)\leq j\) for all \(i,j\), then the convolution is said to be _causal_, as it will look only 'backward'. These are the building blocks of TCNs, which are CNNs where only causal convolutions appear. If \(|d|>1\), the convolution is called _dilated_. One could define multi-channel (i.e. handling multivariate time series) 1d convolutions in two steps. First, we define convolutions taking a multivariate time series to a univariate time series as operators \(\mathrm{conv}:\mathrm{TS}(r,n)\rightarrow\mathrm{TS}(l,1)\), \(\mathrm{conv}(\mathbf{x})_{i}=\sum_{j=1}^{n}\mathrm{conv1D}_{j}(\mathbf{x}(-)_{(j)})\), where the \(\mathrm{conv1D}_{j}\) are one-channel, one-dimensional convolutions. Then we can define 1d convolutions transforming multivariate time series into multivariate time series as operators \(\mathrm{conv}:\mathrm{TS}(r,n)\rightarrow\mathrm{TS}(l,m)\) that are multi-channel 1d convolutions when co-restricted to each non-temporal dimension of the output.
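For concreteness, a causal dilated convolution with index function \(\alpha(j,i)=(n+d\cdot i)+j\) and a negative shift \(n\) can be implemented by padding the sequence only on the left. The following PyTorch sketch is our own illustration, not code from the papers cited above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Dilated causal 1d convolution: the output at time j depends only on
    inputs at times <= j, i.e. alpha(j, i) <= j for every tap i of the filter."""

    def __init__(self, in_channels, out_channels, kernel_size, dilation=1):
        super().__init__()
        self.left_pad = dilation * (kernel_size - 1)
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                              dilation=dilation)

    def forward(self, x):                    # x: (batch, channels, length)
        x = F.pad(x, (self.left_pad, 0))     # pad the temporal axis on the left only
        return self.conv(x)
```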
The usefulness of TCNs in the context of time series arises from the fact that causal convolutions by design are able to exploit temporal dependencies, while not suffering from some of the algorithmic problems of RNNs such as LSTMs: for example, they appear to be faster to train and more scalable (see [1] for a discussion).

## 2 Time series as Directed Graphs

### Generalities.

**Definition 2.1**.: A _directed graph_ (_digraph_) \(G\) is the datum \(G=(V_{G},E_{G},h_{G},t_{G})\) of two sets \(V_{G}\) (the _set of vertices_), \(E_{G}\) (the _set of edges_) and two functions \(h_{G},t_{G}:E_{G}\to V_{G}\) associating to each edge \(e\) its _head_ \(h_{G}(e)\) and its _tail_ \(t_{G}(e)\) respectively. A morphism \(\varphi:G\to H\) between two digraphs \(G\) and \(H\) is the datum of two functions \(\varphi_{V}:V_{G}\to V_{H}\), \(\varphi_{E}:E_{G}\to E_{H}\) such that \(h_{H}\circ\varphi_{E}=\varphi_{V}\circ h_{G}\) and \(t_{H}\circ\varphi_{E}=\varphi_{V}\circ t_{G}\).

From now on, for simplicity we will assume that our digraphs have at most one edge connecting two different nodes (for each direction) and at most one self loop for each node. In this case, given a digraph \(G=(V_{G},E_{G},h_{G},t_{G})\) and an ordering of the vertices \((v_{i})_{i\in[|V_{G}|]}\), we can define the _adjacency matrix of \(G\)_ as the matrix defined by \(A_{ij}=1\) if there exists an edge having as tail \(v_{i}\) and head \(v_{j}\), and \(A_{ij}=0\) otherwise. If the adjacency matrix of a graph is symmetric, we say that our graph is _undirected_. We can assign _weights_ to the edges of a graph by letting the entries of the adjacency matrix be arbitrary real numbers. When speaking about weighted digraphs, we shall always assume that there is an edge having as tail \(v_{i}\) and head \(v_{j}\) if and only if \(A_{ij}\neq 0\). We speak in this case of _weighted digraphs_.

**Definition 2.2**.: A _digraph with features of dimension \(n\)_ is the datum \((G,F_{G}=(h_{v})_{v\in V_{G}})\) of a (weighted or unweighted) digraph \(G\) and vectors \(h_{v}\in\mathbb{R}^{n}\) of _node features_ for each vertex \(v\in V_{G}\). For a given digraph \(G\), we shall denote as \(\operatorname{Feat}(G,n)\) the set of all digraphs with features of dimension \(n\) having \(G\) as underlying digraph.

Real-world graph-structured datasets usually come in the form of one or more digraphs with features. Given a digraph with features \((G,F_{G}=(h_{v})_{v\in V_{G}})\) and a digraph morphism \(\varphi:H\to G\), we can pull back the features of \(G\) to obtain a graph with features \((H,\varphi^{*}F_{G}=(h_{\varphi_{V}(v)})_{v\in V_{H}})\). This defines a function \(\varphi^{*}:\operatorname{Feat}(G,n)\to\operatorname{Feat}(H,n)\). Graph Neural Networks (GNNs, [23]) are models that are used on graph-structured data using as building blocks the so called _graph convolutions_ ([4], [6]): given a graph, they update each node feature vector combining the information contained in the feature vectors of adjacent nodes. In general, a graph convolution is a function \(\operatorname{gconv}:\operatorname{Feat}(G,n)\to\operatorname{Feat}(G,m)\) that is permutation invariant in a sense that we shall make precise below.
Graph convolutions update the node features of a digraph using a _message passing mechanism_ that can be written in the following general form:

\[h^{\prime}_{v_{i}}=\sigma(\psi(h_{v_{i}},\oplus_{v_{j}\in N^{\alpha}(v_{i})}\varphi(h_{v_{i}},h_{v_{j}},A_{ij},A_{ji}))) \tag{2.3}\]

where \(\sigma\) is an activation function, \(\alpha\in\{h,t,u\}\), \(N^{h}(v_{i})=\{v_{j}\in V_{G}\mid A_{ji}\neq 0\}\), \(N^{t}(v_{i})=\{v_{j}\in V_{G}\mid A_{ij}\neq 0\}\), \(N^{u}(v_{i})=N^{h}(v_{i})\cup N^{t}(v_{i})\), \(\oplus\) denotes a permutation invariant function and \(\psi\), \(\varphi\) denote differentiable functions (weaker regularity assumptions can be made). Many popular message passing mechanisms are a particular case of the following one:

\[h^{\prime}_{v_{i}}=\sigma(\sum_{v_{j}\in N^{\alpha}(v_{i})}c_{f^{\alpha}(i,j)}A_{f^{\alpha}(i,j)}Wh_{v_{j}}+l_{i}A_{ii}Bh_{v_{i}}) \tag{2.4}\]

Here \(\sigma\) is an activation function, \(f^{\alpha}(i,j)=(i,j)\) if \(v_{j}\in N^{t}(v_{i})\) and \((j,i)\) if \(v_{j}\in N^{h}(v_{i})\), \(c_{ij}\), \(l_{i}\) are optional normalization coefficients and \(W,B\) are matrices of weights. For digraphs, the choice of \(\alpha\) should be thought of as whether a node embedding should be updated by looking at the information of the nodes that are sending to it a signal, by looking at the information of the nodes that are receiving from it a signal, or both. These are three different semantics, all justifiable depending on the problem at hand. Two important graph convolutions arise as particular cases of the previous formula: Kipf and Welling's graph convolution for undirected graphs (see [9]) and the GraphSage convolution ([7]) as it appears in the popular package PyTorch Geometric ([17], [5]). Notice that message passing mechanisms as in (2.3) are _permutation invariant_ in the sense that they do not depend on the ordering given to the set of vertices and only depend on the topology of the underlying graph structure. We remark that in [9] and [7] the above convolutions are described only for undirected graphs, but the formulas also make sense for digraphs. In fact, the standard implementations used by practitioners are already capable of handling digraphs and are being used also in that context. For example, some papers introducing attention mechanisms (e.g. GAT, see [22]) explicitly introduce this possibility. However, in the digraph case the underlying mathematical theory is somewhat less mature (see [20] and the references therein; for the roots of the mathematics behind the directed graph Laplacian the reader is referred to [2]).

### Graph Neural Networks for time series as directed graphs.

There are many ways to turn time series into digraphs with features. To start with, we introduce two basic examples.

**Example 1:** A multivariate time series \(\mathbf{x}\in\mathrm{TS}(n,m)\cong\mathbb{R}^{n\times m}\) can be seen as an unweighted digraph with features as follows. To start with, we consider a set of \(n\times m\) nodes \(v_{ij}\), with \(i=1,...,n\) and \(j=1,...,m\). Then, we create edges \(v_{ij}\to v_{lk}\) only if \(l\geq i\) and the edge is not a self loop (i.e. edges receive information only from the present and the past). We assign the scalar \(\mathbf{x}(i)_{j}\) as feature for each node \(v_{ij}\). This construction results in an unweighted digraph with features \((G_{\mathbf{x}},F_{\mathbf{x}}=(\mathbf{x}_{ij}\in\mathbb{R}))\). One can modify the topology of the graph just constructed.
For example, one could create edges \(v_{ij}\to v_{lk}\) only if the edge is not a self-loop, \(l\geq i\) and \(l-i=0,1\), or if the edge is not a self-loop, \(l\geq i\), and \(l-i\) is both divisible by a given positive integer \(d\) and smaller than \(k\cdot d\) for a given positive integer \(k\). This construction results in the directed graph structure pioneered in [24] (see Figure 1).

**Example 2**: A multivariate time series \(\mathbf{x}\in\mathrm{TS}(n,m)\cong\mathbb{R}^{n\times m}\) can be seen as a one-dimensional time series with \(m\) channels. In this case, the time series can be turned into a digraph with features \((G_{\mathbf{x}},F_{\mathbf{x}}=(\mathbf{x}(i)\in\mathbb{R}^{m}))\) by considering a directed graph of \(n\) nodes \(v_{i}\), \(i=1,...,n\), where edges \(v_{i}\to v_{l}\) are added if the edge is not a self-loop, \(l\geq i\) and either \(l-i=1\) or \(l-i\) is both divisible by a given positive integer \(d\) and smaller than \(k\cdot d\) for a given positive integer \(k\). We assign the vector \(\mathbf{x}(i)\in\mathbb{R}^{m}\) as a vector of features for each node \(v_{i}\). This completes the construction of the desired digraph with features.

These examples are of course just a starting point and one can take advantage of further domain knowledge to model the topology of the graph in a more specific way. For instance, one could use auto-correlation functions, usually employed to determine the order of ARMA models (see [21]), to choose the right value for parameters like \(k\) or \(d\). As proved in Lemma 2.5, under certain hypotheses ordinary convolutions on time series can be seen as transformations between graphs, that is, graph convolutions; however, the latter evidently carry a very different meaning than ordinary TCNs and can be more flexible. Thus thinking of a time series as a digraph opens up a whole new set of possibilities to be explored. For example, graph convolutions can be effective as temporal pooling layers when combined with other algorithms, or they can leverage the message passing mechanism that is thought to be more effective for the task at hand.

**Lemma 2.5**.: _Consider a convolution \(\mathrm{conv1d}:\mathbb{R}^{d}\cong\mathrm{TS}(d,1)\to\mathrm{TS}(d-r,1)\cong\mathbb{R}^{d-r}\), \((\mathrm{conv1d}(\boldsymbol{x}))_{i}=\sum_{j=0}^{r-1}K_{j}\boldsymbol{x}_{i+j}\). Then there exists a weighted digraph \(G\), a graph convolution \(\mathrm{gconv}:\mathrm{Feat}(G,1)\to\mathrm{Feat}(G,1)\) and a subgraph \(\iota:H\subseteq G\) such that \(\mathrm{Feat}(G,1)\cong\mathrm{TS}(d,1)\), \(\mathrm{Feat}(H,1)\cong\mathrm{TS}(d-r,1)\) and, under these bijections, \(\mathrm{conv1d}\) arises as the map \(\iota^{*}\circ\mathrm{gconv}:\mathrm{Feat}(G,1)\to\mathrm{Feat}(H,1)\)._

Proof.: Define \(G\) to be the digraph having \(d\) vertices \(v_{1},...,v_{d}\) and whose weighted adjacency matrix is given by \(A_{ij}=K_{j-i+1}\) if \(1\leq j-i+1\leq r\) and zero otherwise. Let \(\iota:H\subseteq G\) be its weighted subgraph consisting of the vertices \(v_{r},...,v_{d}\). We consider the graph convolution \(\mathrm{gconv}:\mathrm{Feat}(G,1)\to\mathrm{Feat}(G,1)\) arising from the message passing mechanism given by formula (2.4) with \(\alpha=h\), \(W=1\), \(l_{i}=0\), \(\sigma=1\) and \(c_{ij}=1\).
We define the bijection \(\mathrm{TS}(d,1)\cong\mathrm{Feat}(G,1)\) as follows: for each \(\mathbf{x}\in\mathrm{TS}(d,1)\), \(\mathbf{x}(i)\) becomes the feature of the node \(v_{i}\) in \(G\) (and analogously for \(\mathrm{TS}(d-r,1)\cong\mathrm{Feat}(H,1)\)).

Figure 1: One possible structure of a time-digraph, as described in Example 1. Here only adjacent connections and all the connections for node \(v\) are shown, and \(d=4\).

The previous lemma can be extended mutatis mutandis also to the case of multivariate time series and contains the case of dilated convolutions as a particular case. The process of learning the weights of a dilated convolution can thus be thought of as the process of learning the weights of the adjacency matrix of a graph.

**Remark 2.6**.: Simple Laplacian-based graph convolutions on undirected graphs can be seen as 1-step Euler discretizations of a heat equation (see for example [6, 4]). In general, a GNN consisting of a concatenation of graph convolutions can be thought of as a diffusive process of the information contained in the nodes along the edges: in our context, directed graph convolutions applied to time series digraphs "diffuse" the information through time by updating at each step the node features using the information coming from a 'temporal neighbourhood'.

## 3 Our Models

We propose two different types of models, both taking advantage of the time-digraph structure described in the previous section: a supervised classifier/regressor and an unsupervised method made of two separate steps, an autoencoder to reconstruct the time series, followed by a clustering algorithm applied to the reconstruction errors. The core of the algorithm is the same for the two approaches, so we will first focus on this core building block and then proceed to discuss the two models separately.

### Main Building Block

The main building block for all the models presented in this paper is a collection of layers that combines TCNs with GNNs. Inspired by what has already been done in the literature, we propose this main building block in two versions: an encoder and a decoder (they are sketched in Figure 2(a)). They can be used on their own or put together to construct an autoencoder model. In the encoder version, the input is first given to a block of either \(n\) TCN layers or \(n\) GNN layers (we tested primarily Sage convolution-based layers, as they appeared more effective after some preliminary tries) with skip connections, following the approach proposed in [19]. The effectiveness of skip connections in the model developed in _op.cit._ is due to the fact that stacking together the outputs of dilated convolutions with different dilations makes it possible to consider short- and long-term time dependencies at the same time. Skip connections have been used also in the context of GNNs: for example, Sage convolutions ([7]) introduce them already in the message passing mechanism. This motivates the introduction of skip connections in GNNs handling digraphs with features coming from time series: in this context they not only allow one to bundle together information coming from "temporal neighbourhoods" of different radii, as in the architecture developed in [19], but they also help to reduce the oversmoothing that traditionally curses GNN architectures (see [3] for a discussion and the references therein). The skip-connections block is described in Figure 2(b) and works as follows. The input goes through the first TCN/GNN layer, followed by a 1-dimensional convolution and an activation function. We call the dimension of the output of this convolution the skip dimension.
Then the output is both stored and passed to the next TCN/GNN layer as input, and so on. In the architectures we have tested, the TCN/GNN layers are simply dilated 1d convolutions or a single graph convolution (followed by an activation function), but more involved designs are possible. At the end, all the stored tensors are stacked together and passed to a series of \(m\) graph convolutions, each one followed by an activation function. We tested the convolutions: GCN (cfr. [9]), Sage (cfr. [7]), GAT (cfr. [22]). The graph convolutions are defined to encode in the embedding of a given node, at each pass, the information provided by all the nodes in its neighbourhood. Now, looking at how a time-digraph is defined, one sees that in our set up this translates to building the embedding of a given node, that is, a data point of the time series, using the information given by the data points that are close in time (short-term memory behaviour) or at a certain distance \(d\) away in time (long-term memory behaviour), where \(d\) is set in the construction of the graph. Finally, the intermediate output produced by the graph convolutions is given to an optional 1d convolution with kernel size 1 to adjust the number of channels and then to either an average pooling layer or a max pooling layer that shrinks the temporal dimension of the graph. In other words, if we think about the time-graph as a time window of length \(T\), the pooling layer outputs a time window, and therefore a time-graph, of length \(T/s\), thus realizing the characteristic _bottleneck_ of an autoencoder as described for instance in [18, 19, 8]. We will refer to \(s\) as the shrinking factor.

Figure 2: The structure of the main building block, both as an encoder and a decoder.

The decoder version changes the order of the blocks we just described, in a symmetric way. It starts with an upsample of the input time series, which is then passed to the graph convolutions followed by the skip-connections block. It terminates with a final 1d convolution with kernel size 1 that reduces the number of channels, or hidden dimensions, thus giving as output a time series of the same dimensions as the initial input. In the case where the skip-connections block is built with GNN layers, this final convolution can be replaced by a final TCN layer.

### Regression/Classification

The classifier/regressor model uses the main building block described above as an encoder. The input graph is given to the encoder, which predicts a suitable embedding for each of its nodes. At this point the embeddings of the single nodes are combined together into a vector that gives a representation of the whole graph. Recalling that each graph represents a time window, one can think of this first part as a way to extract a small number of features from each time window. These features are then fed to a multi-layer perceptron that outputs a vector of probabilities when used as a classifier, or a value when used as a regressor. As for the way the node embeddings are combined together, we explored a few possibilities in the context of classification, the two main options being a flattening layer and a mean pooling layer. For easiness of notation, from now on we will refer to these models as TCNGraphClassifier/Regressor if the skip-connections block uses TCN layers, and TGraphClassifier/Regressor if it is built with graph convolutions.
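As an illustration of this design, a stripped-down TGraphClassifier might be written with PyTorch Geometric as follows. This is our own sketch, not the authors' implementation: the kernel-size-1 convolutions of the skip-connections block act node-wise and are therefore written as `Linear` layers, the pooling/bottleneck stage is omitted, and the mean-pooling option is used to combine node embeddings:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv, global_mean_pool

class TGraphClassifier(nn.Module):
    def __init__(self, in_dim, hidden_dim, skip_dim, n_layers, n_classes):
        super().__init__()
        self.convs = nn.ModuleList(
            [SAGEConv(in_dim if i == 0 else hidden_dim, hidden_dim)
             for i in range(n_layers)])
        # Kernel-size-1 convolutions on node features reduce to Linear layers.
        self.skips = nn.ModuleList(
            [nn.Linear(hidden_dim, skip_dim) for _ in range(n_layers)])
        self.mlp = nn.Sequential(
            nn.Linear(n_layers * skip_dim, 64), nn.ReLU(),
            nn.Linear(64, n_classes))

    def forward(self, x, edge_index, batch):
        skip_outputs = []
        for conv, skip in zip(self.convs, self.skips):
            x = torch.relu(conv(x, edge_index))
            skip_outputs.append(torch.relu(skip(x)))  # store each skip output
        h = torch.cat(skip_outputs, dim=-1)           # stack the stored tensors
        h = global_mean_pool(h, batch)                # one vector per time window
        return self.mlp(h)                            # class logits
```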
Figure 4: The structure of the classifier (or regressor).

Figure 3: The structure of the block with skip connections.

### Autoencoders for unsupervised anomaly detection

The second architecture we propose is an autoencoder model; it employs two main building blocks, first used as an encoder and then as a decoder, and the output is the reconstruction of the given time series represented by the time-graph. In our experiments we use the signal reconstruction obtained with this architecture for anomaly detection purposes. Let us briefly describe our method. The main idea is that the autoencoder model provides a good reconstruction of the input time series when the signal is normal and worse reconstructions on time windows where an anomaly appears (again we refer to [19] among others for a similar approach to anomaly detection), as it is constructed to remove noise from a signal. Thus, once we have the reconstructed time series, we compute both the Root Mean Square Error and the Mahalanobis score (see [19] for more details in a similar context), for each given time window, with respect to the original time series. In the case one has to deal with more than one time series bundled together in the time-digraph, there are simple methods to get a single score for each time window. Now we can treat these two measures of the reconstruction error as features of the time windows and use an unsupervised clustering algorithm (we tested both Kmeans and DBscan) to assign a binary label to each window, based on the cluster they fall into (see Section 4.2 for more details). This approach gives a completely unsupervised method to handle anomaly detection of one or more time series. Again, from now on we will refer to these models as: TCNGraphAE if the skip-connections block uses TCN layers, TGraphAE if it is built with graph convolutions, and TGraphMixedAE if the encoder uses graph convolutions and the decoder uses TCN layers. If the skip-connections blocks consist only of dilated convolutions and we do not have a final graph convolution to filter the signal, we obtain a TCN autoencoder/classifier with a structure similar to the one described in [19]. We call these latter models TCNAE and TCNClassifier. We regard these models as state-of-the-art models in this context and we use them as a benchmark.

## 4 Experiments

For our experiments we used a database made of ECG signals. These signals have been recorded with a Butterflive medical device at a sampling rate of 512 Hz. A low-pass Butterworth filter at 48 Hz was applied to each signal. Then every 5-second-long piece of signal was manually labeled according to the readability of the signal: label \(3\) was given to good quality signals, label \(2\) was given to medium quality signals and label \(1\) was given to low quality/unreadable signals. In total we had a database made of 10590 \(5\)-second-long sequences. We turned the problem into a binary classification: label \(0\) was assigned to signals having label=1 and label \(1\) was assigned to signals having label=2,3. A Train/Valid/Test split was performed on the database with weights \(0.3/0.35/0.35\). Train, Valid and Test sets are made of signals coming from different recordings and the signals having label \(0\) are approximately 18% of each set. For the final evaluation of our models, we ran the models \(10\) times; then, for each score, the best and the worst results were removed and the mean and standard deviation of the remaining \(8\) runs were computed and are reported in Tables 1 and 2.
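Both experiments below represent each signal as the time-digraph of Example 2, with lookback window \(k\cdot d\). As a rough illustration (our own sketch, in PyTorch Geometric's `edge_index` convention), the edge construction can be written as:

```python
import torch

def time_digraph_edges(n_nodes, d, lookback):
    # Edges v_i -> v_l when l - i == 1, or when l - i is a positive multiple
    # of d smaller than the lookback window k*d (no self-loops).
    offsets = sorted({1} | {m * d for m in range(1, lookback // d)})
    src, dst = [], []
    for i in range(n_nodes):
        for off in offsets:
            if i + off < n_nodes:
                src.append(i)
                dst.append(i + off)
    return torch.tensor([src, dst], dtype=torch.long)  # shape (2, num_edges)

# e.g. the supervised experiment below: 640 nodes, d = 4, lookback window 128
edge_index = time_digraph_edges(640, 4, 128)
```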
### Supervised Classification

To test how graph convolutions perform for our classification problem using a supervised method, we applied a convolution smoother of window \(20\) to our dataset, and then we performed a downsample of ratio 4 (i.e. we kept one point every 4). We subdivided each Train/Valid/Test set in non-overlapping slices of 5 seconds and we applied a min-max scaler to each sequence independently. Each 5-second-long time series \(\mathbf{x}\in\mathrm{TS}(640,1)\) was given a simple directed graph structure as in Example 2, consisting of one node per signal's point (resulting in 640 nodes). We call \(k\cdot d\) the _lookback window_. We used 128 as our lookback window (1 second) and we set \(d\) to be equal to \(4\). We used the Adam optimizer to train all our models. Results are displayed in Table 1. Further details about the models used are contained in the Appendix. For the graph convolutions involved in model TCNGraphClassifier the underlying message passings used \(\alpha=t\) as in Formula 2.4: this results in the time dependencies being read in the reversed direction by these layers.

Figure 5: The structure of the autoencoder

### Unsupervised Anomaly Detection with Autoencoders

For our unsupervised experiments, the data was pre-processed and prepared as in the supervised case, with the only difference that the signals were divided into pieces of one second and that the lookback window was set to 25. Then, we trained the autoencoder to reconstruct these 1-second-long signals. \(d\) was set to be either \(4\) or \(8\) depending on the model. We used the following training procedure: first, each model was trained on the train set for \(50\)-\(100\) epochs. Then the worst reconstructed \(20\)% of signals were discarded. This decreased the percentage of unreadable data in the training set from approximately \(18\)% to \(6\)% for each model. Each model was then retrained from scratch for \(150\)-\(325\) epochs on the refined training set. The rationale of this choice is that, for autoencoders to be used effectively as anomaly detectors, the 'anomalies' should be as few as possible to prevent the model from overfitting them (see [19], where signals with too many anomalies were discarded). We found that an autoencoder trained for fewer epochs can be employed effectively to reduce the proportion of anomalies in the training set. The trained models were then used to compute, for each signal in the Valid set, the reconstruction loss and the Mahalanobis scores as in [19]. Both resulting scores were then averaged and normalized to provide one mean reconstruction error and one mean Mahalanobis score for each labeled \(5\)-second slice of signal. We thus obtained a set of pairs of scores, one for each 5-second-long signal in the Valid set: we will refer to it as the errors Valid set. We used the errors Valid set as a feature space to train two unsupervised cluster algorithms: Kmeans and DBscan. For DBscan we set the minimum number of points in a cluster to be equal to 4, as is customary for 2-dimensional data, and we used as epsilon the mean of the distances of the points plus 2 times their standard deviation. For both, to get the final labels on the Test set, we used two different techniques.
\begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{Positive class = label 1} & \multicolumn{3}{c}{Positive class = label 0} \\ & **Precision** & **Recall** & **Accuracy** & **Precision** & **Recall** & **Accuracy** \\ \hline TGraphClassifier & \(0.965\pm 0.002\) & \(0.991\pm 0.002\) & \(0.962\pm 0.003\) & \(0.941\pm 0.012\) & \(0.806\pm 0.011\) & \(0.962\pm 0.003\) \\ TCNGraphClassifier & \(0.939\pm 0.013\) & \(0.988\pm 0.006\) & \(0.936\pm 0.010\) & \(0.912\pm 0.044\) & \(0.653\pm 0.083\) & \(0.936\pm 0.010\) \\ TCNClassifier & \(0.975\pm 0.003\) & \(0.994\pm 0.002\) & \(0.973\pm 0.003\) & \(0.962\pm 0.011\) & \(0.863\pm 0.017\) & \(0.973\pm 0.003\) \\ \hline \hline \end{tabular} \end{table} Table 1: Results of the classifiers. Best scores are colored in purple.

\begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{Positive class = label 1} & \multicolumn{3}{c}{Positive class = label 0} \\ & **Precision** & **Recall** & **Accuracy** & **Precision** & **Recall** & **Accuracy** \\ \hline _Kmeans_, _approach A_ & & & & & & \\ \hline TGraphMixedAE & \(0.973\pm 0.006\) & \(0.974\pm 0.011\) & \(0.955\pm 0.006\) & \(0.854\pm 0.051\) & \(0.847\pm 0.035\) & \(0.955\pm 0.006\) \\ TGraphAE & \(0.960\pm 0.007\) & \(0.993\pm 0.006\) & \(0.959\pm 0.003\) & \(0.952\pm 0.040\) & \(0.765\pm 0.044\) & \(0.959\pm 0.003\) \\ TCNGraphAE1 & \(0.967\pm 0.003\) & \(0.998\pm 0.002\) & \(0.968\pm 0.002\) & \(0.985\pm 0.014\) & \(0.806\pm 0.016\) & \(0.969\pm 0.002\) \\ TCNGraphAE2 & \(0.965\pm 0.002\) & \(0.997\pm 0.001\) & \(0.966\pm 0.002\) & \(0.976\pm 0.007\) & \(0.796\pm 0.012\) & \(0.966\pm 0.002\) \\ TCNAE1 & \(0.966\pm 0.012\) & \(0.995\pm 0.005\) & \(0.964\pm 0.009\) & \(0.966\pm 0.032\) & \(0.798\pm 0.074\) & \(0.964\pm 0.009\) \\ TCNAE2 & \(0.949\pm 0.005\) & \(0.999\pm 0.001\) & \(0.951\pm 0.004\) & \(0.995\pm 0.006\) & \(0.692\pm 0.031\) & \(0.954\pm 0.004\) \\ \hline _DBscan_, _approach B_ & & & & & & \\ \hline TGraphMixedAE & \(0.984\pm 0.005\) & \(0.944\pm 0.012\) & \(0.939\pm 0.007\) & \(0.745\pm 0.037\) & \(0.909\pm 0.028\) & \(0.939\pm 0.007\) \\ TGraphAE & \(0.968\pm 0.006\) & \(0.989\pm 0.006\) & \(0.962\pm 0.002\) & \(0.933\pm 0.034\) & \(0.813\pm 0.038\) & \(0.962\pm 0.002\) \\ TCNGraphAE1 & \(0.971\pm 0.004\) & \(0.991\pm 0.005\) & \(0.966\pm 0.003\) & \(0.940\pm 0.028\) & \(0.829\pm 0.022\) & \(0.966\pm 0.003\) \\ TCNGraphAE2 & \(0.979\pm 0.007\) & \(0.985\pm 0.006\) & \(0.967\pm 0.001\) & \(0.913\pm 0.031\) & \(0.877\pm 0.043\) & \(0.967\pm 0.001\) \\ TCNAE1 & \(0.971\pm 0.007\) & \(0.985\pm 0.011\) & \(0.962\pm 0.007\) & \(0.913\pm 0.057\) & \(0.833\pm 0.043\) & \(0.962\pm 0.007\) \\ TCNAE2 & \(0.973\pm 0.006\) & \(0.988\pm 0.005\) & \(0.966\pm 0.002\) & \(0.925\pm 0.027\) & \(0.846\pm 0.039\) & \(0.966\pm 0.002\) \\ \hline \hline \end{tabular} \end{table} Table 2: Results of the autoencoder algorithms. Best scores are colored in purple and the second best in blue.

One option we considered is to obtain the final labels for the Test set by repeating exactly the procedure used for the Valid set (approach A). The second technique we used goes as follows: first, we trained an SVM classifier on the errors Valid set, labeled using the clustering provided by the unsupervised method. Then, we obtained an errors Test set by applying the procedure described above for the Valid set, but using the normalizers fitted on the errors Valid set.
Finally, we used the trained SVC to predict the labels of the Test signals (approach B). We report the results in Table 2, while the clusters obtained for models TGraphAE and TCNAE1 are displayed in Figure 7. The signals reconstructed by these models are displayed in Figure 6. Both models reconstruct good signals in a comparable way and fail to properly reconstruct the bad signals, as expected, despite their small number of parameters. Notice that these methods are fully unsupervised and do not require the use of even a few labeled samples. As in the supervised setting, the graph convolutions involved in models TGraphMixedAE, TCNGraphAE1 and TCNGraphAE2 use \(\alpha=t\) in their underlying message passings as in Formula 2.4. As a consequence, in these models, time dependencies were read in the right direction by the TCNs and in the reversed one by the graph convolutions, resulting in parameter-efficient "bidirectional" structures.

### Discussion

In the case of the supervised classification, a TCN classifier without the use of graph convolutions proved to be the best performing one. This is probably due to the effect of the final flattening layer, which may provide the best mechanism in this context to link the encoder to the final MLP. The graph-based model had a worse performance but achieved its best using a mean pooling mechanism, as can be expected. However, the graph-based classifier obtained good results with less than half the parameters of the TCN classifier (see Table 3), thus exhibiting the greater expressive power of the graph convolutions.

Figure 6: The reconstructed signals with TGraphAE (a) and TCNAE1 (b). For each subfigure, the top image represents the reconstruction of a good signal (label 1) and the bottom one is the reconstruction of a bad signal (label 0).

In the case of the unsupervised classification, the best performing models were on average the TCN-based ones where graph convolutions were added right before and after the bottleneck. This is a good indication that graph convolutions applied to digraphs with features can serve as good layers to filter signals coming from different layers. Also, the second best performing model overall consists of a graph-based encoder and a TCN decoder, strengthening the hypothesis that graph convolutions can be used to improve the effectiveness of ordinary models or to considerably reduce the number of parameters of state-of-the-art architectures without decreasing their performance too much. It has to be noted that in the best performing models, the message passing mechanisms of the graph convolutions were particular cases of Formula 2.4 with \(\alpha=t\): as a consequence, these layers learn time dependencies as if the features learnt by the first part of the encoder were reversed in time. Therefore two types of time dependencies were combined in the same algorithm in a parameter-efficient way, mimicking the behaviour of other 'bidirectional' models (such as bidirectional LSTMs). Summing up, GNNs applied to digraphs with features coming from time series showed their effectiveness in improving established algorithms and also their potential to replace them. Moreover, effective fully unsupervised pipelines can be devised to solve anomaly detection and quality recognition problems using the models described in this paper.
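For concreteness, the two labeling techniques (approaches A and B) can be sketched as follows. The arrays `valid_feats` and `test_feats` stand in for the (mean RMSE, mean Mahalanobis) pairs of each 5-second slice, and details such as the handling of DBscan noise points are simplified:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import DBSCAN, KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
valid_feats = rng.random((200, 2))  # stand-ins for the (mean RMSE,
test_feats = rng.random((200, 2))   # mean Mahalanobis) error features

scaler = StandardScaler().fit(valid_feats)   # normalizers fitted on Valid
valid_norm = scaler.transform(valid_feats)
test_norm = scaler.transform(test_feats)     # reuse the Valid normalizers

# Cluster the Valid error features (both Kmeans and DBscan were tested;
# the DBscan epsilon heuristic is the mean pairwise distance plus 2 stds).
valid_labels = KMeans(n_clusters=2, n_init=10).fit_predict(valid_norm)
dists = pdist(valid_norm)
db_labels = DBSCAN(eps=dists.mean() + 2 * dists.std(),
                   min_samples=4).fit_predict(valid_norm)

# Approach A: repeat the same clustering procedure directly on Test.
test_labels_a = KMeans(n_clusters=2, n_init=10).fit_predict(test_norm)

# Approach B: fit an SVC on the clustered Valid features, predict on Test.
svc = SVC().fit(valid_norm, valid_labels)
test_labels_b = svc.predict(test_norm)
```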
We plan to continue the study of GNNs applied to time digraphs with features in the context of multivariate time series, constructing more complex time digraph structures and using more capable message passing mechanisms.

### Reproducibility Statement

The experiments described in this paper are reproducible with the additional details provided in Appendix A. This section includes a list of all the hyperparameters used to train our models (see Table 3) and more details on the algorithms' implementation.

### Acknowledgements

The authors wish to thank Prof. Rita Fioresi for many stimulating discussions on the content of this paper and A. Arigliano, G. Faglioni and A. Malagoli of VST for providing them with the labeled database used for this work. A. Simonetti wishes to thank professor J. Evans for his continuous support. The research of F. Zanchetta was supported by Gnsaga-Indam, by COST Action CaLISTA CA2109, HORIZON-MSCA-2022-SE-01-01 CaLIGOLA, MNESYS PE, GNSAGA and has been carried out under a research contract cofounded by the European Union and PON Ricerca e Innovazione 2014-2020 as in the art. 24, comma 3, lett. a), of the Legge 30 dicembre 2010, n. 240 e s.m.i. and of D.M. 10 agosto 2021 n. 1062.

Figure 7: The clusters obtained with TGraphAE and TCNAE1, as follows: (a) TGraphAE-dbscan, (b) TGraphAE-kmeans, (c) TCNAE1-dbscan, (d) TCNAE1-kmeans. For each subfigure, in the top image the points are colored based on their true label and in the one on the bottom they are colored based on the predicted cluster (label 1 in orange and label 0 in blue).

## Appendix A Models' hyperparameters and details

The specific hyperparameters of the models described in the previous sections are listed in Table 3. Here is a description of this table, to clarify all the names and abbreviations appearing there. _Num Channels_ gives the number of layers used in the skip connections block, together with the output dimension of each layer in the form of a list; the number after the comma indicates the channel dimension of the signal in the bottleneck resulting after the application of the (1D convolution) graph convolution at the end of the encoder. _Skip dims_ gives the list of the skip dimensions as described in Section 5.1. _GConv type_ gives the information on the graph convolution that follows the skip connections block, if present: it gives the type of convolution used and the list of its hidden dimensions - in case the convolution is a GAT layer, it also specifies the number of heads chosen (GAT2H, for instance, means that 2 heads were selected). As for _Pool type_, _Downsample_ and _Upsample_, we indicate the type of layer used referring to its standard notation (see for example the PyTorch Geometric library). Finally, for dilated convolutions, we used a kernel size of \(7\) for autoencoder models and \(8\) for the supervised models. When skip connections blocks consist of a sequence of dilated convolutions, we used increasing dilations \(2^{0},2^{1},2^{2},...,2^{n}\) where \(n\) is the number of convolutions appearing in the considered block. We used SiLU activation functions and we employed both batch normalization and dropout between layers. In model TGraphClassifier, a final 1D dilated convolution with dilation \(1\) was applied after the decoder.
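To complement these details, here is a minimal sketch of a skip-connections block built from dilated 1D convolutions with dilations \(2^{0},2^{1},\dots\), SiLU activations, batch normalization and dropout, as described above. It uses a plain residual sum, non-causal 'same' padding and illustrative sizes; our actual blocks also expose per-layer skip dimensions (see Table 3):

```python
import torch
import torch.nn as nn

class DilatedSkipBlock(nn.Module):
    """Sketch of a skip-connections block: a stack of dilated 1D
    convolutions with dilations 2^0, 2^1, ..., each followed by batch
    norm, SiLU and dropout, plus a residual connection around the stack."""

    def __init__(self, channels, kernel_size=7, n_layers=3, p_drop=0.1):
        super().__init__()
        layers = []
        for i in range(n_layers):
            d = 2 ** i
            layers += [
                # padding = d * (kernel_size - 1) // 2 keeps the length
                # unchanged (non-causal 'same' padding, for simplicity).
                nn.Conv1d(channels, channels, kernel_size,
                          dilation=d, padding=d * (kernel_size - 1) // 2),
                nn.BatchNorm1d(channels),
                nn.SiLU(),
                nn.Dropout(p_drop),
            ]
        self.body = nn.Sequential(*layers)

    def forward(self, x):            # x: (batch, channels, time)
        return x + self.body(x)      # residual/skip connection

# Example: y = DilatedSkipBlock(32)(torch.randn(8, 32, 128))
```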
2306.11950
Mitigating Communication Costs in Neural Networks: The Role of Dendritic Nonlinearity
Our comprehension of biological neuronal networks has profoundly influenced the evolution of artificial neural networks (ANNs). However, the neurons employed in ANNs exhibit remarkable deviations from their biological analogs, mainly due to the absence of complex dendritic trees encompassing local nonlinearity. Despite such disparities, previous investigations have demonstrated that point neurons can functionally substitute dendritic neurons in executing computational tasks. In this study, we scrutinized the importance of nonlinear dendrites within neural networks. By employing machine-learning methodologies, we assessed the impact of dendritic structure nonlinearity on neural network performance. Our findings reveal that integrating dendritic structures can substantially enhance model capacity and performance while keeping signal communication costs effectively restrained. This investigation offers pivotal insights that hold considerable implications for the development of future neural network accelerators.
Xundong Wu, Pengfei Zhao, Zilin Yu, Lei Ma, Ka-Wa Yip, Huajin Tang, Gang Pan, Tiejun Huang
2023-06-21T00:28:20Z
http://arxiv.org/abs/2306.11950v1
# Mitigating Communication Costs in Neural Networks: The Role of Dendritic Nonlinearity

###### Abstract

Our comprehension of biological neuronal networks has profoundly influenced the evolution of artificial neural networks (ANNs). However, the neurons employed in ANNs exhibit remarkable deviations from their biological analogs, mainly due to the absence of complex dendritic trees encompassing local nonlinearity. Despite such disparities, previous investigations have demonstrated that point neurons can functionally substitute dendritic neurons in executing computational tasks. In this study, we scrutinized the importance of nonlinear dendrites within neural networks. By employing machine-learning methodologies, we assessed the impact of dendritic structure nonlinearity on neural network performance. Our findings reveal that integrating dendritic structures can substantially enhance model capacity and performance while keeping signal communication costs effectively restrained. This investigation offers pivotal insights that hold considerable implications for the development of future neural network accelerators.

## 1 Introduction

In the past decade, we have observed a remarkable increase in artificial neural network (ANN) utilization across different domains. To some extent, this gives us an impression that AI is closing in on human-level intelligence [1, 2]. Looking back at the beginning of neural networks, we find that those ANNs were structured to mimic the neuronal networks in our brains. However, it is crucial to acknowledge that neurons in contemporary ANNs exhibit considerable differences from their biological counterparts. The following equation can represent a typical ANN neuron: \[h=\sigma(\sum_{i=1}^{n}w_{i}x_{i}+b)\,. \tag{1}\] Here, \(\sigma\) denotes the nonlinear output function, \(w_{i}\) and \(x_{i}\) correspond to the weights and inputs, and \(b\) is the bias term. These neurons, commonly called point neurons, are characterized by their simple weighted summation properties, which contrast with the intricate dendritic structures observed in biological neurons, as illustrated in Fig. 1. The dendritic structure is indispensable in biological neuronal network computation because it offers a better surface-area-to-volume ratio [7, 8]. Unlike cells with a more compact shape, this attribute enables neurons to gather synaptic inputs effectively through their branched and elongated form. In contemporary ANN models processed on general-purpose computing hardware, such as CPUs and GPUs, a physical dendrite structure is no longer required to enhance information collection efficiency. If a dendritic structure is not needed for collecting synaptic inputs, and point neuron-based ANN models have been quite successful over the last decade, is the dendritic structure just a fancy decoration that is no longer necessary for our modern ANNs? Evidence suggests that localized nonlinear signal processing happens inside the dendritic tree; therefore we cannot yet dismiss the possibility that dendrites play a significant role for ANNs. In recent decades, experimental and computational neuroscience research has accumulated strong evidence that dendrites funnel synaptic inputs toward cell bodies actively rather than passively. Dendrites are active because their membrane is embedded with many voltage-gated ion channels, for example, voltage-gated sodium, calcium, and NMDA channels [9, 10, 11, 12]. Those channels lead to the nonlinearity of the dendritic input-output function.
Earlier studies have assigned many different roles to active dendrites, including counter-balancing spatial attenuation at the distal end of dendrites, improving model expressivity, enabling efficient learning, and enabling dendrites to detect temporal sequences [7, 13, 14, 15, 16, 17, 18]. We can assign many more roles to active dendrites with diverse signal-processing functions. However, it is well established that any nonlinear function performed by active dendrites can be replicated by a series of point neurons, as per the Universal Approximating Theory [19]. Given this, the questions remain: Why are dendrites necessary in the nervous system? Are dendrites relevant for ANNs? We endeavor to address these crucial questions from a machine-learning perspective. Through our analysis, we identify the central role of active dendrites: efficiently enhancing model capacity without incurring excessive communication overhead. Notably, recent research has underscored communication as the predominant factor in energy consumption for both ANNs and biological neural networks during computation, as highlighted in the works of Dally et al. [20] and Levy et al. [21]. Our study illuminates the pivotal role of dendrites within neuronal networks, offering insights with significant implications for practical applications in a real-world context.

Figure 1: (**A, B, C**) Illustration of three representative neurons showcasing distinct dendritic structures from left to right: A chicken bipolar neuron [3], a human hippocampal pyramidal neuron [4], and a ferret neocortical pyramidal neuron [5]. All neuron models were derived from the Neuromorpho.org database [6]. (**D**) Portrays a layer of a neural network made up of point neurons, as characterized by Equation 1. (**E**) Illustrates a comparable network layer, but composed of dendritic neurons as detailed by Equation 3. An exemplary dendritic neuron is highlighted within the red dotted line for clearer understanding.

## 2 Results

In this study, we aim to identify the primary role played by the active dendritic structure of neurons. To this end, we reduce the dendritic structure to a dual-layered neural network [11, 22], as exemplified in Fig. 1E. The mathematical representation of the simplified dendritic neuron employed in our study is provided by the equations (Eqs. 2, 3) shown below. The first line of the expression defines the dendritic computation, where \(W_{i}\) represents the weight vector for a specific dendrite and \(X\) denotes the input activation vector. The second line of the expression illustrates that the outputs of every \(K\) dendrites are aggregated to yield the output of a neuron. \[d_{i} =\sigma(W_{i}X+b_{i})\,, \tag{2}\] \[h =\theta(\sum_{i=1}^{K}d_{i})\,. \tag{3}\] In the proposed architectural framework, incoming data is initially integrated at each dendrite before undergoing a transformation via a nonlinear function, denoted as \(\sigma\). Subsequently, the outputs of these nonlinear functions are aggregated and, if necessary, further processed by an optional nonlinear function represented by \(\theta\) (not used in this study). This refined output is then transmitted to the downstream recipients. It is imperative to note that while each dendritic unit possesses a similar information-processing capacity to a point neuron, a distinct divergence exists in how their respective outputs are conveyed to downstream neurons.
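To make this divergence concrete, Eqs. 2 and 3 can be written as a small PyTorch module. This is a minimal fully-connected sketch for illustration only (the experiments below use convolutional ResNet-style variants, and the outer nonlinearity \(\theta\) is omitted, as in this study):

```python
import torch
import torch.nn as nn

class DendriticLayer(nn.Module):
    """Sketch of Eqs. 2-3: each output neuron owns K dendrites, each
    dendrite computes sigma(W_i X + b_i), and the neuron emits the sum
    of its K dendritic outputs over a single shared channel."""

    def __init__(self, in_dim, out_dim, k_dendrites, sigma=nn.ReLU()):
        super().__init__()
        self.k = k_dendrites
        # One linear map per dendrite, batched as a single weight matrix.
        self.dendrites = nn.Linear(in_dim, out_dim * k_dendrites)
        self.sigma = sigma

    def forward(self, x):                     # x: (batch, in_dim)
        d = self.sigma(self.dendrites(x))     # (batch, out_dim * K)
        d = d.view(x.shape[0], -1, self.k)    # (batch, out_dim, K)
        return d.sum(dim=-1)                  # one value per neuron
```

Note that such a layer holds \(K\) times the weights of a point-neuron layer of equal output width, yet it still emits only `out_dim` values per example; this communication property is exactly what the analysis below exploits.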
Contrary to a point neuron's output, which is independently channeled to the downstream neurons, dendritic outputs necessitate sharing a common channel with fellow dendrites of the same neuron for information dispatch.

### Dimension expansion with active dendrites

Before exploring the experimental aspects in depth, we want to first develop an intuitive comprehension of dendrites and their significance in biological brains, particularly from a machine-learning standpoint. Dimension expansion is an essential technique within the machine learning domain, which facilitates the mapping of original input data into an alternate basis within a higher-dimensional space, thereby enhancing pattern separation capabilities [23, 24]. It is postulated that this methodology bears a striking resemblance to strategies employed within biological brains, such as in the vertebrate cerebellum [24]. In this context, a relatively limited number of mossy fibers project onto a substantially larger number of granule cells, as demonstrated in studies by [25, 26]. Analogously, a similar expansion of inputs can be observed in various sensory pathways, such as in the case of cats, where the input signals from the lateral geniculate nucleus to the V1 cortex undergo a 25-fold expansion [27, 28]. On the other hand, through empirical exploration, researchers and practitioners of deep neural networks have discovered that scaling up networks is the core recipe for achieving good model performance [29]. The mechanism behind this scaling behavior remains elusive, with dimension expansion potentially playing a role. Both expanding feature dimensionality and model capacity are associated with high costs in biological and artificial neural networks. In the case of the mossy fiber projection to granule cells, granule cells account for 99% of all neurons in the cerebellum [26, 30]. The scaling behavior observed in artificial neural networks has led to the adoption of large models such as GPT-3 [31], ChatGPT [32] and MT-NLG [33]. With the advent of these greatly expanded models, both biological and artificial neural networks face increased costs. In biological brains, more synapses and neurons are required to carry out tasks. In artificial neural networks, larger memory space is required to store weights and intermediate activation values. More computing hardware is needed to handle the expanded computing needs, resulting in a high energy cost.

### Reducing communication cost

The energy required for computing in neural networks is considerable, yet it is not the primary cost involved. The dominant cost in the computing process of neural networks stems from communication rather than computing itself. In biological brains, only a small fraction of energy is spent on the computing part. As highlighted by Levy et al. [21], the communication process consumes 35 times more energy than the computing part of the brain. To put things in perspective, communication costs in artificial neural networks can be orders of magnitude higher than computing costs. For instance, the energy cost of adding two 32-bit numbers may only be 20 femtojoules (fJ), but fetching those two numbers from memory can consume 1.3 nanojoules (nJ). This means that the communication process in this example consumes 64,000 times more energy than the computing process [20]. Artificial neural networks must grapple with controlling communication costs, just like their biological counterparts.
By examining biological neuronal networks, we can glean strategies to minimize these costs in artificial systems. Our study demonstrates that incorporating active dendrites can play a significant role in addressing this issue.

### Evaluating efficiency of dendritic structure

To bolster a neural network's capacity for encoding information and enhancing expressivity, a prevalent technique involves widening the network, specifically by adding more neurons to hidden layers. This approach has been consistently demonstrated to increase a model's capacity and improve its generalization performance.

Figure 2: Comparison of ResNet-18-style models composed of point and dendritic neurons trained on the ImageNet dataset. Each experiment was performed 5 times, with standard deviations displayed. (A) Training loss values, (B) Training accuracy, and (C) Test accuracy for models with varying numbers of dendrites per neuron at four distinct levels of network width. The \(x\)-axis indicates the number of dendrites per neuron; models with one dendrite per neuron are point neuron-based models.

As previously noted, communication energy consumption constitutes a significant cost in neural network computing. In a commonly utilized artificial neural network, each hidden layer neuron possesses its own nonlinear output function, creating a one-to-one relationship between each nonlinear function and the hidden layer activation output to the next layer. Conversely, a dendritic neuron generates its activation output by aggregating outputs from multiple nonlinear functions, enabling it to form more efficacious synapses than a point neuron. This prompts the question: Does incorporating nonlinear dendrites into a neuron serve as an effective means of augmenting model capacity? To address this inquiry, we undertook a series of experiments. In the first part of our experiment, our objective is to scrutinize the impact of amalgamating active dendritic outputs on the behavior of neural networks. Although incorporating dendrites into a neuron can theoretically bolster its information storage capacity, given that more synapses are available for storing information [13], the question of whether this method is efficient persists. To address this, we draw comparisons between models composed of point neurons and those integrating dendritic structures of varying configurations.

### Dense model on ImageNet dataset

We begin by employing the ResNet-18 network [34] as a baseline point neuron-based model, a widely utilized computer vision model. In the case of models featuring active dendrites, we substitute point neurons in the original comparative models with dendritic neurons, as illustrated in Fig. 1E. We ensure that each nonlinear summation unit--whether it be a point neuron or an active dendrite unit--receives no more than one copy of input from the preceding network layer. We first compare the baseline models with models where point neurons are directly substituted with dendritic neurons while maintaining the overall architecture. For this part of the experiment, each dendrite of a neuron receives the same set of inputs from the previous layer. The models were trained with the widely used ImageNet dataset [35]. Further details about model training, evaluation and architecture can be found in the Methods section. The results of this part of the experiment are displayed in the left half of Fig. 2. (A) shows the model training loss values, (B) illustrates the model accuracy on the training set, and (C) shows the test accuracy of the models.
Each curve in the three panels compares models with the same inter-layer communication cost. The data points on the left end of each curve are from models composed of point neurons, while the remaining data points are from models with dendritic neurons of different numbers of dendrites per neuron, indicated on the \(x\)-axis. We also scaled the baseline model by proportionally increasing or decreasing the number of channels per network layer, shown as different curves. In this way, the communication bandwidths between network layers were proportionally scaled. The results shown in Fig. 2A and B indicate that adding dendrites can lead to improved model expressivity, yielding better fits to the training data. When comparing models of the same inter-layer communication budget, the models with more dendrites consistently exhibit lower loss values and higher training accuracy than those with fewer or no dendrites. Is it possible to translate the enhanced fitting capabilities conferred by dendritic neurons into tangible benefits, as measured by model test accuracy? As illustrated in Fig. 2C, incorporating additional dendrites into each neuron consistently results in improvements in test accuracy across all four distinct levels of inter-layer communication cost. It is evident that, at the same hidden layer communication channel width, the integration of active dendrites can substantially enhance model capacity and performance, and the effect remains stable across different inter-layer communication width scales. Enhancing model capacity while keeping a reasonable level of communication cost is a desirable goal. The increased number of parameters associated with additional dendrites can pose a significant challenge for networks that require a transfer of weights between on-chip and off-chip locations. The additional computing and space requirements of these dendrites can also be a significant issue. To address this challenge, we investigated the effectiveness of replacing point neurons with dendritic neurons at matched computing cost, as measured by the total number of model parameters and FLOPs required for inference. The outcomes of this experiment segment can be observed in the right half of Fig. 2. We compare three distinct levels of computational complexity to provide a clearer understanding of the results. Assume a dendritic neuron contains \(K\) branches. For the dense model we study here, each dendrite receives the same number of inputs/weights as a point neuron; that is, a dendritic neuron receives \(K\) times more inputs than a point neuron with the same input dimensionality. Assume that two sequential fully connected neural network layers both have \(D\) channels. For the second network layer, the computational complexity and the number of parameters will both be \(D^{2}\). Then, for two dendritic neuron layers with \(\hat{D}\) channels, in which each neuron is equipped with \(K\) dendrites, the computational and parametric complexity will be \(K\hat{D}^{2}\). Therefore, for a dendritic neuron model with \(K\) dendrites to have the same level of computational complexity as a point neuron-based model, we need to reduce the number of inter-layer communication channels in each network layer to \(1/\sqrt{K}\) of the original numbers, that is \(\hat{D}=D/\sqrt{K}\). In Fig. 2D, E, and F, the blue dashed curves represent experimental results obtained from various models with standard complexity.
The leftmost data point corresponds to the standard ResNet-18 model, which serves as a baseline for this group. Subsequent data points to the right denote dendritic models with \(K\) values of 4, 16, and 64, respectively. Concomitantly, these models' channels have been adjusted to 1/2, 1/4, and 1/8 of the original model's values, respectively, barring the input and output network layers. By maintaining this configuration, data points on the same curve exhibit equivalent parametric and computational complexities. The orange curves demonstrate data from models in which the number of inter-layer communication channels has been uniformly scaled up by a factor of two, while the green dashed curves represent models where the number of channels has been scaled by a factor of four. To facilitate a comprehensive understanding of the data, channel scale factors for each model, as compared to the standard model, have been explicitly labeled on the curves in Fig. 2F. Our analysis yields a particularly intriguing result concerning the performance of dendritic neuron models compared to point neuron-based models under the constraints of equivalent computing and parametric complexity. As depicted in Fig. 2D, E, and F, we observe that a dendritic neuron model is capable of achieving comparable or superior fitting power relative to an equivalent point neuron-based model when the channel width is set to be greater than or equal to one-fourth of the width of the baseline model. Remarkably, when the channel width is increased to half of the width of the baseline model, the dendritic neuron-based model consistently outperforms its point neuron-based counterpart regarding test accuracy.

### Further Findings

To further substantiate our research and enhance the robustness of our findings, we conducted supplementary experiments using a diverse array of model architectures and datasets. This analysis included an assortment of models encompassing those lacking residual connections, others that employed sparse network connections, as well as those leveraging transformer-based architectures. For the sake of clarity, we have included these additional results in the Appendix.

## 3 Communication cost analysis

The experimental analysis presented above has demonstrated that dendritic models can provide superior model capacity and performance while maintaining constrained inter-layer communication cost. To gain a better understanding of the benefits dendritic models can offer, we performed a theoretical analysis of the full communication cost of point neuron- and dendritic neuron-based models. Results reported in this part are based on the following considerations:

* We study and quantify the data movement process for computation between two sequential neural network layers, which can be easily generalized to many-layer settings.
* We evaluate the suitability of adopting dendritic networks for real-world applications, where wiring that follows a city-walk route is more relevant, by measuring the data movement path length using the Manhattan distance metric.
* For the sake of clarity, a standard feed-forward network structure is employed for the analysis. However, the results obtained can also be applied to other network architectures.
* The computation of each network layer is executed within a discrete square area with dimensions of one-by-one.
* We only consider movement of neuron output values. This setting is most relevant for in-memory or near-memory computing; it is also relevant for computing with large batch sizes. No tiling-based acceleration is considered.
We model the signal communication cost \(C_{T}\) as a sum of three parts: \(C_{A}\) is the cost for all processing elements (PEs) to propagate their outputs toward a convergence point (the top-right corner is used) at the edge of the chip. \(C_{I}\) characterizes the process of transmitting data between two computational stages (network layers). \(C_{E}\) describes the communication cost associated with distributing signals within a chip for layer inference. This is represented by the following equation: \[C_{T}=C_{A}+C_{I}+C_{E}\,. \tag{4}\]

### Cost with point neuron model

Our analysis begins with a model based on point neurons. As previously mentioned, our investigation focuses on two network layers. We assume that the first layer sends an output of \(D\) dimensions to the second layer. For convenience and without loss of generality, we assume that each of the \(D\) dimensions originates from one PE on the chip. In order to arrange \(D\) PEs on a die area of size \(1\times 1\), each PE must have a height and width of \(l=1/\sqrt{D}\), resulting in an area size of \(1/D\). Similarly, the second layer is also composed of \(D\) PEs of the same size. Consequently, we obtain a grid of \(N\) by \(N\) PEs with \(N=\sqrt{D}\), with a distance of \(l\) between the centers of each pair of neighboring PEs. See Fig. 11-A for a visual illustration. For this arrangement we have \[C_{A}=D(\sqrt{D}-1)l=D-\sqrt{D}\,, \tag{5}\] as measured with the Manhattan distance. Furthermore, an illustrative example of signal propagation within this context is provided in Fig. 11-B. The derivation of Eq. 5 can be found in Appendix A. \(C_{I}\) is very architecture dependent; thus we will abstain from attempting to estimate this component. We note that \(C_{I}\) is linearly proportional to the size of \(D\). In scenarios where two network layers reside on different physical devices or the cost of moving data between PEs and memory is high, this portion of the cost may become the dominant communication expense. We assess \(C_{E}\) with the minimal rectilinear spanning tree (MRST) algorithm [36]. Given a grid of \(N\times N\) PEs, the objective is to deliver every dimension of the data to each PE. The MRST algorithm enables us to determine the minimal path length required to connect all PEs, which is \((N^{2}-1)\cdot l\). An example path is illustrated in Fig. 11C. Consequently, we obtain the cost of delivering data as \[C_{E}=(N^{2}-1)\cdot l\cdot D=(D-1)\sqrt{D}\,. \tag{6}\]

### Cost with dendritic neuron model

This section estimates the communication cost associated with a model based on dendritic neurons. To ensure a fair comparison, we have maintained the number of parameters and floating-point operations (FLOPs) consistent with those in the point neuron model scenario. As in the case of point neuron-based models, we also give the total cost \(\hat{C}_{T}\) in the following equation: \[\hat{C}_{T}=\hat{C}_{A}+\hat{C}_{I}+\hat{C}_{E}\,. \tag{7}\] Given that each neuron has \(K\) dendrites, one layer of the model under examination will have a total of \(M=D\sqrt{K}\) dendrites. As illustrated in Fig. 11-D, every group of \(K\) dendrites aggregates to form a single output dimension. Consequently, the first layer will produce an output with a dimensionality of \(\hat{D}=D/\sqrt{K}\), which serves to maintain a computational complexity equivalent to the point neuron-based model previously described.
We reiterate our assumption that those \(\hat{D}\) neurons are arranged in a grid format, specifically of size \(\hat{N}\times\hat{N}\), with \(\hat{N}=\sqrt{\hat{D}}\). We postulate that the computation of each dendrite is processed by one PE. In this scenario, the die area is divided into \(M\) units, with each unit occupying a specific area. The height and width of this area, denoted by \(\hat{l}\), can be calculated as \(\hat{l}=1/\sqrt{M}\). Through this, we arrive at the size of a PE for processing each dendrite being \(\frac{1}{D\sqrt{K}}\), which is \(1/\sqrt{K}\) of the point neuron-based model PE die size. This corresponds to the assumption that a dendrite in this analysis receives a proportion of \(1/\sqrt{K}\) of the inputs that a point neuron receives. In light of the aforementioned derivation, we note that the signal transfer cost, denoted as \(\hat{C}_{A}\), consists of two components. The first component, \(\hat{C}_{AG}\), refers to the cost of aggregating dendritic outputs for each neuron. The second component, \(\hat{C}_{AA}\), represents the cost of transmitting the aggregated data of all neurons off the die. Their expressions are as follows. \[\hat{C}_{AG} =(K-1)\cdot\hat{D}\cdot\hat{l}\] \[=\sqrt{D}(K^{1/4}-K^{-3/4})<\sqrt{D}K^{1/4}\,, \tag{8}\] \[\hat{C}_{AA} =\hat{N}\hat{N}(\hat{N}-1)\hat{l}(\sqrt{K})<\frac{D}{\sqrt{K}}\,,\] (9) \[\hat{C}_{A} =\hat{C}_{AG}+\hat{C}_{AA}<\sqrt{D}K^{1/4}+\frac{D}{\sqrt{K}}\,. \tag{10}\] Akin to the point neuron models, we will not attempt to derive \(\hat{C}_{I}\), although we have the relationship of \(C_{I}=\sqrt{K}\cdot\hat{C}_{I}\) under the assumptions of the equivalent parameter/FLOPs count setting. As for the \(\hat{C}_{E}\) component, note that the second layer receives \(\frac{D}{\sqrt{K}}\) inputs and consists of \(M\) units. Utilizing the MRST method, the cost associated with one-dimensional input connecting to \(M\) units can be computed as \((M-1)\cdot\hat{l}\). We arrive at \[\hat{C}_{E}=\frac{D}{\sqrt{K}}(D\sqrt{K}-1)\cdot\hat{l}\approx D^{\frac{3}{2}}/K ^{\frac{1}{4}}\,. \tag{11}\] ### Comparative analysis From the above derivations, we are equipped to compare the communication costs associated with point neuron-based and dendritic neuron-based models under different configurations. As previously established, \(C_{I}\) and \(\hat{C}_{I}\) are highly dependent on the architecture, and \(\hat{C}_{I}\) is \(\sqrt{K}\) times smaller than \(C_{I}\). For the scope of this analysis, we will focus on analyzing \(C_{A}\), \(\hat{C}_{A}\), \(C_{E}\), and \(\hat{C}_{E}\). Fig. 3A depicts the ratio of communication costs between the dendritic neuron-based model and the point neuron-based model. The results for different inter-layer dimensions \(D\) of the point neuron models and various channel reduction ratios \(\sqrt{K}\) are displayed within the figure. Notably, as \(\sqrt{K}\) increases, dendritic neuron-based models consistently demonstrate lower communication costs compared to their point neuron-based counterparts at the same level of computational complexity. Upon examining \(\hat{C}_{A}\) and \(\hat{C}_{E}\), it becomes apparent that \(\hat{C}_{E}\) is typically much larger than \(\hat{C}_{A}\) when \(D\) assumes large values, a common scenario in this context. Moreover, in models characterized by sparse connectivity, \(\hat{C}_{A}\) remains unchanged regardless of connection sparsity levels, whereas \(\hat{C}_{E}\) can vary. 
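As a quick numerical illustration of the trend in Fig. 3A, the simplified dense-model expressions can be tabulated for a few values of \(D\) and \(K\). This sketch uses the upper bound of Eq. 10 and the approximation of Eq. 11 in place of the exact sums, so it reproduces the trend rather than the exact plotted quantities:

```python
import numpy as np

def point_cost(D):
    C_A = D - np.sqrt(D)          # Eq. 5
    C_E = (D - 1) * np.sqrt(D)    # Eq. 6
    return C_A + C_E

def dendritic_cost(D, K):
    # Upper-bound / approximate forms of Eqs. 10 and 11.
    C_A = np.sqrt(D) * K ** 0.25 + D / np.sqrt(K)
    C_E = D ** 1.5 / K ** 0.25
    return C_A + C_E

for D in (256, 1024, 4096):
    for K in (4, 16, 64):
        r = dendritic_cost(D, K) / point_cost(D)
        print(f"D={D:5d}  K={K:3d}  cost ratio ~ {r:.2f}")
```

For large \(D\), the printed ratio is dominated by \(\hat{C}_{E}/C_{E}\approx K^{-1/4}\), consistent with the figure: the larger \(K\), the cheaper the dendritic model at equal computational complexity.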
Since \(\hat{C}_{E}\) varies with connection sparsity, it is necessary to delve deeper into the \(\hat{C}_{E}\) term and scrutinize its behavior within the context of sparse models. In congruence with the approach adopted for dense models, we also employ the MRST algorithm to estimate the communication cost when dealing with sparse models. Considering the variability in the communication cost due to different sparse connection patterns, we sample a set of 100 random connection patterns for each setting to provide a robust estimate of the average cost. Fig. 3B presents \(\hat{C}_{E}\) under varying model sparsity levels and diverse numbers of dendrites per neuron. Our observations reveal a negative power relationship between \(\hat{C}_{E}\) and \(K\), with a power of 0.51. This relationship is accurately mirrored in the \(K^{1/4}\) factor presented in the simplified version of \(\hat{C}_{E}\) shown in Eq. 11.

## 4 Discussion

In this study, we were inspired by the observation that biological neurons aggregate the outputs of multiple dendrites to form a single output. Within each dendrite, the integration of synaptic inputs is nonlinear rather than a linear summation, due to the presence of various voltage-gated ion channels. Accordingly, we have constructed neural network units that mimic these characteristics by integrating their synaptic inputs nonlinearly. In the process of pooling outputs from multiple units, there is an inherent loss of information due to the many-to-one nature of the pooling function. This suggests a potential decline in model performance. Such behavior is indeed noticeable when dealing with models that possess limited inter-layer bandwidth, that is, models equipped with a smaller number of channels. This observation holds when the models under comparison maintain the same parametric complexity. It is likely that this phenomenon led Goodfellow et al. [37] to observe suboptimal model performance when they utilized architectures that pool ReLU units. However, our findings suggest that once the bandwidth is increased beyond a certain threshold, it becomes unnecessary to augment the model size by adding more channels. Instead, the addition of extra dendrites to neurons appears to offer superior efficiency in enhancing model performance. The implications of this discovery are substantial for both theoretical perspectives and practical applications. Theoretically, it highlights that when we widen the architecture to expand models, we are actually augmenting the number of features within the hidden layers rather than enhancing the features propagated toward subsequent layers. This insight refines our understanding of the internal dynamics of neural network development and behavior. Practically, the adoption of an active dendritic structure enables models to achieve superior performance compared to point neuron-based models, given a fixed inter-layer communication budget. This can lead to a linear reduction in memory access during neural network inference and a smaller memory footprint, particularly when large batch sizes are employed during model inference.

Figure 3: (A) Topographic representation of the ratio \((\hat{C}_{A}+\hat{C}_{E})/(C_{A}+C_{E})\): The visualization highlights the influence of the variations in the values of \(D\) and \(\sqrt{K}\) on the ratio \((\hat{C}_{A}+\hat{C}_{E})/(C_{A}+C_{E})\).
The choice of \(\sqrt{K}\) over \(K\) was made to provide a clearer depiction of the relationship between the decrease in communication cost and the corresponding reduction in the layer’s output bandwidth. (B) Demonstrates the variations in \(\hat{C}_{E}\) in response to different quantities of dendrites per neuron, symbolized as \(K\), and varying levels of sparsity. The axes are portrayed in a logarithmic scale. When \(K=1\), the models are point neuron-based. For this experiment, we have utilized a \(D\) value of 256.

Our comprehensive analysis of communication cost further reveals that adopting a dendritic structure can also yield a reduction in on-chip communication costs, following a square-root relationship to the inter-layer communication reduction ratio. Considering that communication costs dominate energy consumption in contemporary computing chips, our findings could significantly influence the design of future neural network accelerators. Our findings shed light on critical insights; however, our comprehension of why channel sharing among a group of features can yield performance on par with, or even superior to, traditional models remains incomplete. In conventional models, each feature, or nonlinear neuron, establishes multiple connections directly to the succeeding network layer. One potential explanation posits that the pooling process in our dendritic layer can be viewed as a low-rank approximation of a significantly larger weight matrix \(W\in\mathbb{R}^{D\sqrt{K}\times D\sqrt{K}}\) using a smaller weight matrix \(W_{d}\in\mathbb{R}^{\frac{D}{\sqrt{K}}\times D\sqrt{K}}\). However, this interpretation provides only a limited perspective in comprehending this process. These gaps in our understanding necessitate further research to fully appreciate the dynamics and implications of such channel-sharing configurations. Notably, the dendritic models used in our study are equipped with a single layer of nonlinearity, as opposed to two layers as suggested in Eq. 3. However, we observed improved performance in dendritic neuron models when adding an extra nonlinear function, particularly when neurons were equipped with many dendrites (results not shown). It would be intriguing to explore how different types of nonlinearity and more advanced nonlinear architectures would impact models. Finally, our observation aligns with patterns observed in the evolution and development of the brain. In simpler, early-stage brains, neurons exhibit less structural complexity, consistent with the preference for point neuron-based models in smaller neural networks. However, as brains evolve to more advanced stages, neurons exhibit greater complexity and richer connectivity patterns, analogous to the preference for dendritic neuron-based models in larger neural network architectures [7]. This parallel suggests that incorporating dendritic neurons in artificial neural networks may reflect fundamental principles underlying the organization and functionality of biological neural systems. Our study contributes valuable insights into the comparative utility of dendritic and point neuron models in neural network design and offers guidance for their applications in various computational contexts.

## 5 Methods

### Datasets

The present study leverages three commonly used datasets: ImageNet, CIFAR-100, and LibriSpeech, for model training and evaluation. These datasets commonly serve as benchmarks in deep learning research.
**ImageNet Dataset:** For this study, we use the ILSVRC 2012 subset of the ImageNet dataset, which consists of 1.2 million training images and 50,000 validation images from 1,000 categories [35]. The images vary in size and are resized to a fixed resolution of 224x224 pixels for uniformity, per the standard ResNet procedure [34]. Typical data augmentation techniques, such as random cropping, random horizontal flipping, and color jittering, were applied during training to enhance the model's generalization ability. **CIFAR-100 Dataset:** The dataset consists of 60,000 32x32 color images in 100 classes, with 600 images in each class. There are 50,000 training images and 10,000 test images [38]. As with the ImageNet data processing, we followed the typical data augmentation procedure [34]. **LibriSpeech dataset:** The dataset is a publicly available English speech corpus for Automatic Speech Recognition (ASR) training and evaluation from the LibriVox project's audiobooks. It consists of 1000 hours of transcribed speech, divided into training, development, and testing subsets [39]. The experiment utilizing this dataset can be found in Appendix B.

### Model architectures

In this study, we primarily used the ResNet-18 architecture as the baseline model. ResNet-18 is an 18-layer deep residual neural network, a seminal model proposed by He et al. [34]. The baseline configuration of ResNet-18 encapsulates an initial convolutional layer, followed by four residual blocks, each of which consists of two convolutional layers. This pattern constitutes the primary structure of our working model; in contrast to the original ResNet-18 model, our adapted architecture positions the shortcut connection after the ReLU (Rectified Linear Unit) activation function. This modification is imperative to ensure the compatibility of the dendritic structure with the model architecture. For experiments on scaling up networks, we scaled up each network layer by the same designated factor, except for the input and output of the model. For models with dendritic neurons, we replaced neurons in the standard model with dendritic neurons with \(K\) dendrites as specified by the experiment setting, except for the input and output layers of the model. To maintain uniform model complexity scaling throughout the model, we equip the input layer and the penultimate layer of the model with neurons of \(\sqrt{K}\) instead of \(K\) dendrites. The same setting is also employed in experiments designed to compare models that share identical inter-layer communication costs. For models trained on CIFAR-100, we observed training instability. Therefore, we clipped the gradient norm to 1.0 during model training. We also added an extra batch norm to each dendrite to improve model stability. This additional batch norm can be fused with the previous layer and thus will not add an extra computational burden at the inference stage. In addition to models based on the ResNet-18 architecture, we have corroborated our findings using a model devoid of shortcut connections. This strategy ensures that the benefits observed are not strictly confined to a particular architecture. The configuration of this model is delineated in Appendix B, where the corresponding experimental outcomes can also be found. Moreover, our experimentation extended to a transformer-based model. Within this model, the standard feedforward layers are substituted with network layers based on dendritic neurons.
Comprehensive details pertaining to this modification can be found in Appendix B.

### Model training

We trained all models with a cosine learning rate decay schedule and the SGD optimizer with a momentum of 0.9. For ImageNet with dense ResNet models, the learning rate was initialized at 0.4 (instead of 0.1, to compensate for the large batch size used for training), and models were trained for 120 epochs, including two warm-up epochs with a learning rate of 0.04. Weight decay was set to \(1\times 10^{-4}\). A batch size of 1024 was employed, and the training was distributed across 8 GPUs. For ImageNet with sparse ResNet models, the models were trained for 200 epochs with an initial learning rate of 0.1 and 2 warm-up epochs at a learning rate of 0.01. The weight decay parameter was set to \(1\times 10^{-4}\). To achieve a sparsity ratio of 85%, we applied L1-unstructured global pruning in 5 rounds, conducted between epochs 40 and 140. Subsequently, the models were trained for an additional 60 epochs. Finally, for CIFAR-100 models, we trained them for 200 epochs with a learning rate of 0.05, including two warm-up epochs at a learning rate of 0.005. A batch size of 64 was utilized, and the weight decay parameter was set to \(5\times 10^{-4}\). Our investigation emphasizes the comparative analysis of the performance of various models under identical training conditions, facilitating an equitable assessment of the distinct capabilities of each model. Consequently, all models within the comparison group undergo training with the same hyperparameters, barring the requisite architecture adjustments. Further details concerning the experiments can be found in the accompanying source code.

### Code availability

The entirety of the code used to produce the findings presented herein will be openly accessible to the public upon the publication of this paper.
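For illustration, the dense ImageNet schedule described above could be set up roughly as follows (a sketch assuming PyTorch; the warm-up and cosine decay are approximated with standard schedulers, and the model object is a stand-in):

```python
import torch

model = torch.nn.Linear(10, 10)  # stand-in for the ResNet-18-style model
opt = torch.optim.SGD(model.parameters(), lr=0.4,
                      momentum=0.9, weight_decay=1e-4)
# Two warm-up epochs at lr = 0.04 (factor 0.1), then a cosine decay over
# the remaining 118 of the 120 epochs.
warmup = torch.optim.lr_scheduler.ConstantLR(opt, factor=0.1, total_iters=2)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=118)
sched = torch.optim.lr_scheduler.SequentialLR(
    opt, schedulers=[warmup, cosine], milestones=[2])

for epoch in range(120):
    # ... one training epoch over ImageNet (batch size 1024), omitted ...
    sched.step()  # the scheduler is stepped once per epoch
```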
2301.08530
Self-Organization Towards $1/f$ Noise in Deep Neural Networks
The presence of $1/f$ noise, also known as pink noise, is a well-established phenomenon in biological neural networks, and is thought to play an important role in information processing in the brain. In this study, we find that such $1/f$ noise is also found in deep neural networks trained on natural language, resembling that of their biological counterparts. Specifically, we trained Long Short-Term Memory (LSTM) networks on the `IMDb' AI benchmark dataset, then measured the neuron activations. The detrended fluctuation analysis (DFA) on the time series of the different neurons demonstrates clear $1/f$ patterns, which are absent in the time series of the inputs to the LSTM. Interestingly, when the neural network is at overcapacity, having more than enough neurons to achieve the learning task, the activation patterns deviate from $1/f$ noise and shift towards white noise. This is because many of the neurons are not effectively used, showing little fluctuations when fed with input data. We further examine the exponent values in the $1/f$ noise in ``internal" and ``external" activations in the LSTM cell, finding some resemblance in the variations of the exponents in fMRI signals of the human brain. Our findings further support the hypothesis that $1/f$ noise is a signature of optimal learning. With deep learning models approaching or surpassing humans in certain tasks, and being more ``experimentable'' than their biological counterparts, our study suggests that they are good candidates to understand the fundamental origins of $1/f$ noise.
Nicholas Chong Jia Le, Ling Feng
2023-01-20T12:18:35Z
http://arxiv.org/abs/2301.08530v2
# Self-Organization Towards \(1/f\) Noise in Deep Neural Networks

###### Abstract

Despite \(1/f\) noise being ubiquitous in both natural and artificial systems, no general explanations for the phenomenon have received widespread acceptance. One well-known system where \(1/f\) noise has been observed is the human brain, with this 'noise' proposed by some to be important to the healthy function of the brain. As deep neural networks (DNNs) are loosely modelled after the human brain, and as they start to achieve human-level performance in specific tasks, it might be worth investigating if the same \(1/f\) noise is present in these artificial networks as well. Indeed, we find the existence of \(1/f\) noise in DNNs - specifically Long Short-Term Memory (LSTM) networks trained on a real-world dataset - by measuring the Power Spectral Density (PSD) of different activations within the network in response to a sequential input of natural language. This was done in analogy to the measurement of \(1/f\) noise in human brains with techniques such as electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI). We further examine the exponent values in the \(1/f\) noise in "inner" and "outer" activations in the LSTM cell, finding some resemblance in the variations of the exponents in the fMRI signal. In addition, comparing the values of the exponent of the LSTM network at "rest" with those while performing "tasks", we find a trend similar to that of the human brain, where the exponent while performing tasks is less negative.

## I Introduction

Noise is an often unwanted phenomenon in many different systems such as audio systems, electrical systems, communications, and in measurements. A common type of noise is \(1/f\) noise, or pink noise, characterised by a power spectral density (PSD) \(S(f)\) that is inversely proportional to frequency: \(S(f)=kf^{-1}+C\). While there are relatively simple generative and stochastic models that explain other common types of noise such as white noise [1; 2] or Brownian noise [2], there are no such models for \(1/f\) noise in general.

### Sources of \(1/f\) Noise

\(1/f\) noise has since been found and characterised in a variety of electrical systems, including ionic solutions [3], diodes and PN junctions [4], field effect transistors [5], and superconducting Josephson junctions [6]. In these systems, the definition of \(1/f\) noise has been expanded to include noise with a spectral density proportional to \(f^{\beta}\), with \(-2<\beta<0\). In this paper, the term \(1/f\) noise will be used to refer to these \(1/f\)-like signals that have an exponent \(\beta\) smaller than 0 (white noise) and greater than -2 (Brownian noise). Other than flicker noise in electronics, \(1/f\) noise is ubiquitous in many physical systems, both natural and man-made. This pattern has been found in undersea currents [7], global climate data [8], Nile river flood and minimum levels [9; 10], sunspot frequency [10], and many other natural processes [10]. Interestingly, \(1/f\) noise is also present in man-made systems such as traffic systems [11; 12], concrete structures [13], and surprisingly, even in canonical examples of man-made "data" such as in music and speech [14]. Another interesting source of \(1/f\) noise is in biological systems, such as human heart rate fluctuations, where the spectrum for a healthy human is \(1/f\), while that for someone with heart disease is closer to Brownian [15; 16].
\(1/f\) noise is also found in other biological systems such as giant squid axons [17], human optical cells [18], and activity scans of the human brain [19]. ### Motivation Despite the extraordinary ubiquity of \(1/f\) noise and it having been studied in many different fields for almost a century, there is still no universal description for its occurrence, only specific models made to explain specific processes such as in diodes [4; 20] or other electronic components [21]. Such a search is spurred by similar phenomena in other parts of statistical mechanics, such as the universal critical exponents in different universality classes [22; 23; 24]. As such, many believe that there is a similar universality in \(1/f\) noise, and there is thus great interest in finding a deep, all-encompassing explanation for the phenomenon. One way of working towards that goal is to probe the areas where this \(1/f\) noise is present in order to add to the pool of knowledge we have about the phenomenon. If a \(1/f\) signal is persistent across both a system and a simpler analogue of it, one might be able to gain insight about the origin of the noise by studying the simpler system instead. One such pair of analogues is the previously mentioned \(1/f\) noise in the human brain, and the relatively less complex systems of deep neural networks (DNNs). While the neurons and connections in a DNN are many orders of magnitude less complex than those in the human brain, the general principle of operation of a DNN approximates that of a basic brain with simple connections [25]. Similar to the human heart, \(1/f\) noise presents itself strongly in the human brain [26]. As in the heart, the \(1/f\) noise in the human brain presents itself differently in a healthy brain compared to a brain with neurological conditions like schizophrenia [27]. While the study of this form of brain activity is in its early stages due to the \(1/f\) signal being regarded as extraneous noise in the past, it has been proposed that the \(1/f\) noise in the brain is important in regulating function and serves other cognitive purposes [28]. With the rapid progress of artificial neural networks, or deep neural networks (DNNs) in particular, these artificial neural systems are fast approaching human-level cognition. As such, it would be appropriate for an analysis of the presence of \(1/f\) noise in a DNN to include networks that approach a human level of competency. If this phenomenon also exists in DNNs, their controllability and manipulability would make them a better experimental subject than the brain for further examining the origin of \(1/f\) noise. ### \(1/f\) noise in the human brain Brain activity can be measured in multiple different ways, such as non-invasive scalp electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). These neuroimaging techniques detect different properties of the brain as a proxy for brain activity, which results in the detection of different types of activity in different parts of the brain. Scalp EEGs map brain activity by measuring electrical signals with electrodes placed on the scalp. These electrodes detect voltage fluctuations relative to a reference potential [29]. Due to the positioning of the detectors outside the skull, the signals obtained for scalp EEGs are dominated by brain activity on the surface of the brain rather than in the bulk of the brain [29]. \(1/f\) noise has shown up in the recordings of EEGs for decades [26].
Once considered an unwelcome signal to be filtered out as background or instrumental noise, \(1/f\) noise is now the subject of significant interest and research into its role in a healthy functioning brain [27; 28]. Numerous studies have measured the scaling exponent \(\beta\) of EEGs with similar results [30; 31]. The aggregate results obtained in [30] give a scaling exponent \(\beta=-1.33\pm 0.19\), demonstrating \(1/f\) noise. Another form of neuroimaging is fMRI, which tracks blood flow through the brain [32]. It has been shown that brain activity is linked to blood flow through the activated regions of the brain [33], and this fact is the basis of how the images formed by fMRI are linked to brain activity. When mapping the fMRI signal to neural activity by comparing it to other methods of measuring electrical activity in the brain, it has been found that the fMRI signal mainly reflects the activity within the individual neurons rather than the outputs between the neurons [32]. As with EEGs, \(1/f\) noise also shows up in fMRI recordings [26; 34]. The scaling exponents measured in these studies are less negative than those in EEGs, with an average scaling exponent of \(\beta=-0.84\). This exponent becomes even less negative when the brain performs tasks, averaging \(\beta=-0.72\) across the brain. ### Recurrent Neural Networks Recurrent Neural Networks (RNNs) are a type of DNN that preserve the state of an input across a temporal sequence by feeding the outputs of some nodes back into those same nodes. This is as opposed to feedforward DNNs, where data flows only from layer to layer. By retaining knowledge of previous inputs through this recurrence, RNNs are significantly more adept than simple feedforward networks at processing sequential data. Figure 1 shows the most basic form of an RNN, demonstrating the idea that these recurrent networks are deep through _time_, in contrast to the depth through _layers_ of a simple feedforward network. However, this also means that the simple RNN suffers from the same problem as DNNs with large depths - the vanishing gradient problem. RNNs struggle to converge for particularly long input sequences of more than tens of timesteps. In a way, RNNs are similar to the human brain at an abstract level, as the human brain continuously receives information and processes it using our biological neural networks. In this study, we use a particular type of RNN called Long Short-Term Memory (LSTM) networks. In this architecture, an LSTM cell was created to replace the recurrent cell in the vanilla RNN shown in Figure 1 in order to solve the vanishing gradient problem [35]. The LSTM network attempts to resolve the problem by maintaining an internal cell state \(\mathbf{c}\). In an LSTM network, the RNN cell shown in Figure 1 is replaced by the LSTM cell (Figure 2), which consists of many different activations compared to the single activation in a vanilla RNN cell. This LSTM cell, like the vanilla RNN cell, takes in \(\mathbf{x_{t}}\) and \(\mathbf{h_{t-1}}\) as inputs, along with the additional input of the previous cell state \(\mathbf{c_{t-1}}\). Like the vanilla RNN, the LSTM cell also outputs the current hidden state \(\mathbf{h_{t}}\) and additionally the current cell state \(\mathbf{c_{t}}\) to itself in the next timestep. The addition of the internal cell state \(\mathbf{c}\) helps in preserving temporal correlations [35].
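For concreteness, the following is a minimal NumPy sketch of a single LSTM cell step, with variable names matching the activations analysed later (\(\mathbf{f}\), \(\mathbf{i}\), \(\mathbf{cc}\), \(\mathbf{o}\), \(\mathbf{c_{out}}\), \(\mathbf{h}\)); the stacked parameter layout and gate ordering here are illustrative assumptions rather than the exact Keras internals.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM timestep. For hidden size n and input size d, W (4n, d),
    U (4n, n) and b (4n,) stack the parameters of the four gates."""
    n = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b        # stacked pre-activations, shape (4n,)
    i = sigmoid(z[0 * n:1 * n])         # input gate
    f = sigmoid(z[1 * n:2 * n])         # forget gate
    cc = np.tanh(z[2 * n:3 * n])        # candidate cell state ("cc")
    o = sigmoid(z[3 * n:4 * n])         # output gate
    c = f * c_prev + i * cc             # new internal cell state
    c_out = np.tanh(c)                  # squashed cell state ("c_out")
    h = o * c_out                       # new hidden state
    return h, c, {"f": f, "i": i, "cc": cc, "o": o, "c_out": c_out, "h": h}
```

Recording the returned dictionary at every timestep yields the six activation time series whose power spectra are examined in the following sections.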
## II Methods The task selected for this experiment is a popular benchmark AI task, which is to predict the sentiment of natural language in the Large Movie Review Dataset [36], which contains 50000 labelled movie reviews from the Internet Movie Database (IMDb). Traditional machine learning techniques like the Naive Bayes classifier, Maximum Entropy (MaxEnt) classification, and Support Vector Machines (SVMs) are effective at topic-based text classification, which classifies text based on keywords. However, they tend to have trouble classifying text based on positive or negative sentiment, which can require a more subtle "understanding" of context beyond single words or short phrases [37]. LSTM networks have demonstrated long-range temporal memory beyond simple n-grams (units of \(n\) words used in traditional natural language processing (NLP) techniques). As such, they are prime candidates for this task and frequently demonstrate close to human-level performance in basic sentiment analysis. In this work, we use LSTMs rather than other RNN structures due to their superior performance over other variants in real tasks. To specifically analyse the time series behaviour of the LSTM cell activations, the LSTM network used for this task will contain the minimum number of layers needed to properly classify the data. Additional layers that are traditionally used to augment the performance of the network will not be included, as they carry the same key features and do not generate significantly new theoretical insights. ### Dataset The dataset chosen for the sentiment analysis task is the Large Movie Review Dataset [36], which consists of 50000 highly polar movie reviews obtained from the Internet Movie Database (IMDb) [38]. This dataset consists of 25000 positive reviews (score \(\geq\) 7 out of 10) and 25000 negative reviews (score \(\leq\) 4 out of 10). Preprocessing steps such as the removal of punctuation and the conversion of words to lowercase were performed. The words were also converted to tokens, with the top 4000 words (88.3% of the full vocabulary) converted into unique tokens, and the rest of the words converted into a single [UNK] token. ### LSTM network architecture The LSTM network consists of three layers: an embedding layer that converts the words into lower-dimensional internal representation vectors, the LSTM layer, and an output layer consisting of a single neuron with a sigmoid activation that outputs a value indicating if the review is positive (\(y\geq 0.5\)) or negative (\(y<0.5\)). The IMDb dataset was obtained using the Keras[39] datasets application programming interface (API), with the preprocessing done with custom code [40]. The networks were trained using Keras with the TensorFlow 2.6.0 [41] backend on a GeForce GTX 1080 GPU, with preprocessing steps performed on a Ryzen 9 3900X CPU. Figure 1: A (many-to-one) recurrent neural network visualised in its temporally unrolled representation. A time series (in this case a movie review with \(n\) words) is input into the network sequentially. For each timestep \(t\), the \(t\)th word passes into the embedding layer, which converts the word into a vector using a learned representation of a continuous vector space. The vector \(\mathbf{x_{t}}\) then passes into the recurrent layer, which accepts both \(\mathbf{x_{t}}\) and the output of itself from the previous timestep, \(\mathbf{h}_{t-1}\). The recurrent layer then passes its output, \(\mathbf{h}_{t}\), into itself for the next timestep.
At the final timestep \(n\), the recurrent layer passes its output \(\mathbf{h}_{n}\) to the output layer, which converts it to the output \(\mathbf{y}\). \begin{table} \begin{tabular}{c c c c c c} Size of embedding layer & Size of LSTM layer & Training batch size & Dropout factor & L2 regularisation factor & Learning rate \\ \hline 32 & 60 & 128 & 0.1 & 0.001 & 0.005 \\ \end{tabular} \end{table} Table 1: Hyperparameters selected for the LSTM networks The hyperparameters used for the LSTM networks are shown in Table 1. These hyperparameters were selected with the KerasTuner[42] library using the Hyperband[43] search algorithm over 10 hyperband iterations. Overall, we follow the best practices of the state of the art for LSTM models in this work. ### Measuring \(1/f\) noise In order to measure the spectral noise in the LSTM cell, temporal sequences of the specific activations have to be obtained. To obtain the internal activations of the Keras LSTM cells, the cell was recreated in vanilla Python with NumPy[44]. The code for this is available at [45]. The steps to obtain the power spectral density of any specific activation (\(\mathbf{f}\) in this case) are then as follows: 1. Propagate the review through the LSTM layer, recording the vector \(\mathbf{f}_{t}\) corresponding to the forget gate of each LSTM cell at each timestep \(t\), forming 60 time series of activations corresponding to the 60 LSTM cells. 2. Perform a fast Fourier transform (FFT) on each time series. 3. Take the square of the FFT to obtain the PSD of each time series. 4. Sum the activation power spectral density of each cell in the layer to get the total PSD of the LSTM layer [46]. ## III Results and discussion For each network, the epoch with the lowest network loss was selected as the optimal epoch. The accuracy achieved across the 5 networks was high [47], ranging from 88.41% to 89.19% prediction accuracy on the test data. Table 2 provides a summary of the network loss and accuracy of the 5 LSTM networks used at the optimal epoch. \begin{table} \begin{tabular}{c c c} \hline \hline Network & Network loss on test data & Accuracy on test data (\%) \\ \hline 1 & 0.2778 & 89.19 \\ 2 & 0.2867 & 88.77 \\ 3 & 0.2896 & 88.41 \\ 4 & 0.2788 & 89.07 \\ 5 & 0.2927 & 88.57 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the 5 different LSTM network performances on the test dataset of 15000 reviews. Figure 2: The LSTM cell (dotted circle) with its internal structure shown. The red lines represent the "internal" activations while the green lines represent the "external" activations. \(\sigma_{\mathbf{t}}\) and \(\sigma_{\mathbf{s}}\) represent the tanh activation and sigmoid activation respectively. ### Exponent \(\beta\) for the test set The steps described in Section II.3 were performed for the reviews in the test dataset of length \(\geq 500\) to remove the impact of the padding, as the repeated identical padding tokens have the effect of lowering the exponent in the PSD. Note that training of the LSTM is carried out on reviews regardless of their word lengths, in line with accepted practice in AI. Figure 3(a) shows the PSD of one of the reviews for one of the networks, with the exponent for \(\mathbf{h}\) obtained by taking the gradient of the PSD on a log-log scale. Figure 3(b) displays a histogram of the exponents of \(\mathbf{h}\) obtained for all the test reviews with length \(\geq 500\) for the same network. We see clear \(1/f\) noise here with a mean \(\mu=-0.993\pm 0.073\).
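A minimal sketch of steps 2-4 above, together with the log-log slope fit used to extract the exponent \(\beta\); the function names and the use of `np.polyfit` here are illustrative assumptions, not the released code [45].

```python
import numpy as np

def layer_psd(activations):
    """activations: array of shape (T, n_cells), one activation time series
    per LSTM cell. Returns frequencies and the total PSD of the layer."""
    T = activations.shape[0]
    fft = np.fft.rfft(activations, axis=0)   # step 2: FFT of each time series
    psd = np.abs(fft) ** 2                   # step 3: square to get the PSDs
    total_psd = psd.sum(axis=1)              # step 4: sum over all the cells
    freqs = np.fft.rfftfreq(T)
    return freqs[1:], total_psd[1:]          # drop the zero-frequency bin

def fit_beta(freqs, psd):
    """Exponent beta: gradient of the PSD on a log-log scale."""
    slope, _ = np.polyfit(np.log10(freqs), np.log10(psd), 1)
    return slope
```

Applied to the \(\mathbf{h}\) series of a 500-word review, such a fit yields exponents near \(-1\), consistent with the histogram discussed above.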
### Ruling out \(1/f\) noise in the input One possibility for the presence of \(1/f\) noise is that the input \(\mathbf{x}\) has a PSD that is \(1/f\). As such, it is important to rule out this effect if we are to demonstrate the emergence of \(1/f\) noise from the LSTM. To determine the exponent of the input data, the same process from Section II.3 was performed, using the embedding vector instead of the activation vector. Figure 3(c) shows the PSD of one of the inputs, with the exponent for \(\mathbf{x}\) obtained by taking the gradient of the PSD on a log-log scale. Figure 3(d) displays a histogram of the exponents of \(\mathbf{x}\) obtained for all the test reviews with length \(\geq 500\). Unlike the clear \(1/f\) noise demonstrated with the activations from the LSTM layer, the histogram here shows that the noise in the reviews themselves is effectively uncorrelated white noise, with a mean \(\mu=-0.020\pm 0.033\). Figure 3: (a) PSD (solid, blue) of the activation \(\mathbf{h}_{t}\) for the entire LSTM layer for a single 580 word review (truncated to 500 words) with a line of best fit plotted (dashed, orange). The slope obtained in the log-log plot is the exponent \(\beta\), with a value of \(\beta=-0.99\). (b) Histogram showing the spread of values of \(\beta\) for the activation \(\mathbf{h}\) for a single LSTM network across all the test reviews with length \(\geq 500\). The mean value (solid, red) \(\mu=-0.993\) and standard deviation (dashed, red) \(\sigma=0.073\) are indicated on the histogram. (c) PSD (solid, blue) of the input \(\mathbf{x}_{t}\) for the same 580 word review shown in (a) with a line of best fit plotted (dashed, orange), giving \(\beta=0.00\). (d) Histogram showing the spread of values of \(\beta\) for the inputs \(\mathbf{x}\) across all the test reviews with length \(\geq 500\). The mean value (solid, red) \(\mu=-0.020\) and standard deviation (dashed, red) \(\sigma=0.033\) are indicated on the histogram. Figure 4 is a scatter plot relating the histograms shown in Figure 3, demonstrating the lack of correlation between the activation exponent and the input exponent with an \(R^{2}\) value of 0.083. This further supports our hypothesis that the \(1/f\) noise observed in the LSTM networks is inherent to the networks, rather than a consequence of \(1/f\) noise in the inputs to the networks. Figure 4: Scatter plot of the exponents of \(\mathbf{h}\) vs the exponents of the input \(\mathbf{x}\) for test reviews with length \(\geq 500\). Model used is the same as Figure 3. The input exponents here are not impacting the activation exponent values, showing that the '\(1/f\)' phenomenon in the activation values is not from a similar pattern in the inputs. ### Overall results for \(\beta\) The data for all the activations across all the networks was collected, and the aggregate values of \(\beta\) for each activation are shown in Table 3. The aggregate values for \(\beta\) are provided for the networks before and after training, with the weights of untrained networks randomly initialised with the default GlorotUniform initialiser. \begin{table} \begin{tabular}{c c c} \hline \hline Activation & \(\beta\) (Untrained) & \(\beta\) (Trained) \\ \hline **f** & -0.86 \(\pm\) 0.31 & -0.58 \(\pm\) 0.17 \\ **i** & -0.87 \(\pm\) 0.27 & -0.62 \(\pm\) 0.13 \\ **cc** & -0.03 \(\pm\) 0.29 & -0.312 \(\pm\) 0.086 \\ **o** & -0.78 \(\pm\) 0.18 & -0.56 \(\pm\) 0.15 \\ **c\({}_{\mathbf{out}}\)** & -1.14 \(\pm\) 0.31 & -1.05 \(\pm\) 0.16 \\ **h** & -1.13 \(\pm\) 0.31 & -0.80 \(\pm\) 0.15 \\ \hline \hline \end{tabular} \end{table} Table 3: Summary of exponent \(\beta\) values across the 5 LSTM networks before and after training. The summary of the aggregate results on exponents for different neurons is also shown in Figure 5, where the effect of training is very clearly demonstrated. The "internal" activations \(\mathbf{f}\), \(\mathbf{i}\), \(\mathbf{cc}\), and \(\mathbf{o}\) have relatively less negative trained exponents of between -0.3 and -0.6, while the "external" activations \(\mathbf{c_{out}}\) and \(\mathbf{h}\) are closer to pink noise, with relatively more negative trained exponents. Another point of interest is that the effect of training is to make the exponents less negative, with the exception of the exponent of \(\mathbf{cc}\). This behaviour is similar to that of fMRIs compared to EEGs, where the exponents measured by fMRIs [34], corresponding to signals from the volume of the brain, are less negative than those measured by EEGs [30], corresponding to signals from the surface of the brain. ### Effect of performing a task on \(\beta\) It has been reported that the exponent \(\beta\) of the human brain exhibits different values at rest and when performing tasks. Specifically, its value is more negative at rest than when performing tasks [34]. Here we also mimic the 'rest' state and 'task' state of the LSTM: we assume that using inputs consisting of only 0 values mimics the 'rest' state, and using inputs of actual movie reviews mimics the 'task' state. As shown by the results in Figure 6, the LSTM intriguingly exhibits the same trend in the value of \(\beta\): it is more negative when at 'rest'. One possible explanation is that a more negative \(\beta\) value is associated with a longer memory process. And since the 'rest' state has constant values as input, this input data naturally has longer memory than that of the 'task' inputs. This longer memory process then gets carried over to the activations of the network. ## IV Conclusion In summary, we have found that \(1/f\) noise is also present in artificial neural networks like the Long Short-Term Memory networks that are trained on a real-world dataset. Further analysis showed that such a pattern is not a trivial consequence of a similar pattern in the input data, as the input data shows a clear white noise pattern that is distinct from pink noise, a.k.a. \(1/f\) noise. Since the input data are also real-world natural language sentences that our brain processes, our results demonstrate that artificial neural networks that perform close to human-level cognition exhibit very similar \(1/f\) patterns to their biological counterparts [19; 26; 34]. The analogy was also further extended with the similarity of the trends in the noise exponents for "inner" and "outer" neurons within the LSTM compared to fMRI and EEG exponents respectively [30; 34].
Similarly, the noise exponents for the LSTM networks at "rest" compared to when performing tasks exhibit the same trend found in fMRI data [34]. It is intriguing that despite the vast differences in the microscopic details between biological neural networks and artificial neural networks, such macroscopic patterns of \(1/f\) are strikingly similar. Such similarity points at some deeper principles that govern their healthy functioning, something that is independent of the detailed neural interactions. With artificial neural networks being more 'transparent' to our experimental manipulation and examination than their biological counterparts, they are an ideal proxy for understanding the origin of \(1/f\) noise going forward, as well as a possible tool for understanding more about the healthy functioning of the brain through the \(1/f\) noise perspective. Figure 5: Aggregate values of \(\beta\) for all the activations plotted. The error bar represents 1 standard deviation \(\sigma\) value over 5 LSTM networks. The dotted pink line marks \(\beta=-1.0\) (pink noise), and the dotted black line marks \(\beta=0\) (white noise). Figure 6: The effect of performing a task on the exponent \(\beta\) of the various activations in the LSTM networks. This is drawn in direct analogy with the "rest" vs. "task" measurements for fMRI signals in human subjects [34]. The exponents obtained for "task" correspond to the LSTM cells processing input vectors that correspond to movie reviews. The exponents obtained for "rest" correspond to the LSTM cells processing zero vectors of equal dimension to the movie reviews (40 dimensions after the embedding layer) for 500 timesteps.
2308.12063
Learning the Plasticity: Plasticity-Driven Learning Framework in Spiking Neural Networks
The evolution of the human brain has led to the development of complex synaptic plasticity, enabling dynamic adaptation to a constantly evolving world. This progress inspires our exploration into a new paradigm for Spiking Neural Networks (SNNs): a Plasticity-Driven Learning Framework (PDLF). This paradigm diverges from traditional neural network models that primarily focus on direct training of synaptic weights, leading to static connections that limit adaptability in dynamic environments. Instead, our approach delves into the heart of synaptic behavior, prioritizing the learning of plasticity rules themselves. This shift in focus from weight adjustment to mastering the intricacies of synaptic change offers a more flexible and dynamic pathway for neural networks to evolve and adapt. Our PDLF does not merely adapt existing concepts of functional and Presynaptic-Dependent Plasticity but redefines them, aligning closely with the dynamic and adaptive nature of biological learning. This reorientation enhances key cognitive abilities in artificial intelligence systems, such as working memory and multitasking capabilities, and demonstrates superior adaptability in complex, real-world scenarios. Moreover, our framework sheds light on the intricate relationships between various forms of plasticity and cognitive functions, thereby contributing to a deeper understanding of the brain's learning mechanisms. Integrating this groundbreaking plasticity-centric approach in SNNs marks a significant advancement in the fusion of neuroscience and artificial intelligence. It paves the way for developing AI systems that not only learn but also adapt in an ever-changing world, much like the human brain.
Guobin Shen, Dongcheng Zhao, Yiting Dong, Yang Li, Feifei Zhao, Yi Zeng
2023-08-23T11:11:31Z
http://arxiv.org/abs/2308.12063v2
# Metaplasticity: Unifying Learning and Homeostatic Plasticity in Spiking Neural Networks ###### Abstract The natural evolution of the human brain has given rise to multiple forms of synaptic plasticity, allowing for dynamic changes to adapt to an ever-evolving world. The evolutionary development of synaptic plasticity has spurred our exploration of biologically plausible optimization and learning algorithms for Spiking Neural Networks (SNNs). Present neural networks rely on the direct training of synaptic weights, which ultimately leads to fixed connections and hampers their ability to adapt to dynamic real-world environments. To address this challenge, we introduce the application of metaplasticity - a sophisticated mechanism involving the learning of plasticity rules rather than direct modifications of synaptic weights. Metaplasticity dynamically combines different plasticity rules, effectively enhancing working memory, multitask generalization, and adaptability while uncovering potential associations between various forms of plasticity and cognitive functions. By integrating metaplasticity into SNNs, we demonstrate enhanced adaptability and cognitive capabilities within artificial intelligence systems. This computational perspective unveils the learning mechanisms of the brain, marking a significant step in the profound intersection of neuroscience and artificial intelligence. ## 1 Introduction The long-term stability and efficiency of the brain are mainly attributable to the power of neuronal plasticity [1; 2; 3]. Neuronal plasticity serves as the fundamental attribute of the brain, enabling people to learn new information from the environment, remember various experiences, and adapt to constantly changing surroundings. Neuronal plasticity primarily manifests in two forms: learning plasticity (LP) and homeostatic plasticity (HP), which cooperatively adjust and balance how the brain processes new information and memorizes experiences [4; 5; 6; 7]. LP relies on the activities of the neurons, reflecting environmental information or new learning experiences by adjusting the strength of synaptic connections [8; 9; 10]. Nevertheless, relying solely on learning plasticity does not guarantee the stability of neural networks, particularly when tasked with processing environmental information that is both complex and subject to dynamic changes. Under such circumstances, the role of homeostatic plasticity is paramount. Homeostatic plasticity operates as a stabilizing mechanism within neural networks. In instances where neuronal activity escalates too high or drops too low, this form of plasticity steps in to recalibrate and restore the activity level of neurons to an optimal state, thereby preserving the overarching stability of the neural network [11; 12; 13; 14; 15]. Metaplasticity, often referred to as the 'plasticity of plasticity', amplifies the brain's capacity for learning and adaptation. It fine-tunes neuronal learning strategies and manages the stability of neural networks, thereby allowing the brain the flexibility to adjust to diverse environmental shifts while safeguarding its capacity for normal function across many settings. This intricate balance between flexibility and stability equips our brains with the ability to learn and memorize effectively, even within the context of highly complex environments [16; 17; 18].
Spiking neural networks (SNNs) successfully simulate the discrete spike sequence information transmission process in biological nervous systems by finely modeling the dynamics of biological neurons. The event-driven and real-time nature of information processing in SNNs endows them with a superior ability to manage tasks involving temporal dynamics compared to traditional artificial neural networks (ANNs) [19]. The training methodologies for SNNs primarily bifurcate into two distinct categories. The first approach entails the optimization of synaptic weights utilizing a backpropagation algorithm based on surrogate gradients [20; 21; 22]. The second involves the adjustment of synaptic weights through the application of biologically inspired synaptic plasticity rules, with a significant emphasis on LP [23; 24; 25]. Although these algorithms have solved the training problem of SNNs to a certain extent, they all use fixed learning rules to guide network training and thus lack adaptability to the environment. Some research attempts to enhance the network's information processing capabilities by increasing the heterogeneity of neurons, but the resulting flexibility still needs improvement [26; 27; 28]. Some studies have coordinated various learning rules to enhance the learning capabilities of SNNs; however, these efforts primarily rely on manually preset coordination [29; 30]. Other studies leverage neural modulation factors to enact adaptive Spike-Timing Dependent Plasticity (STDP); however, these works tend to concentrate solely on LP, overlooking the role of HP in synaptic learning [31; 32]. In this study, we present a pioneering approach that shifts the focus from traditional weight-centric training to learning the principles of plasticity. This shift fosters the ability of SNNs to adapt to evolving environments even in the absence of explicit reward signals. By embracing metaplasticity, a high-order process that governs plasticity rules, we enable SNNs to adjust their synaptic modifications based on the neurons' activity history. This mechanism enhances the adaptability and cognitive prowess of artificial intelligence systems and fosters their continual learning and evolution, mimicking the inherent characteristics of their biological counterparts. This augmentation in learning strategy and adaptability grants SNNs improved generalization in dynamic real-world environments, boosting their multi-task learning abilities. Our contributions can be outlined as follows: * We introduce a novel framework emphasizing the learning of plasticity rules rather than direct synaptic weight modifications. This approach yields artificial neural networks that can adapt to their environment, as biological systems do. * By combining Learning Plasticity (LP) and Homeostatic Plasticity (HP), our method potentially increases the generalization and multitasking abilities of SNNs in dynamic, real-world scenarios. It provides a platform for continuous learning and adaptation, mirroring the extraordinary capabilities exhibited by biological nervous systems. * By closely aligning our artificial systems with biological mechanisms, our work strides significantly towards a more robust, adaptable, and biologically plausible artificial neural system. This alignment not only improves the system's learning and adaptation but also aids in understanding the underlying mechanisms of the brain's learning processes.
## 2 Results ### Metaplasticity: Learning to Balance Learning and Homeostatic Plasticity To endow artificial agents with plasticity across their entire lifespan, not just during the training phase, we need to shift our focus from optimizing fixed synaptic weights to optimizing plasticity rules. Inspired by the concept of metaplasticity in biological systems, such an approach allows artificial networks to dynamically adapt their internal structures to evolving environmental conditions, similar to their biological counterparts. This shift requires the development of new models and algorithms that can capture the dynamic, adaptable nature of biological plasticity. These models need to continuously learn from the environment and modify their learning rules in response to changing inputs and tasks, essentially 'learning to learn'. The challenge here lies in incorporating this metaplasticity into the design and training of SNNs, enabling them to better mimic the adaptability and cognitive capabilities of biological neural systems. As shown in Fig. 1, in our metaplasticity framework, we define two aspects of plasticity, LP and HP: * **Learning Plasticity (LP)**: LP, represented by Spike-Timing Dependent Plasticity (STDP), is vital for adjusting synaptic strength based on the timing of pre- and post-synaptic activity. This facet of plasticity helps shape a neuron's response to its inputs by enforcing the causality requirement in synaptic strengthening. * **Homeostatic Plasticity (HP)**: HP plays a crucial role in maintaining the overall level of excitation within a neuron or network within a regulated range and redistributing synaptic efficacy among the different synapses of a neuron. This helps preserve the total synaptic strength of a neuron within certain bounds, thus preventing over-activation or inhibition. Figure 1: **Diagram of metaplasticity.** Top: By combining Learning Plasticity (LP) and Homeostatic Plasticity (HP), neurons can achieve diverse and heterogeneous plasticity. Bottom: Agents with metaplasticity learn plasticity rather than directly adjusting weights. Different forms of synaptic plasticity can be formed between neurons, enabling better multi-task learning. Plasticity helps the agents dynamically adjust weights and learn scenarios unseen during training, even without explicit reward signals. Inspired by these mechanisms, we propose a form of metaplasticity with learnable parameters. Throughout the lifespan of an agent, synaptic weights are updated as follows: \[\Delta w_{i,j}=\eta\Big(\underbrace{A_{i,j}x_{i}(x_{j}-C_{i,j})}_{\text{Learning Plasticity (LP)}}+\underbrace{B_{i,j}x_{j}+D_{i,j}}_{\text{Homeostatic Plasticity (HP)}}\Big) \tag{1}\] In Eq. 1, \(\Delta w_{i,j}\) represents the change in synaptic weight between neurons \(i\) and \(j\). \(x_{i}\) and \(x_{j}\) denote the traces of pre-synaptic and post-synaptic spikes, respectively. The traces increase whenever a spike occurs and gradually decay to zero with their respective intrinsic time constants. The parameters \(A_{i,j}\), \(B_{i,j}\), \(C_{i,j}\), and \(D_{i,j}\) are independent and learnable for each synapse, enabling the formation of distinct plasticity rules. These parameters govern the contributions of each plasticity mechanism to the overall update of synaptic weights. The term \(A_{i,j}x_{i}(x_{j}-C_{i,j})\) embodies the contribution of LP and is related to the activity of the pre- and post-synaptic neurons. The parameter \(A_{i,j}\) regulates the contribution of LP, while \(C_{i,j}\) serves as a threshold parameter determining the level of post-synaptic activity required for synaptic strengthening. The terms \(B_{i,j}x_{j}\) and \(D_{i,j}\) represent the contribution of HP. Here, the parameter \(B_{i,j}\) adjusts the synaptic weight based on the post-synaptic activity \(x_{j}\), while \(D_{i,j}\) acts as a unique bias for each synapse, providing a constant contribution to the synaptic weight update regardless of pre- and post-synaptic activity. Working with the other plasticity mechanisms, this term helps maintain the total synaptic strength within a specific range. The learning rate \(\eta\) scales the overall synaptic weight update. This metaplasticity rule allows the agent to learn the optimal parameters \(A_{i,j}\), \(B_{i,j}\), \(C_{i,j}\), and \(D_{i,j}\) that balance the contributions of the two plasticity mechanisms, thereby adapting its synaptic weights flexibly and stably throughout its lifespan.
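To make the rule concrete, the following is a minimal NumPy sketch of one application of Eq. 1, treating the traces \(x_{i}\), \(x_{j}\) as given inputs; the array shapes and the learning-rate default are illustrative assumptions.

```python
import numpy as np

def metaplastic_update(w, x_pre, x_post, A, B, C, D, eta=0.01):
    """One application of Eq. 1. w, A, B, C, D: (n_pre, n_post) arrays of
    synaptic weights and per-synapse learnable plasticity parameters;
    x_pre, x_post: trace vectors of the pre-/post-synaptic populations."""
    lp = A * x_pre[:, None] * (x_post[None, :] - C)  # LP term A*x_i*(x_j - C)
    hp = B * x_post[None, :] + D                     # HP term B*x_j + D
    return w + eta * (lp + hp)
```

During an agent's lifetime, only \(w\) changes under this rule; \(A\), \(B\), \(C\), and \(D\) stay fixed per synapse and are precisely the quantities optimized by the evolutionary strategy described next.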
Building upon the metaplasticity rule defined in Eq. 1, we employ an Evolutionary Strategy (ES) [33] to optimize the parameters. This strategy is inspired by the notion that the process of natural selection and evolution shapes the intrinsic properties of biological organisms, including their neural plasticity mechanisms. In this context, the parameters of the metaplasticity rule can be viewed as intrinsic priors optimized throughout evolution to ensure survival and adaptation to various environmental conditions. The ES involves a population of agents, each with a unique set of metaplasticity parameters. The fitness of each agent is evaluated based on its ability to adapt and perform in various tasks and environments. The fittest agents are selected, and their metaplasticity parameters are used to generate the next generation of agents, with some variations introduced to encourage exploration of the parameter space. Through this process, the metaplasticity parameters are optimized, enabling the agents to maintain plasticity and dynamically adapt to the environment, even without explicit reward signals. ### Metaplasticity Enhances Working Memory Capacity In this section, we aim to demonstrate the effect of metaplasticity on Working Memory (WM). WM is the ability to maintain and process information temporarily and is the cornerstone of higher intelligence [34]. We explore the significant impact of metaplasticity on enhancing the WM capabilities of SNNs. We utilize a task known as the copying task [35], as illustrated in Fig. 2**A**. In this task, SNNs are initially presented with a sequence of stimuli, each lasting for \(200\)ms. This is followed by a delay period of variable length, and finally a test stimulus of duration equal to the sample stimuli is presented. The challenge for the SNNs lies in accurately reproducing the initial sequence of stimuli in the correct order upon receiving the test stimulus. To thoroughly demonstrate the advantages of employing a metaplasticity-based approach in working memory tasks, we compare against the strategy of directly optimizing the weights of widely used three-layer SNNs with the same ES. As shown in Fig. 2**B**, SNNs equipped with metaplasticity exhibit faster convergence rates, an ability to retain memory over longer durations, and an enhanced memory capacity.
To further investigate the influence of metaplasticity, we visualize the synaptic weights following various stimulus inputs when the stimulus length is set to \(8\). As shown in Fig. 2**C**, SNNs endowed with metaplasticity can form distinct connection weights for different stimuli, demonstrating their superior adaptive capacity. We compare the neuronal states at various stages between SNNs directly trained with weights and those incorporating metaplasticity, as illustrated in Fig. 2**D**. SNNs directly trained with weights rely on neuronal activity during the delay period to maintain memory. Resetting the membrane potential to \(0\) after the stimulus input leads to memory loss for input stimuli. This indicates that in SNNs directly optimized with weights, memory is primarily stored in neuronal activity. In contrast, SNNs incorporating metaplasticity encode the input stimulus into synaptic weights, demonstrating remarkable memory capabilities. Their ability to adjust synaptic weights facilitates the enhancement of working memory and allows neurons to remain in a resting state when not receiving task-related stimuli. This contributes to network efficiency and enhances network capacity, which has been validated in other biological and computational neuroscience studies [36; 37]. Finally, we visualize the average spike traces of SNNs under different training strategies, as shown in Fig. 2**E**. Notably, SNNs incorporating metaplasticity can sustain lower firing rates, thereby enhancing their efficiency in managing computational resources. In summary, these results highlight the significant role of metaplasticity in bolstering the working memory capacity of SNNs, demonstrating its potential for facilitating complex cognitive tasks in artificial intelligence systems. ### Metaplasticity Enhances Multi-task Learning Reinforcement Learning (RL) involves an agent learning to interact with its environment to achieve a specific goal, with the quality of its actions dictated by a reward signal. Conventional RL approaches often entail training the agent on a single task with a fixed reward function. However, in complex, real-world environments, agents need to handle multiple tasks and adjust to changing reward functions. This multi-task learning scenario presents significant challenges, particularly when the tasks involved are diverse and potentially conflicting. Figure 2: Design of the WM experiment and the impact of metaplasticity on WM. **A.** Schematic of the copying task. The SNNs first receive a sequence of motion stimuli, with each stimulus lasting \(200\)ms, followed by a delay period of varying lengths, and finally a test stimulus of the same duration as the sample stimulus. The SNNs are required to reproduce the stimuli from the first phase upon receiving the test stimulus. **B.** Performance comparison between SNNs with plasticity and those trained with direct weights. SNNs with plasticity show faster convergence, longer memory duration, and greater memory capacity. 'Len' refers to the length of stimulus samples, while 'Lat' refers to the number of steps in the delay period. **C.** Synaptic weights after different motion stimulus inputs when the number of motion samples is \(8\). SNNs with plasticity can form distinct connection weights for different stimuli. The left side of the dashed line shows the input weights associated with the stimulus, and the right side shows the output weights. **D.** Neuron states at different stages in SNNs trained directly with weights and those with metaplasticity.
Directly trained SNNs require neural activity during the delay period to maintain memory. Resetting the membrane potential to \(0\) after the input stimulus leads to chance-level memory accuracy, resulting in memory loss. In contrast, SNNs with plasticity can encode input stimuli into synaptic weights, demonstrating stronger memory functionality. **E.** Visualization of the firing rates at different stages and the average spike traces for SNNs using different strategies. SNNs with plasticity can maintain lower firing rates. In the field of RL, some of the most challenging problems lie within the domain of continuous control, usually modeled using sophisticated physics engines. These tasks require agents to manipulate simulated physical entities with high precision and coordination, similar to how humans control their limbs to carry out complex tasks. We use the Brax [38] simulator to design six continuous control environments. In these settings, agents need to navigate at various speeds, in various directions, and to various destination points. As shown in Fig. 3**A**, different task objectives are treated as observations to guide the agent in accomplishing different tasks. These challenging tasks serve as baselines in fields such as meta-learning [39; 40; 41]. During training, the agents are only exposed to a limited number of task instances, such as eight specific directions or eight fixed speeds, and they use a single network to learn these unrelated or conflicting tasks. Our experiments compare two types of SNNs - one with synaptic weights that have been directly optimized and another with optimized metaplasticity. Both types of SNNs maintain the same scale and structure to ensure a fair comparison. Through this, we aim to highlight the advantages of optimizing metaplasticity over direct weight optimization in a multi-task environment. This comparison also evaluates the effectiveness and potential of our proposed model, especially in the face of the inherent challenges posed by complex continuous control tasks. We explore the performance of metaplasticity in a three-layer, fully-connected SNN model with \(128\) hidden spiking neurons. Both the synaptic weights and the plasticity parameters are initialized to \(0\). During testing, the plasticity rules of agents with plasticity are fixed and their synaptic weights are reset to \(0\); for agents trained directly on weights, the trained weights are applied during testing. We utilize reinforcement learning tasks to thoroughly test metaplasticity, requiring the agents to learn to cope with different tasks simultaneously and to generalize the acquired knowledge to unseen, more complex tasks. In our experiments, agents with metaplasticity show superior performance in multiple tasks compared to agents with directly optimized synaptic weights. As seen in Fig. 3**B** and Tab. 1, metaplastic agents exhibit more effective learning curves and quickly adapt to changes in tasks, making them more capable of tackling the multi-task challenges inherent in the designed environments. In contrast, SNNs with directly trained weights fail to adapt to different tasks and can only acquire trivial solutions, such as maintaining approximate immobility in tasks involving multiple target directions. Ablation studies, depicted in Fig. 3**C** and Tab. 1, provide further insights into the contributions of different plasticity mechanisms.
Removing any form of plasticity results in decreased agent performance, with the removal of HP causing divergence due to the loss of the equilibrium mechanism. This result highlights the importance of all plasticity mechanisms in maintaining the stability and adaptability of the agents. The changing curve of a synapse's metaplasticity during training (Fig. 3**D**), along with the impact of plasticity on weights for different generations of agents (Fig. 3**E**), demonstrate the effective learning of optimal parameters for the metaplasticity rule, facilitated by the evolutionary strategy. Interestingly, the specific functions of plasticity across different generations of agents differed (Fig. 3**F**), indicating the evolution and fine-tuning of plasticity mechanisms to improve agent performance across generations. Figure 3: **Metaplasticity's performance in multi-task reinforcement learning tasks and the influence of different plasticity attributes on performance.** **A.** Illustration of multi-task reinforcement learning. The agent is required to utilize a single network to simultaneously learn multiple tasks with distinct or even entirely opposing objectives. The objectives of the tasks are treated as observations for the agent. They are input to the SNNs along with other observations such as joint positions, velocities, etc. **B.** Training curves of agents with metaplasticity versus those trained directly on weights. In these multi-task RL tasks, agents need to learn to move in different directions (ant_dir, swimmer_dir), at varying speeds (halfcheetah_vel, hopper_vel), and to different locations (fetch, ur5e). Agents with metaplasticity maintain dynamic synaptic weights, learn characteristics of different tasks, and hence achieve superior performance in multi-task challenges. **C.** Ablation analysis of different plasticity mechanisms. Different colors represent training curves with some form of plasticity (LP or HP) removed. After removing HP, SNNs diverge due to the loss of the equilibrium mechanism. Both LP and HP play significant roles in enhancing agent performance. **D.** The change curve of a synapse's metaplasticity during the training process. Through evolutionary strategies, agents learn to adjust their plasticity. **E.** The impact of plasticity on weights for agents from different generations during training, at different inter-spike intervals of pre-synaptic and post-synaptic neurons. **F.** The specific functions of plasticity in agents from different generations, as shown in **E**. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Training & ant\_dir & swimmer\_dir & halfcheetah\_vel & hopper\_vel & fetch & ur5e \\ \hline \hline \(\text{Opt}_{LP+HP}\) & \(6904\pm 801\) & \(10531\pm 827\) & \(-549\pm 95\) & \(869\pm 38\) & \(51\pm 3\) & \(86\pm 5\) \\ \(\text{Opt}_{HP}\) & \(3284\pm 570\) & \(7831\pm 1479\) & \(-870\pm 312\) & \(792\pm 50\) & \(26\pm 26\) & \(58\pm 13\) \\ \(\text{Opt}_{LP}^{*}\) & - & - & - & - & - & - \\ \(\text{Opt}_{Weight}\) & \(1069\pm 98\) & \(31\pm 8\) & \(-1598\pm 324\) & \(729\pm 116\) & \(15\pm 0.6\) & \(7\pm 5\) \\ \hline \hline Chance Level & \(995\pm 0.01\) & \(0.12\pm 0.02\) & \(-4946\pm 1.3\) & \(6.51\pm 0.3\) & \(4.74\pm 0.0\) & \(0\pm 0.0\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of performance across various reinforcement learning tasks with different configurations of synaptic plasticity. Each task is evaluated under different training methods: \(\text{Opt}_{LP+HP}\), \(\text{Opt}_{HP}\), \(\text{Opt}_{LP}\), and direct weight training (\(\text{Opt}_{Weight}\)). The values presented are the mean and standard deviation over \(5\) trials. The row 'Chance Level' shows the performance metrics when actions are chosen randomly. \({}^{*}\) Upon the removal of Homeostatic Plasticity (HP), the SNNs become unstable, leading to divergence. ### Metaplasticity Enhances Generalization Ability Metaplasticity serves as an important mechanism for enhancing an agent's generalization abilities, enabling the agent to exhibit stronger performance when dealing with unfamiliar tasks or when facing neuronal damage. We further investigate the performance of metaplasticity when the agents face injuries. We design two different types of injuries: temporary injuries and permanent injuries. Temporary injuries refer to a scenario where all synaptic weights of the agents are reset to \(0\) and held there for \(50\) steps. Permanent injuries refer to a situation where some synapses are set to \(0\) initially and never update according to plasticity. In the tests for injuries, agents with plasticity display a remarkable ability to recover from temporary neuronal damage simulated by resetting all synaptic weights to \(0\) for \(50\) steps (Fig. 4**A**). Fig. 4**B** illustrates the changes in network weights at various stages before and after the temporary damage. Remarkably, despite losing all synaptic weights due to the inflicted temporary damage, agents manage to recover these weights using their inherent plasticity and incoming input stimuli. Moreover, even under permanent neuronal damage, with a proportion of neurons blocked and their weights unable to update, the plasticity-enabled agents continue to exhibit better performance and robustness (Fig. 4**C**). These results suggest that metaplasticity can contribute to the resilience of artificial agents, much as it does in biological systems. Figure 4: **Performance under temporary and permanent nerve injury.** **A.** The agent's performance in the face of temporary neural damage. At the \(500\)th step, all synaptic weights were reset to \(0\) to simulate a sudden neural system injury, and this condition lasted for \(50\) steps. Agents with plasticity were able to recover from this temporary loss, demonstrating better robustness. **B.** Network weights at different times before and after temporary damage. The synaptic weights of the input layer are shown above the dashed line, while the readout layer weights are below the dashed line. Even if the agent loses all synaptic weights due to temporary damage, it can still recover these weights based on its plasticity and input stimuli. **C.** The agent's performance in the face of permanent neuronal damage of varying degrees. At the start of the test, neurons were blocked at different proportions, their synaptic weights set to \(0\), and could not be updated, simulating permanent neural network damage. Agents with plasticity performed better and exhibited stronger robustness when dealing with such permanent damage. Figure 5: **A.** Performance of different agents in trained tasks and tasks not seen during training. Agents with plasticity can generalize well to unseen tasks, while agents trained directly on weights have difficulty generalizing to unseen test tasks due to their weights being fixed during testing. **B.** Low-dimensional embeddings of neuronal states during reinforcement learning tasks, differentiated by training strategy.
Each point corresponds to the state of the hidden layer neurons at a specific time step. The color coding signifies distinct tasks. Agents that possess plasticity demonstrate an enhanced capability to distinguish between different tasks. Moreover, the neuronal states associated with identical tasks exhibit the intriguing property of forming a manifold within the high-dimensional space. More strikingly, metaplastic agents show a robust ability to generalize to tasks unseen during training, as demonstrated in Fig. 5**A**. For tasks with various movement directions and speeds, the agents only encounter a small subset of cases during training, represented by the red and orange points in Fig. 5**A**. Therefore, in this experiment, the agents are required to move in directions and at speeds unseen in training, emphasizing the agents' deeper understanding of and generalization capacity for the tasks. In contrast, agents trained directly on weights struggle with generalization due to their fixed weights during testing. This observation underscores the flexible adaptability offered by metaplasticity, enhancing the agent's ability to navigate unseen scenarios. During the training phase, the agents are instructed to move in straight lines in eight specific directions. However, as illustrated in Fig. 5**A**, agents with metaplasticity demonstrate a degree of generalization capability. They can learn to move in straight lines toward directions not encountered during training, while agents without metaplasticity struggle to generalize what they have learned. To further test the generalization capacity introduced by metaplasticity, we examine whether agents can learn to turn, or even follow more complex paths, simply by changing the target signal and without any additional feedback information related to posture. The results are shown in Fig. 6. Compared to agents that directly train their weights, those with metaplasticity demonstrate impressive generalization abilities on this complex task. They can quickly adjust synaptic weights through metaplasticity and dynamically modify their state in scenarios never seen during training. This allows them to progress toward varying target directions. In Fig. 5**B**, we visualize the neuronal states of agents trained using different strategies during RL tasks in a low-dimensional space. Each point in this representation corresponds to the state of the hidden layer neurons at a particular time step, with the varying colors indicating different tasks. The remarkable aspect of these visualizations is how agents with inherent plasticity demonstrate a pronounced ability to discern between distinct tasks. Even more intriguing is the observation that the neuronal states corresponding to the same task tend to cluster together, forming a clear manifold in the high-dimensional space. This feature of metaplasticity contributes to the agent's robust ability to generalize across various tasks while maintaining distinctive task-specific patterns in neuronal states. This further illustrates the powerful capabilities of agents with metaplasticity and the profound impact of such plasticity on the agent's learning and adaptation abilities. ## 3 Discussion Current artificial intelligence algorithms often focus on solving specific tasks, and their performance may fall short when faced with scenarios that deviate from their training conditions.
These algorithms' singular functionality, lack of robustness, and limited flexibility restrict their adaptability and application in complex and variable environments. Figure 6: **Testing of the agent's generalization capabilities.** During training, the agent only learned to move in a straight line. The agent's movement trajectories are shown when the target direction is altered during the testing process. The green line represents the expected trajectory of the agent moving at a constant speed. The orange line is the actual movement trajectory of the agent with plasticity. Agents with plasticity can better understand different tasks, adjust synaptic weights according to different target directions, and thus show greater flexibility and superior generalization performance. In contrast, biological entities demonstrate exceptional adaptability in complex and changing environments, typically attributed to the plasticity of biological neural networks. Synaptic plasticity is the cornerstone and heart of exploring more generalized intelligence [42; 43]. SNNs, with their brain-inspired operating mechanisms, lay the foundation for constructing flexible and robust intelligent systems, thereby attracting considerable attention in the field of artificial intelligence [44; 45; 46; 47]. However, current SNN training algorithms primarily rely on the backpropagation of external error signals and biologically-inspired plasticity rules, such as Spike-Timing-Dependent Plasticity. Although these methods demonstrate robust performance on individual tasks, the fixed learning paradigm still limits the generalization ability of SNNs and their adaptability in multi-task environments. Contrary to the static nature of synaptic weight adjustments in traditional SNNs, we delve into metaplasticity, facilitated by the synergy between LP and HP. Metaplasticity represents a higher-order learning process that dynamically adjusts plasticity rules. By fostering adaptive synaptic modifications derived from the history of neuronal activity, metaplasticity promotes the emergence of a more dynamic and self-regulated learning system. Such a mechanism potentially narrows the gap between SNNs and their biological equivalents, thereby facilitating the progression towards continual learning and adaptation. Our experimental results reveal that metaplasticity significantly enhances the memory capacity of SNNs by encoding memories directly into synaptic weights. Moreover, it does not rely on spike activity to sustain memory, allowing the network to remain in a resting state when not processing task-related stimuli, thus significantly improving the energy efficiency of SNNs. Metaplasticity also dramatically amplifies the multi-task learning and generalization capabilities of SNNs, facilitating a swift transfer of knowledge learned from other tasks to more complex and unfamiliar tasks. Regarding adapting paths and turning towards new directions, our metaplasticity models can maneuver in ways not encountered during training, a critical feature in complex, dynamic, and unpredictable real-world environments. Importantly, metaplasticity plays a pivotal role in bolstering the robustness of SNNs under simulated motor impairment scenarios. The exceptional resilience demonstrated in temporary damage scenarios validates the advantages of integrating metaplasticity into neural networks. Moreover, even under permanent damage, the exhibited resilience reinforces the case for metaplasticity as an inherent attribute of artificial systems.
These characteristics reflect recovery mechanisms in biological systems, where the brain mitigates damage through neural reorganization and the formation of new connections. In conclusion, our study highlights metaplasticity as a critical feature that enhances the resilience and adaptability of artificial agents. These findings provide valuable insights for designing future artificial systems, opening up new possibilities for creating adaptive, robust, and intelligent agents capable of navigating complex and dynamic environments. Further work can explore more sophisticated forms of metaplasticity and study their impacts on various facets of artificial agent performance.

## 4 Method

### Neuron and synaptic models

We employed leaky integrate-and-fire (LIF) neurons in our network models due to their biological plausibility and computational efficiency. The state of each LIF neuron is represented by its membrane potential, which integrates the incoming signals and generates a spike when the potential crosses a predefined threshold, as shown in Eq. 2. \[\tau_{m}\frac{\partial v}{\partial t}=-(v-v_{0})+I(t) \tag{2}\] In Eq. 2, \(\tau_{m}\) is the membrane time constant, \(v\) is the membrane potential, \(v_{0}\) is the resting potential, and \(I(t)\) is the total synaptic current at time \(t\). Once the membrane potential \(v\) exceeds a threshold \(v_{th}\), the neuron generates a spike, and the potential is reset. A discrete form of the LIF neuron's behavior can be described as: \[u(t) =v(t-\Delta t)+\frac{\Delta t}{\tau_{m}}(\sum_{i}w_{i}s_{i}(t)-v(t-\Delta t)+v_{0}) \tag{3}\] \[s(t) =g(u(t)-v_{th})\] \[v(t) =u(t)(1-s(t))+v_{reset}s(t)\] In Eq. 3, \(\Delta t\) is the time step, \(v_{reset}\) is the reset potential, \(u(t)\) and \(v(t)\) represent pre- and post-spike membrane potentials, and \(g(\cdot)\) is the Heaviside function modeling spiking behavior. Once the reward signal is removed, updating of the network weights stops. The strategy of directly optimizing the weights serves as a control group in our experiments. Neuronal parameters are given in Tab. 2 unless otherwise specified. _Traces_ are the tracks produced at the pre- and post-synaptic sites by the spikes of pre- or post-synaptic neurons. Generally, these traces represent the recent activation level of pre- and post-synaptic neurons [48]. Traces can be computed by integrating spikes using a linear operator in the model and a low-pass filter in the circuit, or by using non-linear operators/circuits. In the experiments, the synaptic traces were modeled as follows: \[x(t)=\sum_{\tau=0}^{t}\lambda^{t-\tau}s(\tau) \tag{4}\] In Eq. 4, \(x(t)\) is the synaptic trace at time \(t\), \(\lambda\) is the decay factor reflecting how quickly a spike's influence fades with time, and \(s(\tau)\) represents the spike at time \(\tau\). In the context of our experiment, these synaptic traces maintain a short-term history of neuronal activation, thereby adding an element of temporal dynamics to our network model. As shown in Eq. 1, these synaptic traces are used to maintain a short-term history of neuronal activation and, in conjunction with metaplasticity, to modulate synaptic weights.

### Experimental Settings

#### 4.2.1 Working Memory Task

To validate the impact of metaplasticity on working memory, we designed a working memory task. The agent first receives a stimulus sequence and, after a delay of \(m\) steps, is asked to reproduce the received stimulus.
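Before turning to the task details, here is a minimal sketch (not the authors' implementation) of the discrete LIF update in Eq. 3 and the recursive form of the trace in Eq. 4. The per-step decay value `lam` is an illustrative assumption, since Table 2 specifies \(\lambda\) as a time constant rather than a dimensionless factor.

```python
import numpy as np

def lif_step(v, s_in, w, dt=20.0, tau_m=40.0, v0=0.0, v_th=0.1, v_reset=0.0):
    """One discrete LIF update following Eq. 3 (defaults from Table 2, WM task).
    v: membrane potential; s_in: presynaptic spikes; w: synaptic weights."""
    u = v + (dt / tau_m) * (np.dot(w, s_in) - v + v0)  # integrate input and leak
    s = np.heaviside(u - v_th, 0.0)                    # spike if threshold crossed
    v_new = u * (1.0 - s) + v_reset * s                # reset potential on a spike
    return s, v_new

def trace_step(x, s, lam=0.9):
    """Recursive form of Eq. 4: x(t) = lam * x(t-1) + s(t)."""
    return lam * x + s
```

Note that the sum in Eq. 4 unrolls exactly into this recursion, so the trace can be updated in constant time per step.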
In each experiment, a random sequence of length \(n\) is generated, where \(r_{t}\sim\mathcal{B}(1,\frac{1}{2})\) for \(1\leq t\leq n\). At each time step, the input is a three-dimensional vector \(\vec{a_{t}}\), and the trial can be divided into three stages:

* Stimulus reception: If \(1\leq t\leq n\), \(\vec{a_{t}}=(r_{t},1,0)\). The first element is the type of input stimulus, and the second element is the indicator for the input stimulus.
* Delay: If \(n<t\leq n+m\), \(\vec{a_{t}}=(0,0,0)\). This phase represents a delay period where no new stimulus is presented.
* Stimulus reproduction: If \(n+m<t\leq 2n+m\), \(\vec{a_{t}}=(0,0,1)\). The last element indicates whether a pulse needs to be reproduced.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Parameter**} & \multicolumn{2}{c}{**Value**} & \multirow{2}{*}{**Description**} \\ \cline{2-3} & WM task & RL task & \\ \hline \(\Delta t\) & \(20\) ms & \(200\) ms & Simulation time step \\ \(\tau_{m}\) & \(40\) ms & \(400\) ms & Membrane time constant \\ \(\lambda\) & \(54\) ms & \(544\) ms & Decay factor \\ \(v_{th}\) & \(0.1\) V & \(0.1\) V & Membrane threshold \\ \(v_{reset}\) & \(0\) mV & \(0\) mV & Reset potential \\ \(v_{0}\) & \(0\) mV & \(0\) mV & Resting potential \\ \hline \hline \end{tabular} \end{table} Table 2: Parameters of the spiking neurons.

At each step, the model produces a scalar output \(s_{t}\), which is a prediction of the stimulus. The negative mean squared error over the last \(n\) steps is taken as the reward of the model: \[R=-\frac{1}{n}\sum_{\tau=1}^{n}(r_{\tau}-s_{m+n+\tau})^{2} \tag{5}\] Eq. 5 is used as the reward function in training. To intuitively compare agents with different strategies, as shown in Fig. 2**B**, we use the average accuracy per step as the performance measure during testing, defined in Eq. 6. \[\text{Acc}=\frac{1}{n}\sum_{\tau=1}^{n}\mathbb{1}\left[r_{\tau}=s_{m+n+\tau}\right] \tag{6}\] During the stimulus reception stage, each stimulus follows the distribution \(\mathcal{B}(1,\frac{1}{2})\), which means that the average accuracy at the chance level is \(0.5\).

#### 4.2.2 Multi-task Reinforcement Learning

We evaluated our method on six continuous control environments based on the Brax simulator (ant_dir, swimmer_dir, halfcheetah_vel, hopper_vel, ur5e, fetch).

* ant_dir: We train an ant agent to run in a target direction in this environment. The training task set includes \(8\) directions, uniformly sampled from \([0,360]\) degrees. As shown in Fig. 3**D**, the generalization test task set includes \(72\) directions, uniformly sampled from \([0,360]\) degrees. The agent's reward comprises the speed along the target direction and a control cost.
* swimmer_dir: In this environment, we train a swimmer agent to move in a fixed direction. The settings for training and testing tasks are similar to ant_dir.
* halfcheetah_vel: In the halfcheetah_vel environment, we train a half-cheetah agent to move forward at a specific speed. The training tasks include \(8\) speeds, uniformly sampled from \([1,10]\) m/s. The generalization test tasks include \(72\) different speeds, uniformly sampled from the same range as the training tasks.
* hopper_vel: In the hopper_vel environment, we train a hopper agent to advance at a specific speed. The experimental setup is the same as halfcheetah_vel, but the sampling interval for the speed is \([0,2]\) m/s.
* ur5e: The UR5e is a common 6-DOF (degrees of freedom) robotic arm frequently used in industrial automation and robotics research.
The agent receives a reward when the distance between the robotic arm's end and the target position is less than \(0.02\) m. The target position is then randomly reset. The agent's goal is to reach the target position as many times as possible within the stipulated time. * fetch: We train a dog agent to run to a target location in this environment. The experimental setup is similar to ur5e. The agent's final reward is the average reward across all tasks, which encourages the agent to learn multiple tasks simultaneously. ### Training Strategies We employ Parameter-Exploring Policy Gradients (PEPG) [33] to optimize SNNs. For SNNs with plasticity, the plasticity parameters in Eq. 1 are used for optimization. Evolution across generations is facilitated by modifying synaptic plasticity rules rather than directly adjusting the weights. SNNs with directly trained weights are considered a control group, where synaptic weights are the optimization parameters. The implementation of PEPG used in the experiments is provided by Algorithm 1. Unless expressly stated otherwise, the parameter settings and their explanations are shown in Table 3. The way to compute fitness \(f(\theta)\) varies depending on the task. For the working memory task, fitness is provided by Eq. 5, while for multi-task reinforcement learning, fitness is the average episodic reward across different subtasks.
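As a reference point, here is a minimal sketch of symmetric-sampling PEPG in the spirit of the algorithm described above; the update normalization and hyperparameters are illustrative assumptions and may differ from the exact settings of Algorithm 1 and Table 3. For the working memory task, `fitness` would run an episode with the candidate plasticity parameters and return the reward of Eq. 5; for multi-task RL, it would return the average episodic reward across subtasks.

```python
import numpy as np

def pepg(fitness, dim, sigma0=0.1, lr_mu=0.2, lr_sigma=0.1,
         pop=20, iters=100, rng=np.random.default_rng(0)):
    """Symmetric-sampling PEPG: evolve a Gaussian N(mu, sigma^2) over the
    optimization parameters (here, the plasticity parameters)."""
    mu = np.zeros(dim)
    sigma = np.full(dim, sigma0)
    for _ in range(iters):
        eps = rng.normal(size=(pop, dim)) * sigma          # symmetric perturbations
        f_pos = np.array([fitness(mu + e) for e in eps])
        f_neg = np.array([fitness(mu - e) for e in eps])
        baseline = np.mean(np.concatenate([f_pos, f_neg]))
        r_dir = (f_pos - f_neg) / 2.0                      # drives the mean update
        r_scale = (f_pos + f_neg) / 2.0 - baseline         # drives the std update
        mu += lr_mu * (r_dir @ eps) / pop
        sigma += lr_sigma * (r_scale @ ((eps**2 - sigma**2) / sigma)) / pop
        sigma = np.maximum(sigma, 1e-8)                    # keep exploration alive
    return mu
```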
2302.01736
Relating EEG to continuous speech using deep neural networks: a review
Objective. When a person listens to continuous speech, a corresponding response is elicited in the brain and can be recorded using electroencephalography (EEG). Linear models are presently used to relate the EEG recording to the corresponding speech signal. The ability of linear models to find a mapping between these two signals is used as a measure of neural tracking of speech. Such models are limited as they assume linearity in the EEG-speech relationship, which omits the nonlinear dynamics of the brain. As an alternative, deep learning models have recently been used to relate EEG to continuous speech. Approach. This paper reviews and comments on deep-learning-based studies that relate EEG to continuous speech in single- or multiple-speaker paradigms. We point out recurrent methodological pitfalls and the need for a standard benchmark of model analysis. Main results. We gathered 29 studies. The main methodological issues we found are biased cross-validations, data leakage leading to over-fitted models, or disproportionate data size compared to the model's complexity. In addition, we address requirements for a standard benchmark model analysis, such as public datasets, common evaluation metrics, and good practices for the match-mismatch task. Significance. We present a review paper summarizing the main deep-learning-based studies that relate EEG to speech while addressing methodological pitfalls and important considerations for this newly expanding field. Our study is particularly relevant given the growing application of deep learning in EEG-speech decoding.
Corentin Puffay, Bernd Accou, Lies Bollens, Mohammad Jalilpour Monesi, Jonas Vanthornhout, Hugo Van hamme, Tom Francart
2023-02-03T13:51:01Z
http://arxiv.org/abs/2302.01736v4
# Relating EEG to continuous speech using deep neural networks: a review.

###### Abstract

_Objective._ When a person listens to continuous speech, a corresponding response is elicited in the brain and can be recorded using electroencephalography (EEG). Linear models are presently used to relate the EEG recording to the corresponding speech signal. The ability of linear models to find a mapping between these two signals is used as a measure of neural tracking of speech. Such models are limited as they assume linearity in the EEG-speech relationship, which omits the nonlinear dynamics of the brain. As an alternative, deep learning models have recently been used to relate EEG to continuous speech, especially in auditory attention decoding (AAD) and single-speech-source paradigms. _Approach._ This paper reviews and comments on deep-learning-based studies that relate EEG to continuous speech in AAD and single-speech-source paradigms. We point out recurrent methodological pitfalls and the need for a standard benchmark of model analysis. _Main results._ We gathered 29 studies. The main methodological issues we found are biased cross-validations, data leakage leading to over-fitted models, or disproportionate data size compared to the model's complexity. In addition, we address requirements for a standard benchmark model analysis, such as public datasets, common evaluation metrics, and good practices for the match-mismatch task. _Significance._ We are the first to present a review paper summarizing the main deep-learning-based studies that relate EEG to speech while addressing methodological pitfalls and important considerations for this newly expanding field. Our study is particularly relevant given the growing application of deep learning in EEG-speech decoding.

## 1 Introduction

Electroencephalography (EEG) is a non-invasive method that measures the electrical activity in the brain. When a person listens to speech, the measured EEG signal has been shown to contain information related to different features of the presented continuous speech. We can relate these speech features to the EEG activity using machine learning models to investigate if and how the brain processes continuous speech. Reasons for doing this include (1) understanding neural mechanisms of speech processing in the brain; (2) objectively measuring processes in the brain related to speech processing using single sound source stimulus-response models, which in turn are useful for research and clinical diagnostics of hearing; (3) designing auditory prostheses that incorporate auditory attention decoding (AAD) and use it to steer noise suppression to improve speech understanding in so-called cocktail party scenarios (AAD systems are designed to decode which speaker from a mixture a listener attends to); and (4) providing EEG-speech modeling architectures to interested readers from any field. Currently, primarily linear models are used to relate continuous speech to EEG (e.g., Ding and Simon, 2012; Vanthornhout et al., 2018; Crosse et al., 2016; de Cheveigne et al., 2018; Crosse et al., 2021). Such models are used to either predict EEG from speech (forward modeling) or to reconstruct speech from EEG (backward modeling). Once the EEG (or speech) is approximated, a correspondence measure between the predicted and the ground truth signal is computed and considered a measure of neural tracking.
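To ground the forward/backward terminology, here is a hedged sketch of a time-lagged linear backward model (ridge regression) that reconstructs the speech envelope from EEG and scores it with Pearson correlation; the lag count and regularization strength are arbitrary illustrative choices, not values from any reviewed study.

```python
import numpy as np

def backward_model(eeg, envelope, lags=32, reg=1e3):
    """Reconstruct the speech envelope from time-lagged EEG (backward modeling).
    eeg: (n_channels, n_samples); envelope: (n_samples,)."""
    n_ch, n_t = eeg.shape
    # Feature matrix: EEG at lags 0..lags-1 relative to each stimulus sample.
    X = np.stack([np.roll(eeg, -l, axis=1) for l in range(lags)], axis=0)
    X = X.reshape(n_ch * lags, n_t)[:, :n_t - lags].T   # drop wrapped-around samples
    y = envelope[:n_t - lags]
    w = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ y)
    pred = X @ w
    r = np.corrcoef(pred, y)[0, 1]                      # neural tracking measure
    return w, r
```

A forward model is the mirror image: regress each EEG channel on time-lagged speech features and correlate the predicted with the recorded EEG.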
For single sound source stimulus-response models, the end-task is to maximize the prediction (or reconstruction) quality, whereas for AAD models, the task is to decode the attended speaker or direction. For single sound source stimulus-response models, the main correspondence measure for prediction (or reconstruction) quality is Pearson correlation, and it can be used either between the predicted and ground truth signal, or between two maximally correlated embedded representations as implemented in the canonical correlation analysis (CCA) method (de Cheveigne et al., 2018). However, comparison of correlations across experiments is delicate if the preprocessing of the target signals is different. As an example, predicting EEG in a narrow frequency band is easier than predicting the whole EEG spectrum. This implies that, to maximize correlation, one can decide to filter the EEG in a given frequency band, even though some speech-related information is contained in the filtered-out EEG (de Cheveigne et al., 2021). To circumvent that issue, a simple objective task, the match-mismatch (MM) task (de Cheveigne et al., 2018), was developed. This classification task requires a model to decide whether a segment of EEG was evoked by a given segment of speech or not. Linear models are limited as they assume linearity between the EEG and speech signals, inadequately fitting the nonlinear nature of the auditory system. For example, it is well known that response latencies can change depending on a person's level of attention and state of arousal (Ding and Simon, 2012), which cannot be modeled with a single linear model. Deep neural networks (DNNs) have been recently introduced to this field. Many studies have shown the ability of deep learning models to relate EEG to speech (see Section 2), be it for neural tracking assessment (e.g., Katthi and Ganapathy, 2021; Accou et al., 2021; Monesi et al., 2020; Thornton et al., 2022) or to decode auditory attention (e.g., de Taillez et al., 2020; Ciccarelli et al., 2019; Kuruvila et al., 2021). On certain tasks, DNNs have outperformed linear decoder baselines (e.g., Accou et al., 2023, 2021; Monesi et al., 2020; Puffay et al., 2023), but this is not yet a general finding. In this paper, we summarize the methods present in the literature to relate EEG to continuous speech using deep learning models. In Section 2, we first review different experiment steps of the gathered studies and the different approaches chosen by authors. These steps include the task used to relate EEG to speech, the network architecture used, the dataset's nature, the preprocessing methods employed, the dataset segmentation, and the evaluation metric. In Section 3, we then address the methodological pitfalls to avoid when using such models, and we recommend establishing a standard benchmark for model analysis. In Sections 7 and 8, we summarize the results of each study individually for multiple and single sound sources, respectively.

## 2 Review of deep-learning-based studies to relate EEG to continuous speech

Using Google Scholar, IEEE Xplore, Science Direct, Pubmed and Web of Science, we collected papers using the search queries reported in Table 1. As a last step, we pruned the selection manually to exclude studies not including EEG data, continuous speech stimuli or deep learning models, and stopped searching for new papers in December 2022.
In this section, we go through different features of the gathered studies, including the task used to relate EEG to speech, the different architectures used, the dataset's nature, the preprocessing methods employed, the dataset segmentation, and the evaluation metrics. More detailed summaries of individual studies can be found in Sections 7 and 8. \begin{table} \begin{tabular}{l l} \hline Search engine & Search query \\ \hline Google Scholar & (”EEG” OR ”Electroencephalography” OR ”Electroencephalogram”) AND speech \\ & AND (”deep learning” OR ”deep neural networks”) \\ IEEE Xplore & ((”All Metadata”:EEG) OR (”All Metadata”:Electroencephalogram)) \\ & AND (”All Metadata”:speech) \\ & AND ((”All Metadata”:deep neural networks) \\ & OR (”All Metadata”: non-linear) OR (”All Metadata”: nonlinear) ) \\ Science Direct & (EEG OR Electroencephalogram) AND (”continuous speech” OR ”natural speech”) \\ & AND ( ”neural network”)) \\ Pubmed & (EEG OR Electroencephalography OR Electroencephalogram) \\ & AND (speech) AND ( deeplearning OR deep learning OR neural networks) \\ & NOT (imagined[title]) NOT (motor[title]) NOT emotion[title] \\ Web of Science & (((((((((TS=((”EEG” or ”Electroencephalography” or ”Electroencephalogram”)))) AND TS=((”speech” or ”audio” or ”auditory” )))) \\ & AND TS=((”artificial neural network*” or ”ANN” or ”deep learning” or ”deeplearning” or ”CNN” or ”convolutional” or ”recurrent” or ”LSTM”)))) NOT TI=(”imagined” or ”motor imagery” or ”parkinson”)) \\ & NOT TI=(”emotion”)) NOT TS=((”dysphasia” or ”alzeimer**”)))) \\ & NOT TS=(”seizure”)))) AND DOP=(2010-01-01/2022-05-15) \\ \hline \end{tabular} \end{table} Table 1: Search queries for each search engine during paper collection.

### Tasks relating EEG to speech

To relate EEG to speech, we identified two main tasks, involving either multiple simultaneous speech sources or a single speech source.

#### 2.1.1 Multiple sound sources

When more than one speaker talks simultaneously, the brain of a listener must cope with multiple speech sources. One of the main challenges arising from this scenario is AAD. The interest in this topic is two-fold: it provides a basis to overcome current hearing aid limitations in cocktail party scenarios, and a way to investigate attention mechanisms in the brain. Linear methods have been widely used for AAD (see the review by Geirnaert et al. (2021c)), and here we solely investigate the deep-learning-based studies, which we report in Table 2. To decode attention, one possibility is to classify the speaker identity (i.e., direct classification). In the majority of studies, the focus has been on identifying the left or right speaker in a situation where two speakers are competing for attention. From 2016 to 2020, only three studies were reported (Shree et al., 2016; de Taillez et al., 2020; Tian and Ma, 2020); however, since 2021 their number has increased considerably (Su et al., 2021; Kuruvila et al., 2021; Lu et al., 2021; Zakeri and Geravanchizadeh, 2021; Vandecappelle et al., 2021; Hosseini et al., 2021; Xu et al., 2022b, a; Thornton et al., 2022a). Another approach is to decode the directional focus of attention. It presents advantages such as avoiding the separation of speech sources, and enabling models to use brain lateralization (i.e., the directional focus is encoded spatially in the brain), which is an instantaneous spatial feature rather than a temporal one. We report only one deep-learning-based study attempting to decode the locus of attention (Vandecappelle et al., 2021).
#### 2.1.2 Single sound source

Another popular task is to relate EEG to a single speech source. This approach usually aims to quantify the time-locking of a brain response to a single speech source, often referred to as neural tracking. Neural tracking of speech can be used in multiple applications, notably to model speech processing in the brain, but also as an objective measure of hearing or speech understanding. We identified two main approaches: using a match-mismatch task, and direct regression of the stimulus or EEG (see Table 3). In the match-mismatch (MM) paradigm (de Cheveigne et al., 2018), a model is trained to associate a segment of EEG with the corresponding segment of speech. The accuracy obtained on this task is defined as a measure of neural tracking. We report four studies in which the MM task was used (Monesi et al., 2020, 2021; Accou et al., 2021b, a). EEG can also be related to speech in a reconstruction/prediction (R/P) task. In this case, a stimulus feature is reconstructed from the EEG (or the EEG is predicted from the speech, respectively), and correlated with the original signal. This relates to the commonly used linear backward (or forward, respectively) models. Four studies concern backward models (Krishna et al., 2021a, b; Sakthi et al., 2019; Thornton et al., 2022a), while only one was reported for forward modeling (Krishna et al., 2021a). In another linear approach, canonical correlation analysis (CCA), the stimulus and EEG are projected to separate subspaces and correlated in those subspaces (de Cheveigne et al., 2018). Variations on this method have also been explored with deep learning (Reddy Katthi and Ganapathy, 2021; Katthi and Ganapathy, 2021). The MM and the R/P tasks were the most used methods; however, some studies used different tasks to relate EEG to speech, such as semantic incongruity classification (Motomura et al., 2020) or sentence classification (Sakthi et al., 2021). Bollens et al. (2022) relied solely on an embedded representation to classify segments of speech.

### Model architectures

The field of deep learning is evolving rapidly, constantly providing novel architectures. Multiple layer types were integrated into AAD and single-speech-source decoding models. Globally, architectures to solve the tasks mentioned in Section 2.1 were inspired by other fields (e.g., automatic speech recognition, ASR). We provide a more in-depth description of each architecture in Section 6. Early attempts used general regression neural networks (GRNNs) (Shree et al., 2016) or fully-connected neural network (FCNN) models (de Taillez et al., 2020). As fully connected layers are very computationally expensive, later studies implemented convolutional neural network (CNN)-based models (Ciccarelli et al., 2019; Tian and Ma, 2020; Thornton et al., 2022; Vandecappelle et al., 2021). In an attempt to compensate for the delay in the brain response to speech, models with recurrent layers such as long short-term memory (LSTM) (Monesi et al., 2020, 2021; Kuruvila et al., 2021; Lu et al., 2021; Xu et al., 2022b), Bi-LSTM (Zakeri and Geravanchizadeh, 2021), gated recurrent unit (GRU) (Krishna et al., 2020, 2021b, a; Sakthi et al., 2021) or Bi-GRU (Motomura et al., 2020) layers were implemented. To enable the model to assign more weight to certain time points (Motomura et al., 2020; Krishna et al., 2021) or certain EEG electrodes (Su et al., 2021), attention mechanisms were integrated into the models.
In one study (Xu et al., 2022a), channel attention was integrated into a transformer, a well-known architecture from automatic speech recognition (ASR) that allows parallel computation (reducing training time) and mitigates performance drops due to long dependencies (Vaswani et al., 2017a). Other popular model types, such as generative adversarial networks (GANs) (Krishna et al., 2021b) or autoencoders (AEs) (Bollens et al., 2022; Hosseini et al., 2021), were utilized. AEs find a compressed, meaningful representation of a signal; they can be constrained to extract speech-related information from EEG, hence working as a denoiser.

### Datasets

As explained in Section 2.2, deep learning architectures have very useful properties to relate EEG to speech. Compared to linear models, they often have a high number of parameters, which means that large amounts of data are required to train them properly and to avoid overfitting. As collecting EEG data is tedious and time-consuming, research groups often work with their own small datasets, sometimes containing a few minutes of speech or less per participant (Lu et al., 2021; Shree et al., 2016; Krishna et al., 2020, 2021a,b; Tian and Ma, 2020). The other studies we reported used at least 30 min of data per subject, while some even published their datasets (Das et al., 2019; Fuglsang et al., 2018; Bollens et al., 2023a). We discuss the importance of the dataset and generalization in Section 3.3.

### Preprocessing

Once a dataset is available, both the EEG and the presented speech can be preprocessed in various manners.

#### 2.4.1 EEG

The preprocessing of EEG signals varies across studies. While preprocessing is not specific to deep learning, many models can learn to perform steps that would traditionally be considered preprocessing. Hence, we provide a general EEG preprocessing procedure and invite readers to consult the original studies for more specific details. In addition, extensive considerations about preprocessing are reported by Crosse et al. (2021). Most of the studies we reviewed start with filtering the signal, first with a high-pass filter to remove any unwanted DC shifts or slow drift potentials, and second with a low-pass filter to remove high frequencies, as the SNR becomes lower in higher frequency ranges. In linear studies, models typically use low frequencies (e.g., between 0.5 and 8 Hz), while some deep learning studies report benefits from including higher frequencies (Puffay et al., 2022). A re-referencing step can then be added, typically by subtracting the mean over all channels from each individual channel. This helps increase the signal-to-noise ratio (SNR). Downsampling is commonly performed to reduce the computational time during training. It is often done in accordance with previous filtering to avoid temporal aliasing, as mentioned in Crosse et al. (2021). Typical sampling rates are 128 or 64 Hz. Finally, an artifact removal algorithm based on methods such as multi-channel Wiener filtering (MWF) (Somers et al., 2018) or independent component analysis (ICA) (Hyvarinen and Oja, 2000) is used to remove different artifacts (e.g., eye blinks, neck movement). The majority of the studies we reviewed provided their models with EEG signals preprocessed as stated above. However, two of them engineered more specific features from EEG, such as a latent representation optimized through the training of an AE (Bollens et al., 2022), or source-spatial feature images (Tian and Ma, 2020).
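The chain described above can be summarized in a short sketch; the filter order, band edges and output rate below are illustrative assumptions rather than a prescription, and artifact removal (MWF/ICA) is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess_eeg(eeg, fs, band=(0.5, 32.0), fs_out=64):
    """Generic EEG preprocessing: band-pass filter, common-average
    re-reference, then downsample. eeg: (n_channels, n_samples)."""
    sos = butter(8, band, btype="bandpass", fs=fs, output="sos")
    eeg = sosfiltfilt(sos, eeg, axis=-1)                 # zero-phase filtering
    eeg = eeg - eeg.mean(axis=0, keepdims=True)          # common-average reference
    return resample_poly(eeg, fs_out, int(fs), axis=-1)  # anti-aliased downsampling
```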
#### 2.4.2 Speech

From the raw speech signal, it is common to investigate the neural tracking of different speech features. Most studies we report here used acoustic features, such as the speech envelope (e.g., Ciccarelli et al., 2019; Su et al., 2021; de Taillez et al., 2020; Lu et al., 2021; Xu et al., 2022b,a) or the Mel spectrogram (e.g., Krishna et al., 2020, 2021a,b; Monesi et al., 2021; Kuruvila et al., 2021). One study even used the fundamental frequency of the voice (Puffay et al., 2022). To investigate the processing of different speech units in the brain, higher-level speech features at the sentence (Motomura et al., 2020; Sakthi et al., 2021), word (Monesi et al., 2021) or phoneme (Sakthi et al., 2021; Monesi et al., 2021) level were also used. As mentioned by Crosse et al. (2021), these preprocessing steps should be conducted on the entire dataset, or on the training, validation and test sets prior to segmenting the data, to avoid introducing artifacts that could affect the model's performance.

### Data segmentation

The training paradigm is crucial and should be carefully chosen to avoid biases and overfitting. In the gathered studies, two methods were employed: cross-validation (e.g., Ciccarelli et al., 2019) and a regular training/validation/test split (e.g., Monesi et al., 2021; de Taillez et al., 2020). Cross-validation performs multiple training iterations with different portions of the dataset used for training and validation, while regular training uses a single training and validation split. In AAD tasks, trials are defined as periods of time during which a subject attends to a target speaker, and are labelled accordingly (e.g., left/right). The split can therefore be done in different manners, notably within (e.g., Lu et al., 2021) and between trials (e.g., Thornton et al., 2022a). The split choice can lead to overfitting, as discussed in Section 3.2.

### Evaluation metric

Although various intermediate metrics can be used to select the attended speaker or locus, the reported metric is common across the multiple-sound-source studies we gathered: the attention decoding (i.e., speaker identity or direction classification) accuracy is used in all studies. The only difference is the chance level, which is defined by the number of sound sources. For single-sound-source paradigms, various metrics are employed. Classification metrics are used, such as the match-mismatch accuracy (Monesi et al., 2020, 2021; Accou et al., 2021b, a), subject-classification accuracy (Bollens et al., 2022) or sentence classification accuracy (Motomura et al., 2020). For regression studies, the main metric we noted is Pearson correlation (Katthi et al., 2020; Katthi and Ganapathy, 2021; Thornton et al., 2022a), while one research group used root-mean-squared error (RMSE) and Mel cepstral distortion (MCD) (Krishna et al., 2020, 2021a,b). The use of multiple metrics is problematic when comparing model performances across studies; moreover, even the use of the same metric does not guarantee a fair comparison (see Section 3.4).

## 3 Overfitting, interpretation of results, recommendations

### Preamble

In our own practice with auditory EEG, we noticed how easily deep learning models overfit to specific trials, subjects or datasets. This is mainly due to the relatively small amount of data typically available, compared to other domains such as image or speech recognition.
A very careful selection of the test set is therefore needed, and the results of a number of the studies reviewed above may be overly optimistic. In the following experiments, we demonstrate how this can happen, and we propose a number of good practices to avoid overfitting and to calculate results on a sufficiently independent test set. We first introduce two different datasets, which we use for single-speech-source and multiple-speech-source tasks, respectively.

#### 3.1.1 Single speech source (N=1) dataset

For the single-speech-source experiments, namely subsections 3.4, 3.5 and 3.6 below, we select a publicly available dataset (1). We selected 48 subjects from a dataset that now consists of EEG data from 85 normal-hearing subjects, recorded while they listened to 10 unique stories of roughly 14 minutes each, narrated in Flemish (Dutch). We use the LSTM-based model proposed by Monesi et al. (2020), trained on the match-mismatch task defined in Section 6. An exception is made for subsection 3.4, which uses linear decoders, as correlation analyses require a regression task rather than a match-mismatch task. We use linear decoders in this subsection as they face the same issue as DNNs but are computationally less expensive.

#### 3.1.2 Multiple speech sources (N\(>\)1) dataset

For the multiple-speech-source experiment, namely subsection 3.2 below, we use the publicly available dataset from Das et al. (2019). It contains data from 16 subjects. In total, there are 4 Dutch stimuli (i.e., stories), spoken by male speakers, of 12 minutes each. Each stimulus is split up into 2 parts of 6 minutes, and all stimuli are played twice, alternating the attended story. Each subject listens to 8 trials of 6 minutes. We use the CNN model proposed by Vandecappelle et al. (2021) to conduct our experiments. The architecture of this model is depicted in Figure 1.

Figure 1: **Architecture of the CNN model (Vandecappelle et al., 2021).** The input has dimensions \(64\times T\), representing the number of EEG channels and the segment length. Five \(64\times 17\) spatio-temporal filters are shifted over the input matrix.

### Selection of training, validation and test sets

When training neural networks, the split of the dataset into training, validation, and test partitions is an important aspect. In a scenario with two competing speakers, the task of the model is usually to predict which of the two speakers is the attended one and which one is the unattended one. When recording the EEG, the measurement is usually spread out over multiple trials. In each trial, the subjects have to pay attention to one of the two speakers. Then, in the next trial, they have to pay attention to the other speaker, to generate a balanced dataset. Translating this into output labels means that there is usually one label per trial, e.g., _left speaker_ or _right speaker_. A common way to split these datasets into training/validation and testing sets is to split each trial into training, validation and test sets (Zakeri and Geravanchizadeh, 2021; Lu et al., 2021; Xu et al., 2022b, a; Su et al., 2021; Shree et al., 2016; Ciccarelli et al., 2019; de Taillez et al., 2020; Vandecappelle et al., 2021). While this might not seem problematic, it potentially allows models to overfit the training data. With only one label per trial (left/right), the model might learn to identify the trial from which the segment of EEG was taken, rather than solving the auditory attention task.
If the validation and test sets are taken from within the same trial, they have the same correct label, and information from the training set can leak into the validation and test sets. This leads to models that seemingly perform well but are unable to generalize and do not score well on unseen trials. To prevent this, we propose to always use held-out trials for the test set (Kuruvila et al., 2021; Thornton et al., 2022; Hosseini et al., 2021; Tian and Ma, 2020). If the dataset contains 10 trials, 8 could be used for training, 1 for validation, and 1 for testing. Since the trial used for testing is never seen by the model during training, the model cannot rely on identifying which trial the EEG segment is taken from and has to learn to identify the underlying speaker information. To demonstrate the necessity of this split, we conducted experiments with two different splits of the dataset and show that this leads to substantially different results. We use the model proposed by Vandecappelle et al. (2021) and follow the training procedure, using two different splits of the dataset. In the first experiment, we split each trial of 6 minutes into an 80:10:10 training/validation/test partition. In the second experiment, we implement a 4-fold cross-validation scheme. Each fold contains unique stories, ensuring that the stories seen in training do not occur in the test set. Looking at Table 4, we divide the folds as (trial1, trial5), (trial2, trial6), (trial3, trial7) and (trial4, trial8). The results of both experiments can be seen in Figure 2. The average accuracy of the first experiment for segments with a window length of 1 second is 76.25 %, while the average accuracy of the second experiment does not exceed 51.10 %, showing the need for a between-trial cross-validation scheme when applying deep learning models to the auditory attention paradigm. The leave-one-story+speaker-out method was tested by Vandecappelle et al. (2021), and strong overfitting effects were found when within-trial splitting was performed. Overall, out of the 13 articles gathered with multiple-sound-source paradigms, only 4 performed a between-trial split; as shown in this section, a within-trial split biases the model performance.

### Benchmarking model evaluation using public datasets

Publicly available datasets include (1) for multiple speech sources (AAD): Das et al. (2019) and Fuglsang et al. (2017), and (2) for a single speech source: Fuglsang et al. (2017), Broderick et al. (2018) and Weissbart et al. (2022). \begin{table} \begin{tabular}{l l l l} \hline Trial & Left stimulus & Right stimulus & Attended side \\ \hline 1 & story1, part1 & story2, part1 & Left \\ 2 & story2, part2 & story1, part2 & Right \\ 3 & story3, part1 & story4, part1 & Left \\ 4 & story4, part2 & story3, part2 & Right \\ 5 & story2, part1 & story1, part1 & Left \\ 6 & story1, part2 & story2, part2 & Right \\ 7 & story4, part1 & story3, part1 & Left \\ 8 & story3, part2 & story4, part2 & Right \\ \hline \end{tabular} \end{table} Table 4: Example division for the Das2019 dataset for 1 subject. Between subjects, the attended direction is alternated. Figure 2: Results for training the model from Vandecappelle et al. (2021) using different training/test splits. Box plots are shown over 18 subjects. Each point in the boxplot corresponds to the auditory attention detection accuracy for one subject, averaged over all segments. Split within trials: each trial of 6 minutes is split into 80/10/10 training/validation/test.
Split between trials: out of the 8 trials per subject, use 6 for training, 1 for validation and 1 for testing. While we are grateful to the authors for making these data available, unfortunately, as EEG data collection is expensive and time-consuming, the total amount of data has so far remained relatively small in the context of deep learning: 60 hours for AAD, and 30 hours for the single-sound-source paradigm. The lack of a larger public dataset makes it difficult to benchmark the models. Moreover, training and evaluating on a specific dataset can result in overfitting and a lack of generalizability. We recently made available a much larger dataset (Bollens et al., 2023a) of 85 subjects, with a total of approximately 200 hours of data. While this remains a very small dataset compared to those available in the fields of automatic image and speech recognition, we believe it is a substantial step forward, and we hope that it paves the way towards standardized benchmarking, as demonstrated in the recent IEEE Auditory EEG Challenge (Bollens et al., 2023b). A potential point of improvement for most of the papers from this review is to _additionally_ evaluate the developed architectures on multiple datasets recorded with various EEG devices (e.g., different numbers and locations of electrodes) and experimental set-ups (e.g., different signal-to-noise ratios, insert earphones or loudspeakers). To illustrate good practices, some generalization experiments were conducted in Accou et al. (2023): the authors trained a model on their dataset and evaluated it on a publicly available dataset (Fuglsang et al., 2017). Among the articles gathered in this study, only 9 out of 29 (Monesi et al., 2020, 2021; Accou et al., 2021a, b; Bollens et al., 2022; Su et al., 2021; Kuruvila et al., 2021; Vandecappelle et al., 2021; Puffay et al., 2023) involved the use of a publicly available dataset, and only one attempted to evaluate generalization to another dataset (Thornton et al., 2022b). Ideally, we recommend evaluating trained models on multiple publicly available datasets, to ensure the generalization capabilities of a model. Considering the above-stated issues, the solution of sharing data publicly seems straightforward. However, sharing EEG data is complicated due to their biological nature. In many countries, the participants have to agree explicitly to their data being shared (anonymized/pseudonymized) in a publicly available dataset. Therefore, we encourage research groups to work towards establishing a common dataset to facilitate model comparison. This will save considerable time and provide a good control for possible pitfalls in recording, preprocessing, or model evaluation. As a comparison, most deep learning models in ASR are evaluated on shared datasets (e.g., the Librispeech ASR corpus from Panayotov et al. (2015)) and with common error measures such as the word error rate.

### Interpretation of correlation scores

When decoding continuous speech features such as the envelope from EEG, decoding quality is often estimated by correlating the reconstructed speech envelope with the presented stimulus envelope. In several papers we collected (Reddy Katthi and Ganapathy, 2021; Katthi and Ganapathy, 2021; Thornton et al., 2022b; Sakthi et al., 2019), a correlation metric is reported as a measure of the performance of the model being used. While correlation metrics are important for interpretation and possible applications (e.g.
hearing tests), they depend on the training, evaluation and architecture of a model, the experimental paradigm, and the nature, quality, size and preprocessing of the datasets used. Standard statistical tests for correlations are ill-equipped to deal with non-independent sample data, such as (low-pass filtered) EEG and speech envelopes (Combrisson and Jerbi, 2015; Crosse et al., 2021), as estimated correlations between distant segments can be high by chance. Therefore, an appropriate null distribution has to be constructed to detect whether a model can effectively use neural data to decode speech from EEG (or predict EEG channels from a speech feature). For the encoding/decoding case, Crosse et al. (2021) proposed to use a permutation test with randomly (circularly) shifted versions of the predicted data with regard to the actual data to estimate the null distribution. Among all the papers we gathered performing an R/P task, only Thornton et al. (2022b) used this method. The percentiles of this null distribution can be used to measure the significance of the results. An example of the proposed permutation test is visualized for a linear decoder with an integration window of 250 ms in Figure 3 (a). The decoder was trained on data of a single (representative) recording of the 48-subject dataset in 6-fold cross-validation. Both EEG and speech envelope data were filtered between 0.5-4 Hz with an eighth-order Butterworth filter. The decoder was evaluated on 1-minute windows with 80% overlap. The null distribution was constructed using 100 permutations of circularly shifted speech envelopes, similar to the approach suggested by Crosse et al. (2021). The mean of the predicted correlation scores (0.137 Pearson correlation) is greater than the 95th percentile of the null distribution (0.099 Pearson correlation), showing significance at \(\alpha\)=0.05. To illustrate the effect of preprocessing and evaluation paradigm, the same model is trained and evaluated on the same recording, but with a broadband speech envelope (high-pass filtered with an eighth-order Butterworth filter, using 0.5 Hz as the cutoff frequency) and evaluated on 5-second windows with 80% overlap. The null distribution was again calculated using permutations of circularly shifted speech envelopes. While the mean correlation score of the predictions increased from 0.137 to 0.146 Pearson correlation, due to the increased variance of the null distribution (from 0.002 to 0.039), the 95th percentile of the null distribution also increased, from 0.099 to 0.327 Pearson correlation, rendering the obtained correlation scores not significantly different from the null distribution.

Figure 3: Null and actual (prediction) distributions for a linear decoder with a 250 ms integration window, trained in 6-fold cross-validation. For (a), EEG and speech data were filtered between 0.5-4 Hz and evaluation was performed on 1-minute windows with 90% overlap. For (b), EEG was filtered between 0.5-4 Hz and the speech envelope was high-pass filtered above 0.5 Hz. Evaluation was performed on 5-second windows with 80% overlap. The purple lines represent the means of the null and actual distributions, respectively. The red striped line represents the 95th percentile of the null distribution. Note that while the same model is used in (a) and (b), and the mean correlation score in (b) is higher than in (a) (0.146 vs. 0.142, respectively), only (a) is statistically significant.
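To make the procedure concrete, here is a minimal sketch of the circular-shift permutation test described above, following the example's 100 permutations and 95th-percentile threshold; the function itself is our illustrative construction, not code from the reviewed studies.

```python
import numpy as np

def permutation_test(pred, target, n_perm=100, rng=np.random.default_rng(0)):
    """Observed Pearson correlation and the 95th percentile of a null
    distribution built from random circular shifts of the target signal."""
    def pearson(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    null = [pearson(pred, np.roll(target, rng.integers(1, len(target))))
            for _ in range(n_perm)]
    return pearson(pred, target), float(np.percentile(null, 95))
```

The observed correlation is then deemed significant at \(\alpha=0.05\) if it exceeds the returned percentile.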
This example illustrates why null distributions should be constructed methodically for each obtained result to ensure significance. While this example used a simple linear model for clarity and computational efficiency, the general methodology is also applicable to deep learning models (even without retraining the model) and to forward modeling. Using the permutation of random circular shifts has a few drawbacks. First, as mentioned in the study of Crosse et al. (2021), discontinuities might appear at the end/beginning of the recording, possibly leading to an inappropriate null distribution. If the recording is sufficiently long, however, this risk decreases. Second, sufficient permutations have to be computed to obtain an accurate estimate of the percentile of the null distribution. When comparing models based on prediction quality, the choice of preprocessing techniques and datasets should be taken into account. For example, studies commonly filter EEG data into separate bands (e.g., delta band [0.5-4 Hz], theta band [4-8 Hz], etc.). These bands have been linked to different processing stages (e.g., Etard and Reichenbach (2019)). When filtering data, caution has to be taken with the target signal (i.e., EEG in forward models, speech features in backward models), as this directly influences the difficulty of the task (e.g., a narrowly bandpassed low-frequency target signal is easier to predict than a broadband target signal), possibly making the task trivial. This also complicates using correlation scores as a metric for general model performance, as some models might perform well using broadband EEG/stimulus features (e.g., Accou et al. (2021a)), while others might benefit from more narrowband features (e.g., linear decoders; Vanthornhout et al. (2018)). Finally, auditory EEG datasets are often recorded with varying equipment, varying methodologies and different languages of both stimuli and listeners, which can influence the obtained correlation scores and thus make correlation scores unfit for comparing model performance across datasets. Our recommendations are as follows. Firstly, construct an appropriate null distribution for each experimental result, and compare it to the correlations between the predicted and original signal. Secondly, when comparing models based on correlation scores of predictions, one must be aware of the influence of external factors (preprocessing, dataset choice, training/evaluation paradigm, ...) on the obtained correlation values and interpret the obtained correlation scores with caution. We also identified studies that used MSE and MCD as reconstruction (or prediction) performance metrics (Krishna et al., 2020, 2021a,b), which do not enable direct comparison with correlation values. We therefore also recommend providing multiple metrics, including Pearson correlation, to allow comparison with other studies. A recent study also implemented a model predicting the EEG signal from speech for both a match and a mismatch segment, in order to get an accuracy value from a forward model (Puffay et al., 2023), opening the path for a mapping between evaluation metrics.

### Model generalization to unseen subjects

Subject-specific models sometimes have a performance advantage over subject-independent models, as they can be fine-tuned to the idiosyncrasies of a given subject and are not required to generalize to other subjects.
However, subject-independent models are particularly attractive as they do not require training data for new subjects, and much larger datasets can be used to train them. Across subjects, the EEG cap placement can vary, and so does the brain activity. Training models on multiple subjects enables the model to learn these differences. That remark also applies to different EEG systems with different densities and locations of electrodes, or different experimental protocols. The performance of subject-independent models, especially on subjects not seen during training, depends on the training data. To illustrate this, the LSTM model of Monesi et al. (2020) is trained on 1 up to 28 subjects of the 48-subject dataset, and evaluated on the test set of the 20 remaining subjects. The results are displayed in Figure 4. With this analysis, we show that, given the model and the collected dataset, the performance seems to reach a plateau, as the standard deviation of the last 10 medians (from subject number 18 until subject number 28) is less than 1%. Among the 29 studies we gathered, 7 used a subject-independent training paradigm. If one has a limited amount of data per subject, we recommend using subject-independent training. One can still fine-tune a subject-independent model (i.e., train a subject-independent model on all subjects, keep its weights and train it on the subject of interest before evaluation) to boost its performance on a given subject. Please note that fine-tuning is likely to be more efficient if the subject being fine-tuned on belongs to a group similar to the subjects used for prior training (e.g., healthy, young, normal-hearing).

### Negative sample selection in the MM task

When training models on the MM task, the choice of the mismatched segments (negative samples) is important to make sure the model can generalize well. The main idea is that the negative samples should be what is called "hard negatives" in the deep learning and contrastive learning literature (van den Oord et al., 2018). The negative samples should be challenging enough to force the model to learn to relate neural responses to the positive samples (here the matched segments), rather than only learning to distinguish between positive and negative samples. For example, if we sample the mismatched segments from white noise or any signal that has a very different distribution than that of the positive samples, then a model with enough parameters and capacity will not learn the relation between EEG and speech but will rather learn that there is a difference between positive and negative speech samples. It has been shown on speech data that taking negative samples from the same stimulus or the same speaker yields the best accuracy in a phone classification task (Robinson et al., 2021). This is in line with the theory of using hard negatives to train contrastive models, as mentioned above. As a result, to train a match-mismatch model that can relate EEG to speech, it is better to sample mismatched segments from the same speech stimulus (i.e., story) the matched segments are drawn from, such that we have a similar distribution for the matched and mismatched segments. In addition, we recommend designing the training in such a way that the mismatched segments also appear as matched segments with other EEG segments (see Figure 6), such that the only way to determine whether a candidate speech segment is matched or mismatched is to use the corresponding EEG segment.
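A minimal sketch of this recommended negative-sampling scheme follows, assuming 64 Hz signals so that 64 samples correspond to 1 s; the 5 s window length is an illustrative choice. Because the window length plus the spacing is a multiple of the window shift, every mismatched segment reappears as the matched segment of another EEG segment, as in Figure 6.

```python
import numpy as np

def mm_triplets(eeg, speech, win=320, shift=64, spacing=64):
    """Build (EEG, matched speech, mismatched speech) triplets where the
    mismatched candidate starts `spacing` samples after the matched one ends.
    eeg: (..., n_samples); speech: (n_samples,)."""
    assert (win + spacing) % shift == 0  # mismatched segments reappear as matched
    triplets = []
    for start in range(0, len(speech) - (2 * win + spacing) + 1, shift):
        e = eeg[..., start:start + win]
        matched = speech[start:start + win]
        mismatched = speech[start + win + spacing:start + 2 * win + spacing]
        triplets.append((e, matched, mismatched))
    return triplets
```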
In our work (Monesi et al., 2020; Accou et al., 2021), we have used mismatched segments from the same speech stimulus. More specifically, we selected the mismatched segment 1 second after the end of the matched segment. We have also tried taking mismatched segments from the past instead of the future, which led to the same match-mismatch classification performance. In our setup, we make sure that the mismatched segments are temporally close enough to the matched segments. But most importantly, we make sure that a mismatched segment is also a matched segment with another EEG segment, as mentioned above. Finally, we report the results of two experiments to illustrate the importance of the above-mentioned points while training the LSTM-based model on the MM task. To support the choice of a 1 s shift as a robust hard negative, we conducted the following experiment: we train our LSTM model on 48 subjects from the dataset, with matched and mismatched segments selected with a 1 s shift.

Figure 4: The LSTM model of Monesi et al. (2020) trained on 1-28 subjects of the 48-subject dataset, and evaluated on the test set of the remaining 20 subjects. Each point in the boxplot corresponds to the match-mismatch accuracy for one subject, averaged over segments.

We then evaluate our model under two conditions: first, on a test set generated with the same 1 s shift for the 48 subjects; second, with a shift between 1 s and 20 s selected randomly for each subject. We show the results in Figure 5a. We observe no significant difference in accuracy between the random and the 1 s shift. This result dispels the concern that the model might learn from the serial correlation structure of the data over the 1 s span, which would cause a lack of generalization to other shifts. In addition, we observe a slight accuracy increase in the random-shift condition, suggesting that larger shifts facilitate the task for the model. We believe that systematically taking large enough shifts would lead to a significant performance increase. In a second experiment, we designed our matched and mismatched segments in a way that violated the setup in Figure 6, such that the mismatched segments were never exactly matched segments. More specifically, we used 65 time samples (instead of 64) as the spacing between the end of the matched and the start of the mismatched segment, in combination with a window shift of 64 time samples (one second). As a result, matched segments overlapped with mismatched segments, but they were never exactly mismatched segments for other EEG segments, thus resulting in two disjoint sets of matched and mismatched segments. Note that other spacing lengths between the end of the matched and the start of the mismatched segment would also result in mismatched segments never exactly being matched segments for other EEG segments. More generally, if the sum of the window length and the spacing is divisible by the window shift, then mismatched segments will also appear as matched segments (our recommended setup for training). We used the 48-subject dataset, where subjects listened to 8 stories. Each recording was split into training, validation, and test sets using 80%, 10%, and 10% ratios, respectively. The training set comprised 40% from the start and 40% from the end of the recording, and the remaining 20% was further split into validation and test sets. As shown in Figure 5b, the model performs poorly when mismatched segments are never matched segments. Note that the dataset has only around 2.5 hours of unique speech.
As a result, the model succeeds in remembering the matched and the mismatched speech segments (on the training set, the training accuracy is around 90%) instead of relating them to EEG. In a third experiment, we took our mismatched speech candidates from another stimulus (story). The other stimulus was randomly chosen from a set of 7 stories available for the subject. We compare the results with our proposed default setup, where we take mismatched speech candidates from the same stimulus. For each training scenario, we evaluated the trained model on both setups. When we choose mismatched segments from another stimulus, the model does not generalize well (52% classification accuracy) to unseen data, and it even performs poorly (53% classification accuracy) on the same setup it is trained on. On the other hand, we observe that the model trained in our recommended setup, in which mismatched speech segments are taken from the same stimulus, performs well both on the default setup it is trained on (84% classification accuracy) and on the new setup (84% classification accuracy). This implies that the model has learned to find the relation between EEG and the stimulus.

Figure 5: **Match-mismatch accuracy following different training set-ups.** (a) Match-mismatch accuracy of an LSTM model as a function of the mismatch shift. Our LSTM model was trained on the training and validation sets of our 48-subject dataset with a shift of 1 s between the matched and mismatched segments. This model was evaluated on the test set of the same subjects with a 1 s shift and with a randomly picked shift (between 1 s and 20 s) per subject; (b) Classification accuracy of the LSTM-based model in the match-mismatch task. Box plots are shown over 48 subjects. Our recommended setup: our proposed training setup, where a mismatched segment is also a matched segment with another EEG segment. Mismatched speech is never matched speech: in this setup, a mismatched speech segment will never be exactly a matched segment. As a result, some speech segments will only be matched and others only mismatched.

## 4 Conclusions

We gave an overview of the methods to relate EEG to continuous speech using deep learning models. Although many different network types have been implemented across studies, there is no consensus on which one gives the best performance. Performance is difficult to compare across studies, as most research groups use their own datasets (e.g., EEG device, participants) and training paradigms. As we suspected many cases of overfitting, we suggested guidelines to make the performance evaluation less biased and more comparable across studies. The first point addressed the importance of the training, validation and test set selection. We demonstrated with an experiment that in multiple-speech-source paradigms, the split must not be done within trials (i.e., the periods during which the subjects have to pay attention to one of the two speakers) but between them. Some studies we reviewed have done such a split and show implausibly high decoding accuracies (e.g., Lu et al., 2021; Su et al., 2021), with the models possibly remembering each trial's label when the split is done within trials. We then addressed the need to use and share public datasets, to encourage researchers to improve models and to have a common general evaluation benchmark to do so. Gathering diverse data is also necessary to make models more generalizable across devices or experimental setups.
Figure 6: Illustration of the choice of mismatched segments in the match-mismatch task. Two important points to consider: 1. The mismatched segments are taken from the same sequence. 2. The mismatched segments are also matched segments (and vice versa), depending on the EEG segment. For example, the speech of segment 2 is the mismatched candidate for the EEG of segment 1, but it is the matched speech candidate for the EEG of segment 2.

Subject-independent models are very convenient because, when trained on a sufficient amount of data, they can cope with dataset diversity due to, e.g., EEG devices, protocols, brain anatomy, or speech content. Although in certain cases (e.g., a hearing aid device) an individual's good performance prevails over the ability to generalize, deep learning models require lots of data, which is not clinically practical to collect from an individual subject. We therefore recommend using subject-independent models when the amount of data is limited. For practical applications, we need deep learning models to generalize, and researchers should test their ability to do so, notably by evaluating models on other datasets or by ensuring they were trained on enough data to reach their optimal performance. As an example experiment, we characterized an LSTM model's performance as a function of the number of subjects included in the training (Figure 4).

Finally, we highlight the importance of negative sample selection in the match-mismatch task. With such temporally auto-correlated signals, the difficulty of a match-mismatch task is also defined by the negative sample selection. Hence, two important characteristics of the negative sample selection are that the mismatched segment is taken from the same speech stimulus and that each mismatched speech segment is also a matched speech segment for another EEG segment. These two points constrain the model to use the EEG data provided to it, ensuring the model cannot find the matched segment from the speech data alone.

## 5 Acknowledgements

Funding was provided by the KU Leuven Special Research Fund C24/18/099 (C2 project to Tom Francart and Hugo Van hamme), FWO fellowships to Bernd Accou (1S89622N), Corentin Puffay (1S49823N), Lies Bollens (1SB1423N) and Jonas Vanthornhout (1290821N).
2303.06199
Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph learning tasks. However, recent studies show that GNNs are vulnerable to both test-time evasion and training-time poisoning attacks that perturb the graph structure. While existing attack methods have shown promising attack performance, we would like to design an attack framework that further enhances this performance. In particular, our attack framework is inspired by certified robustness, which was originally used by defenders to defend against adversarial attacks. We are the first, from the attacker perspective, to leverage its properties to better attack GNNs. Specifically, we first derive nodes' certified perturbation sizes against graph evasion and poisoning attacks based on randomized smoothing. A larger certified perturbation size of a node indicates that this node is theoretically more robust to graph perturbations. This property motivates us to focus more on nodes with smaller certified perturbation sizes, as they are easier to attack after graph perturbations. Accordingly, we design a certified robustness inspired attack loss that, when incorporated into (any) existing attack, produces its certified robustness inspired counterpart. We apply our framework to existing attacks, and the results show that it can significantly enhance the base attacks' performance.
Binghui Wang, Meng Pang, Yun Dong
2023-03-10T20:32:09Z
http://arxiv.org/abs/2303.06199v1
# Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks

###### Abstract

Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph learning tasks. However, recent studies show that GNNs are vulnerable to both test-time evasion and training-time poisoning attacks that perturb the graph structure. While existing attack methods have shown promising attack performance, we would like to design an attack framework that further enhances this performance. In particular, our attack framework is inspired by certified robustness, which was originally used by defenders to defend against adversarial attacks. We are the first, from the attacker perspective, to leverage its properties to better attack GNNs. Specifically, we first derive nodes' certified perturbation sizes against graph evasion and poisoning attacks based on randomized smoothing. A larger certified perturbation size of a node indicates this node is _theoretically_ more robust to graph perturbations. This property motivates us to focus more on nodes with smaller certified perturbation sizes, as they are easier to attack after graph perturbations. Accordingly, we design a certified robustness inspired attack loss that, when incorporated into (any) existing attack, produces its certified robustness inspired counterpart. We apply our framework to existing attacks, and the results show that it can significantly enhance the base attacks' performance.

## 1 Introduction

Learning with graphs, such as social networks, citation networks, and chemical networks, has attracted significant attention recently. Among many methods, graph neural networks (GNNs) [14, 33, 38, 41, 44] have achieved state-of-the-art performance in graph-related tasks such as node classification, graph classification, and link prediction. However, recent studies [8, 19, 20, 23, 30, 34, 36, 37, 39, 40, 50, 51] show that GNNs are vulnerable to both test-time graph evasion attacks and training-time graph poisoning attacks1. Taking GNNs for node classification as an instance, graph evasion attacks mean that, given a learnt GNN model and a (clean) graph, an attacker carefully perturbs the graph structure (i.e., injects new edges into or removes existing edges from the graph) such that as many testing nodes as possible are misclassified by the GNN model. In contrast, graph poisoning attacks mean that, given a GNN algorithm and a graph, an attacker carefully perturbs the graph structure in the training phase, such that the learnt GNN model misclassifies as many testing nodes as possible in the testing phase. While existing methods have shown promising attack performance, we want to ask: Can we design a general _attack framework_ that can further enhance both the existing graph evasion and poisoning attacks to GNNs? The answer is yes.

Footnote 1: We mainly consider the graph structure attack in the paper, as it is more effective than the feature attack. However, our attack framework can be easily extended to the feature attack.

We design an attack framework inspired by certified robustness. Certified robustness was originally used by _defenders_ to guarantee the robustness of classification models against evasion attacks. Generally speaking, a testing example (e.g., an image or a node) with a better certified robustness guarantee indicates that this example is _theoretically_ more robust to adversarial (e.g., pixel or graph) perturbations.
While certified robustness is mainly derived for defensive purposes, _attackers_, on the other hand, can also leverage its properties for malicious ones. For instance, when an attacker knows the certified robustness of the nodes in a graph, he can use the nodes' certified robustness to _reversely_ reveal the vulnerable region of the graph and leverage this vulnerability to design better attacks. We are inspired by this property of certified robustness and design the first certified robustness inspired attacks to GNNs. Our attack framework consists of three parts: i) Inspired by the state-of-the-art randomized smoothing based certified robustness against _evasion attacks_ to image models [7, 28] and GNN models [35], we first propose to generalize randomized smoothing and derive the node's certified perturbation size against graph _poisoning attacks_ to GNNs. Particularly, a larger certified perturbation size of a node indicates this node is _theoretically_ more robust to adversarial graph perturbations. In other words, an attacker needs to perturb more edges during the training phase in order to make this node wrongly predicted by the learnt GNN model. This property inspires us to focus more on disrupting nodes with relatively smaller certified perturbation sizes under a given perturbation budget. ii) We design a certified robustness inspired attack loss. Specifically, we modify the classic node-wise loss by assigning each node a weight based on its certified perturbation size--a node with a larger/smaller certified perturbation size is assigned a smaller/larger weight. In doing so, the losses of nodes with smaller certified perturbation sizes are enlarged, and most of the perturbation budget is automatically allocated to perturbing these nodes. Thus, more nodes will be misclassified under the given perturbation budget. iii) We design the certified robustness inspired attack framework to generate adversarial graph perturbations to GNNs, based on our certified robustness inspired attack loss. We emphasize that, as our new attack loss only modifies the existing attack loss with certified-perturbation-size-defined node weights, any existing graph evasion or poisoning attack method can be used as the base attack in our framework. We apply our certified robustness inspired attack framework to the state-of-the-art graph evasion and poisoning attacks [40, 51] to GNNs. Evaluation results on multiple benchmark datasets show our attack framework can substantially enhance the attack performance of the base attacks. Our contributions are as follows:

* We propose a certified robustness inspired attack framework to GNNs. Our framework can be plugged into any existing graph evasion and poisoning attacks.
* To the best of our knowledge, we are the first to use certified robustness for an attack purpose.
* Evaluation results validate the effectiveness of our attack framework when applied to existing attacks to GNNs.

## 2 Background and Preliminaries

### Graph Neural Networks (GNNs)

Let \(G=(\mathcal{V},\mathcal{E})\) be a graph, where \(u\in\mathcal{V}\) is a node and \((u,v)\in\mathcal{E}\) is an edge between \(u\) and \(v\). Let \(\mathbf{A}\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\) be the adjacency matrix. _As \(\mathbf{A}\) contains all graph structure information, we will use \(\mathbf{A}\) and \(G\) interchangeably to indicate the graph in this paper._ We mainly consider GNNs for node classification. Each node \(u\in\mathcal{V}\) has a label \(y_{u}\) from a label set \(\mathcal{Y}\).
Let \(\mathcal{V}_{Tr}\) and \(\mathcal{V}_{Te}\) be the sets of training nodes and testing nodes, respectively. A GNN algorithm \(\mathcal{A}\) takes the graph \(G(\mathbf{A})\) and training nodes \(\mathcal{V}_{Tr}\) as input and produces a node classifier \(f_{\theta}\) parameterized by \(\theta\), i.e., \(f_{\theta}=\mathcal{A}(\mathbf{A},\mathcal{V}_{Tr})\). The node classifier \(f_{\theta}\) inputs \(G(\mathbf{A})\) and outputs labels for all nodes, i.e., \(f_{\theta}:\mathbf{A}\rightarrow\mathcal{Y}^{|\mathcal{V}|}\). To learn \(f_{\theta}\), a common way is to minimize a loss function \(\mathcal{L}\) defined on the training nodes \(\mathcal{V}_{Tr}\) and the graph \(G(\mathbf{A})\) as follows: \[\min_{\theta}\mathcal{L}(f_{\theta},\mathbf{A},\mathcal{V}_{Tr})=\sum_{u\in\mathcal{V}_{Tr}}\ell(f_{\theta}(\mathbf{A};u),y_{u}), \tag{1}\] where \(f_{\theta}(\mathbf{A};u)\) is the predicted label of a node \(u\). After learning \(f_{\theta^{*}}\), a testing node \(v\in\mathcal{V}_{Te}\) is predicted the label \(\hat{y}_{v}=f_{\theta^{*}}(\mathbf{A};v)\).

### Adversarial Attacks to GNNs

We denote by \(\delta\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\) the adversarial _graph perturbation_, where \(\delta_{s,t}=1\) (or \(0\)) means the attacker perturbs (or keeps) the edge status between a node pair \((s,t)\). Moreover, we denote \(\mathbf{A}\oplus\delta\) as the perturbed graph, with \(\oplus\) the element-wise XOR operator. For instance, if there is an (or no) edge between \((u,v)\), i.e., \(A_{u,v}=1\) (or \(A_{u,v}=0\)), perturbing this edge status (i.e., \(\delta_{u,v}=1\)) means removing the edge (or injecting a new edge), i.e., \(A_{u,v}\oplus\delta_{u,v}=0\) (or \(A_{u,v}\oplus\delta_{u,v}=1\)). We assume an attacker has a perturbation budget \(\Delta\), i.e., \(\|\delta\|_{0}\leq\Delta\), meaning at most \(\Delta\) edges can be perturbed by the attacker.

**Graph evasion attacks to GNNs.** In graph evasion attacks, given a learnt node classifier \(f_{\theta^{*}}\), an attacker carefully crafts a graph perturbation \(\delta\) to the graph \(G\) such that \(f_{\theta^{*}}\) predicts nodes' labels on the perturbed graph \(\mathbf{A}\oplus\delta\) as the attacker desires. For instance, an attacker desires as many testing nodes as possible to be misclassified by \(f_{\theta^{*}}\) (called an _untargeted attack_) under the perturbation budget \(\Delta\). Formally, the attacker aims to maximize the following 0-1 _attack loss_: \[\max_{\delta}\sum_{v\in\mathcal{V}_{Te}}\mathbf{1}[f_{\theta^{*}}(\mathbf{A}\oplus\delta;v)\neq y_{v}],\ \text{s.t.}\ ||\delta||_{0}\leq\Delta, \tag{2}\] where \(\mathbf{1}[\cdot]\) is the indicator function, whose value is 1 if the condition is satisfied and 0 otherwise. This problem is challenging to solve because the indicator function is hard to optimize. In practice, an attacker instead solves the following alternative optimization problem: \[\max_{\delta}\sum_{v\in\mathcal{V}_{Te}}\ell(f_{\theta^{*}}(\mathbf{A}\oplus\delta;v),y_{v}),\,\text{s.t.}\ ||\delta||_{0}\leq\Delta. \tag{3}\] For instance, [40] designed the state-of-the-art PGD evasion attack by solving Equation 3.
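Since \(\delta\) is binary, gradient-based attacks optimize a continuous surrogate of Equation 3. For binary \(a\) and \(d\), \(a\oplus d=a+d-2ad\), so relaxing \(\delta\) to \([0,1]\) makes the objective differentiable. The following minimal PyTorch sketch illustrates this relaxation; the `model`, `A`, `labels`, and `test_idx` objects are assumed to be given, and this is an illustration, not the implementation of [40]:

```python
# Minimal sketch of the relaxed evasion-attack objective of Equation 3.
import torch.nn.functional as F

def perturbed_adjacency(A, delta):
    # For binary a, d: a XOR d == a + d - 2*a*d. With delta relaxed to
    # [0, 1], this is a differentiable surrogate for the edge flip.
    return A + delta - 2 * A * delta

def evasion_attack_loss(model, A, delta, labels, test_idx):
    logits = model(perturbed_adjacency(A, delta))
    # The attacker maximizes the cross-entropy on the testing nodes.
    return F.cross_entropy(logits[test_idx], labels[test_idx])
```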
**Graph poisoning attacks to GNNs.** In graph poisoning attacks, an attacker specifies a GNN algorithm \(\mathcal{A}\) and carefully perturbs the graph \(G\) with a graph perturbation \(\delta\) in the training phase, such that the learnt node classifier \(f_{\theta^{*}}\) misclassifies as many testing nodes as possible on the perturbed graph \(\mathbf{A}\oplus\delta\) in the testing phase. Formally, it solves the following bilevel optimization problem: \[\max_{\delta}\sum_{v\in\mathcal{V}_{Te}}\mathbf{1}[f_{\theta^{*}}(\mathbf{A}\oplus\delta;v)\neq y_{v}], \tag{4}\] \[\text{s.t.}\ \theta^{*}=\arg\min_{\theta}\sum_{u\in\mathcal{V}_{Tr}}\mathbf{1}[f_{\theta}(\mathbf{A}\oplus\delta;u)\neq y_{u}],\,||\delta||_{0}\leq\Delta,\] where the inner optimization problem learns the node classifier \(f_{\theta^{*}}\) on the perturbed graph \(\mathbf{A}\oplus\delta\) with training nodes \(\mathcal{V}_{Tr}\), while the outer optimization problem learns to generate the graph perturbation \(\delta\) that maximally misclassifies the testing nodes \(\mathcal{V}_{Te}\) with the learnt node classifier \(f_{\theta^{*}}\). In practice, the labels of the testing nodes \(\mathcal{V}_{Te}\) are unavailable during training, and thus we cannot directly optimize Equation 4. In addition, the indicator function in Equation 4 is hard to optimize. A common strategy to address these issues is to instead maximize a continuous loss on the _training nodes_ \(\mathcal{V}_{Tr}\) [40, 51]. Specifically, one solves the following alternative bilevel optimization problem: \[\max_{\delta}\sum_{v\in\mathcal{V}_{Tr}}\ell(f_{\theta^{*}}(\mathbf{A}\oplus\delta;v),y_{v}), \tag{5}\] \[\text{s.t. }\theta^{*}=\arg\min_{\theta}\sum_{u\in\mathcal{V}_{Tr}}\ell(f_{\theta}(\mathbf{A}\oplus\delta;u),y_{u}),\,||\delta||_{0}\leq\Delta.\] This is based on the intuition that if a node classifier misclassifies a large number of training nodes, then it generalizes poorly and is thus also very likely to misclassify a large number of testing nodes.

### Certified Robustness to Graph Evasion Attacks

We introduce certified robustness achieved via state-of-the-art randomized smoothing [17, 15, 7]. Randomized smoothing was originally designed to build certifiably robust machine learning classifiers against evasion attacks. It is applicable to any classifier and scalable to large models, e.g., deep neural networks. Here, we introduce randomized smoothing as a defense against graph evasion attacks to GNNs [35]. It consists of the following three steps.

**Constructing a smoothed node classifier.** Given a base node classifier \(f\), a graph \(G\), and a testing node \(u\) with label \(y_{u}\), randomized smoothing builds a _smoothed node classifier_ \(g\) by adding a random noise matrix \(\epsilon\) to \(G\). Formally, \[g(\mathbf{A};u)=\arg\max_{c\in\mathcal{Y}}\text{Pr}(f(\mathbf{A}\oplus\epsilon;u)=c), \tag{6}\] where \(\text{Pr}(f(\mathbf{A}\oplus\epsilon;u)=c)\) is the probability that the base node classifier \(f\) predicts label \(c\) on the noisy graph \(\mathbf{A}\oplus\epsilon\), and \(g(\mathbf{A};u)\) is the label predicted for \(u\) by the smoothed node classifier \(g\). \(\epsilon\) has the following probability distribution in the binary space \(\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\): \[\text{Pr}(\epsilon_{s,t}=0)=\beta,\quad\text{Pr}(\epsilon_{s,t}=1)=1-\beta,\quad\forall s,t\in\mathcal{V}. \tag{7}\]
Equation 7 means that for each pair of nodes \((s,t)\) in the graph, we keep its edge status (i.e., \(A_{s,t}\)) with probability \(\beta\) and change it with probability \(1-\beta\).

**Deriving the certified robustness of GNNs against graph evasion attacks.** Suppose \(g(\mathbf{A};u)=y_{u}\), meaning that the smoothed node classifier \(g\) correctly predicts \(u\). Then, \(g\) provably predicts the correct label for \(u\) as long as the graph perturbation \(\delta\) is bounded. Formally [35]: \[g(\mathbf{A}\oplus\delta;u)=y_{u},\forall||\delta||_{0}\leq K(\underline{p_{y_{u}}}), \tag{8}\] where \(\underline{p_{y_{u}}}\leq\text{Pr}(f(\mathbf{A}\oplus\epsilon;u)=y_{u})\) is a lower bound of the probability that \(f\) predicts the correct label \(y_{u}\) on the noisy graph \(\mathbf{A}\oplus\epsilon\). \(K(\underline{p_{y_{u}}})\) is called node \(u\)'s _certified perturbation size_, indicating that \(g\) provably predicts the correct label when an attacker _arbitrarily_ perturbs at most \(K(\underline{p_{y_{u}}})\) edge statuses in the graph \(G\). _In other words, if a node has a larger certified perturbation size, then it is certifiably more robust to adversarial graph perturbation._

**Computing the certified perturbation size in practice.** Note that \(K(\underline{p_{y_{u}}})\) is (positively) related to \(\underline{p_{y_{u}}}\), which can be estimated via a Monte Carlo algorithm [35, 7]. Specifically, given a node classifier \(f\), a graph \(G(\mathbf{A})\), and a testing node \(u\), we first sample \(N\) random noise matrices \(\epsilon^{1},\cdots,\epsilon^{N}\) from the noise distribution defined in Equation 7 and add each noise matrix \(\epsilon^{j}\) to the graph \(G\) to construct \(N\) noisy graphs \(\mathbf{A}\oplus\epsilon^{1},\cdots,\mathbf{A}\oplus\epsilon^{N}\). Then, we use the node classifier \(f\) to predict \(u\)'s label on the \(N\) noisy graphs and compute the frequency of each label \(c\), i.e., \(N_{c}=\sum_{j=1}^{N}\mathbb{I}(f(\mathbf{A}\oplus\epsilon^{j};u)=c)\) for \(c\in\mathcal{Y}\). Then, we can estimate \(\underline{p_{y_{u}}}\) as \[\underline{p_{y_{u}}}=B(\alpha;N_{y_{u}},N-N_{y_{u}}+1), \tag{9}\] where \(1-\alpha\) is the confidence level and \(B(\alpha;a,b)\) is the \(\alpha\)-th quantile of the Beta distribution with shape parameters \(a\) and \(b\). With \(\underline{p_{y_{u}}}\), we can compute \(K(\underline{p_{y_{u}}})\); the details of computing \(K(\underline{p_{y_{u}}})\) can be seen in [35].

## 3 Certified Robustness to Graph Poisoning Attacks via Randomized Smoothing

Existing randomized smoothing mainly certifies robustness against _evasion attacks_. In this section, we generalize it and derive certified robustness against graph poisoning attacks. Our key idea is to extend randomized smoothing from the _classifier_ perspective to a general _function_ perspective. In particular, we will build a base function and a smoothed function, and then adapt randomized smoothing to certify robustness to poisoning attacks using the smoothed function. Such certified robustness guides us to design more effective graph poisoning attacks, as shown in Section 4.

**Building a base function.** Suppose we have a graph \(G(\mathbf{A})\), training nodes \(\mathcal{V}_{Tr}\), and a GNN algorithm \(\mathcal{A}\) that takes the graph and training nodes as input and learns a node classifier \(f\), i.e., \(f=\mathcal{A}(\mathbf{A},\mathcal{V}_{Tr})\). We use the learnt \(f\) to predict the label for a testing node \(v\).
Then, we can integrate the entire process of training the node classifier \(f\) and testing the node \(v\) into a function \(\tilde{f}(\mathbf{A},\mathcal{V}_{Tr};v)\). In other words, the function \(\tilde{f}\) is the composition of learning the node classifier \(f\) and predicting the node \(v\). We view \(\tilde{f}\) as the base function.

**Constructing a smoothed function.** In graph poisoning attacks, an attacker aims to perturb the graph in the training phase. To apply randomized smoothing, we first add a random noise matrix \(\epsilon\) to the graph, where each entry \(\epsilon_{s,t}\) is drawn from a discrete distribution, e.g., the one defined in Equation 7. As we add random noise \(\epsilon\) to the graph \(G\), the output of the base function \(\tilde{f}\) is also random. Then, inspired by Equation 6, we define the smoothed function \(\tilde{g}\) as follows: \[\tilde{g}(\mathbf{A},\mathcal{V}_{Tr};v)=\arg\max_{c\in\mathcal{Y}}\text{Pr}(\tilde{f}(\mathbf{A}\oplus\epsilon,\mathcal{V}_{Tr};v)=c), \tag{10}\] where \(\text{Pr}(\tilde{f}(\mathbf{A}\oplus\epsilon,\mathcal{V}_{Tr};v)=c)\) is the probability that \(v\) is predicted to have label \(c\) by a GNN model trained on a noisy graph \(\mathbf{A}\oplus\epsilon\) using training nodes \(\mathcal{V}_{Tr}\), and \(\tilde{g}(\mathbf{A},\mathcal{V}_{Tr};v)\) is the label predicted for \(v\) by the smoothed function \(\tilde{g}\).

**Deriving the certified robustness of GNNs against graph poisoning attacks.** An attacker adds an adversarial graph perturbation \(\delta\) to the graph \(G(\mathbf{A})\) to produce a perturbed graph \(\mathbf{A}\oplus\delta\), where \(\delta_{s,t}\) is the perturbation added to change the edge status of the node pair \((s,t)\) in the graph \(G\) during training. Then, we can leverage the results in Equation 8 to derive the certified perturbation size against graph poisoning attacks. Specifically, we have: \[\tilde{g}(\mathbf{A}\oplus\delta,\mathcal{V}_{Tr};v)=y_{v},\;\forall||\delta||_{0}\leq K(\underline{p_{y_{v}}}), \tag{11}\] where \(\underline{p_{y_{v}}}\leq\text{Pr}(\tilde{f}(\mathbf{A}\oplus\epsilon,\mathcal{V}_{Tr};v)=y_{v})\) is a lower-bound probability. Our result means the smoothed function \(\tilde{g}\) provably predicts the correct label for \(v\) when at most \(K(\underline{p_{y_{v}}})\) edge statuses in the graph are _arbitrarily_ poisoned by an attacker _in the training phase_.

**Computing the certified perturbation size in practice.** Given a GNN algorithm \(\mathcal{A}\), a graph \(G(\mathbf{A})\), training nodes \(\mathcal{V}_{Tr}\), the discrete noise distribution defined in Equation 7, and a node \(v\), we first sample \(N\) random noise matrices \(\epsilon^{1},\cdots,\epsilon^{N}\) from the discrete noise distribution and add each noise to the graph \(G(\mathbf{A})\) to construct \(N\) noisy graphs \(\mathbf{A}\oplus\epsilon^{1},\cdots,\mathbf{A}\oplus\epsilon^{N}\). Then, we train \(N\) node classifiers \(\tilde{f}^{1}=\mathcal{A}(\mathbf{A}\oplus\epsilon^{1},\mathcal{V}_{Tr}),\cdots,\tilde{f}^{N}=\mathcal{A}(\mathbf{A}\oplus\epsilon^{N},\mathcal{V}_{Tr})\). We use each of the \(N\) node classifiers to predict \(v\)'s label and compute the frequency of each label \(c\), i.e., \(N_{c}=\sum_{j=1}^{N}\mathbb{I}(\tilde{f}^{j}(\mathbf{A}\oplus\epsilon^{j},\mathcal{V}_{Tr};v)=c)\) for \(c\in\mathcal{Y}\). Finally, we estimate \(\underline{p_{y_{v}}}\) using Equation 9 and use it to calculate the certified perturbation size, following [35].
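For concreteness, the estimate of Equation 9 is a one-sided Clopper-Pearson lower confidence bound, which is a single quantile call in SciPy. The following is a minimal sketch (our own illustration; the example counts are assumptions), with the subsequent computation of \(K(\underline{p_{y_{v}}})\) following [35] and omitted here:

```python
# Minimal sketch of Equation 9: a one-sided (1 - alpha) lower confidence
# bound on the probability that the correct label is predicted.
from scipy.stats import beta

def lower_bound_prob(n_correct, n_total, alpha=0.1):
    # The alpha-th quantile of Beta(n_correct, n_total - n_correct + 1).
    return beta.ppf(alpha, n_correct, n_total - n_correct + 1)

# E.g., if 18 of N = 20 noisy retrainings predict v correctly, then with
# confidence level 1 - alpha = 0.9 the bound is roughly 0.75.
print(lower_bound_prob(n_correct=18, n_total=20, alpha=0.1))
```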
Note that the \(N\) trained node classifiers are re-used to predict node labels and to compute the certified perturbation size for different nodes.

## 4 Certified Robustness Inspired Attack Framework against GNNs

In this section, we design our attack framework to GNNs inspired by certified robustness. Our attack framework can be seamlessly plugged into existing graph evasion and poisoning attacks to design more effective attacks.

### Motivation and Observation

Certified robustness, more specifically the certified perturbation size derived in Section 2.3 and Section 3, was used by _defenders_ to defend GNN models against attacks. From the _attacker_ perspective, however, one can leverage the properties of certified robustness to better attack GNN models. Specifically, the certified perturbation size of a node characterizes the extent to which the GNN model _provably_ and accurately predicts this node against the worst-case graph perturbation. An attacker can use nodes' certified perturbation sizes to _reversely_ reveal the vulnerable region of the graph and leverage this vulnerability to design better attacks. In particular, we have the following observation, which reveals the _inverse_ relationship between a node's certified perturbation size and the perturbation that should be allocated to this node when designing the attack.

_Observation 1: A node with a larger (smaller) certified perturbation size should be disrupted with a smaller (larger) number of perturbed edges._

If a node has a larger (smaller) certified perturbation size, this node is more (less) robust to graph perturbations, and misclassifying it requires allocating a larger (smaller) number of perturbed edges. Thus, to design a more effective attack (i.e., misclassify more nodes) within a perturbation budget, an attacker should avoid disrupting nodes with larger certified perturbation sizes and focus on nodes with smaller certified perturbation sizes.

Based on the above observation, our attack needs to solve three correlated problems: i) How to obtain the node's certified perturbation size for both graph evasion and poisoning attacks? ii) How to allocate the perturbation budget in order to disrupt the nodes with smaller certified perturbation sizes? iii) How to generate the adversarial graph perturbation for both evasion and poisoning attacks? To address i), we adopt the derived node's certified perturbation size against graph evasion attacks (Section 2.3) and graph poisoning attacks (Section 3). To address ii), we design a certified robustness inspired loss; by maximizing it, an attacker puts more effort into disrupting nodes with smaller certified perturbation sizes. To address iii), we design a certified robustness inspired attack framework, in which any existing graph evasion/poisoning attack to GNNs can be adopted as the base attack.

### Certified Robustness Inspired Loss Design

Suppose we have obtained nodes' certified perturbation sizes. To perform a more effective attack, a naive solution is for the attacker to sort all nodes' certified perturbation sizes in ascending order and then carefully perturb the edges to misclassify the sorted nodes one by one until reaching the perturbation budget. However, this solution is both computationally intensive--as it needs to solve an optimization problem for each node--and suboptimal--as all nodes and the associated edges collectively make predictions, and perturbing one edge could affect the predictions of many nodes.
We design a certified perturbation size inspired loss that helps to _automatically_ seek the "ideal" edges to be perturbed, for both evasion and poisoning attacks. In particular, we notice that the loss function of evasion attacks in Equation 3 and of poisoning attacks in Equation 5 is defined per node. We therefore propose to modify the loss function in Equation 3 or Equation 5 by assigning each node a weight and multiplying each node's loss by the corresponding weight, where the node weight has a strong connection with the node's certified perturbation size. Formally, we design the certified perturbation size inspired loss as follows: \[\mathcal{L}_{CR}(f_{\theta},\mathbf{A},\mathcal{V}_{T})=\sum_{u\in\mathcal{V}_{T}}w(u)\cdot\ell(f_{\theta}(\mathbf{A};u),y_{u}), \tag{12}\] where \(\mathcal{V}_{T}=\mathcal{V}_{Te}\) for evasion attacks, \(\mathcal{V}_{T}=\mathcal{V}_{Tr}\) for poisoning attacks, and \(w(u)\) is the weight of node \(u\). Note that when all nodes have equal weights, our certified perturbation size inspired loss reduces to the conventional loss in Equation 3 or Equation 5. Next, we show the _inverse_ relationship between a node's certified perturbation size and its assigned weight.

_Observation 2: A node with a larger (smaller) certified perturbation size is assigned a smaller (larger) weight._

As shown in **Observation 1**, we should disrupt more nodes with smaller certified perturbation sizes, as these nodes are more vulnerable. In other words, we should put larger weights on nodes with smaller certified perturbation sizes to enlarge their losses--making these nodes easier to misclassify with graph perturbations. In contrast, we should put smaller weights on nodes with larger certified perturbation sizes, in order to save the perturbation budget. Formally, we assign the node weight such that \(w(u)\sim 1/K(\underline{p_{y_{u}}})\). There are many ways to assign node weights satisfying this inverse relationship. In this paper, for instance, we define node weights as \[w(u)=\frac{1}{1+\exp(a\cdot K(\underline{p_{y_{u}}}))}, \tag{13}\] where \(a\) is a tunable hyperparameter. We can observe that the node weight decreases exponentially as the node's certified perturbation size increases. Such a property ensures that the majority of perturbed edges are used for disrupting nodes with smaller certified perturbation sizes when performing the attack (see Figure 3). The sketch below illustrates this weighted loss.
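A minimal sketch of Equations 12 and 13 (our own illustration; `cert_sizes` and `node_losses` are assumed to be precomputed tensors):

```python
# Minimal sketch of the certified-robustness-inspired loss (Eqs. 12-13).
import torch

def cr_node_weights(cert_sizes, a=1.0):
    # Equation 13: weights decay exponentially with the certified
    # perturbation size, so vulnerable nodes dominate the attack loss.
    return 1.0 / (1.0 + torch.exp(a * cert_sizes))

def cr_attack_loss(node_losses, cert_sizes, a=1.0):
    # Equation 12: weighted sum of the per-node attack losses.
    return (cr_node_weights(cert_sizes, a) * node_losses).sum()
```

Because the weights enter the loss multiplicatively, any base attack that maximizes a sum of per-node losses can adopt them without further changes.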
### Certified Robustness Inspired Attack Design

Based on the derived certified perturbation size and our certified robustness inspired loss, we now propose to generate graph perturbations against GNNs with both graph evasion and poisoning attacks.

**Certified robustness inspired graph evasion attacks to generate graph perturbations.** We can choose any graph evasion attack to GNNs as the base evasion attack. In particular, given the attack loss of any existing evasion attack, we only need to multiply each node's loss by our certified-perturbation-size-defined node weight. For instance, we can use the PGD attack [40] as the base evasion attack and replace its attack loss by our certified robustness inspired loss \(\mathcal{L}_{CR}\) in Equation 12. Then, we have our certified robustness inspired PGD (CR-PGD) evasion attack that iteratively generates graph perturbations as follows: \[\delta=\text{Proj}_{\mathbb{B}}(\delta+\eta\cdot\nabla_{\delta}\mathcal{L}_{CR}(f_{\theta},\mathbf{A}\oplus\delta,\mathcal{V}_{Te})), \tag{14}\] where \(\eta\) is the learning rate in PGD, \(\mathbb{B}=\{\delta:\mathbf{1}^{T}\delta\leq\Delta,\delta\in[0,1]^{|\mathcal{V}|\times|\mathcal{V}|}\}\) is the set of allowable perturbations, and \[\text{Proj}_{\mathbb{B}}(\mathbf{a})=\begin{cases}\Pi_{[0,1]}(\mathbf{a}-\mu\mathbf{1}),&\text{if }\mathbf{1}^{T}\Pi_{[0,1]}(\mathbf{a}-\mu\mathbf{1})=\Delta,\\ \Pi_{[0,1]}(\mathbf{a}),&\text{if }\mathbf{1}^{T}\Pi_{[0,1]}(\mathbf{a})\leq\Delta,\end{cases} \tag{15}\] where \(\mu>0\), and \(\Pi_{[0,1]}(x)=x\) if \(x\in[0,1]\), 0 if \(x<0\), and 1 if \(x>1\). The final graph perturbation is used to perform the evasion attack. In practice, the scalar \(\mu\) in Equation 15 can be found by a bisection search, as the sketch below shows.
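A minimal PyTorch sketch of the projection in Equation 15 (our own illustration; the iteration count of the bisection is an assumption):

```python
# Minimal sketch of Proj_B: Euclidean projection onto
# {delta in [0, 1]^d : sum(delta) <= budget}, with mu found by bisection.
import torch

def proj_budget_box(a, budget, iters=30):
    clipped = a.clamp(0.0, 1.0)
    if clipped.sum() <= budget:
        return clipped  # second case of Equation 15: already feasible
    # Otherwise find mu > 0 with sum(clip(a - mu)) == budget. The clipped
    # sum is non-increasing in mu, so bisection over [0, max(a)] converges.
    lo, hi = 0.0, float(a.max())
    for _ in range(iters):
        mu = (lo + hi) / 2
        if (a - mu).clamp(0.0, 1.0).sum() > budget:
            lo = mu
        else:
            hi = mu
    return (a - hi).clamp(0.0, 1.0)

def cr_pgd_step(delta, grad, lr, budget):
    # One iteration of Equation 14: gradient ascent, then projection.
    return proj_budget_box(delta + lr * grad, budget)
```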
**Certified robustness inspired graph poisoning attacks to generate graph perturbations.** Likewise, we can choose any graph poisoning attack to GNNs as the base poisoning attack. Given the bilevel loss of any existing poisoning attack, we simply multiply each node's loss by its certified-perturbation-size-defined weight. Specifically, we have \[\max_{\delta}\mathcal{L}_{CR}(f_{\theta^{*}},\mathbf{A}\oplus\delta,\mathcal{V}_{Tr}), \tag{16}\] \[\text{s.t. }\theta^{*}=\arg\min_{\theta}\mathcal{L}_{CR}(f_{\theta},\mathbf{A}\oplus\delta,\mathcal{V}_{Tr}),\,||\delta||_{0}\leq\Delta, \tag{17}\] where \(\mathcal{L}_{CR}(f_{\theta},\mathbf{A}\oplus\delta,\mathcal{V}_{Tr})=\sum_{v\in\mathcal{V}_{Tr}}w(v)\cdot\ell(f_{\theta}(\mathbf{A}\oplus\delta;v),y_{v})\). Then, solving Equation 16 and Equation 17 produces the poisoning graph perturbations within our framework. Algorithm 1 and Algorithm 2 in the Appendix show two instances of applying our CR inspired attack framework to the PGD evasion attack and to the Minmax poisoning attack [40], respectively. To save time, we calculate nodes' certified perturbation sizes every \(INT\) iterations. Compared with PGD, the computational overhead of our CR-PGD is calculating the nodes' certified perturbation sizes with a set of \(N\) sampled noise matrices every \(INT\) iterations, which only involves making predictions on \(N\) noisy graphs and is efficient; these predictions are independent and can be parallelized. Compared with Minmax, the computational overhead of our CR-Minmax is independently training a small number \(N\) of models every \(INT\) iterations, which can be implemented in parallel.

## 5 Experiments

### Setup

**Datasets and GNN models.** Following [40, 51], we evaluate our attacks on benchmark graph datasets, i.e., Cora, Citeseer [29], and BlogCatalog [27]. Table 3 in the Appendix shows basic statistics of these graphs. We choose the Graph Convolutional Network (GCN) [14] as the targeted GNN model, also following [40, 51].

**Base attack methods.** For graph evasion attacks, we choose the PGD attack [40]2 with the cross-entropy loss and with the CW loss [3] as the base attack methods, and denote the two attacks as CE-PGD and CW-PGD, respectively. For graph poisoning attacks, we choose the Minmax attack [40] and the MetaTrain attack [51]3 as the base attack methods. We apply our CR inspired attack framework to these evasion and poisoning attacks and denote the resulting attacks as CR-CE-PGD, CR-CW-PGD, CR-Minmax, and CR-MetaTrain, respectively. All attacks are implemented in PyTorch and run on a Linux server with a 96-core 3.0 GHz CPU, 768 GB RAM, and 8 Nvidia A100 GPUs.

Footnote 2: [https://github.com/KaidiXu/GCN_ADV_Train](https://github.com/KaidiXu/GCN_ADV_Train)

Footnote 3: [https://www.kdd.in.tum.de/gnn-meta-attack](https://www.kdd.in.tum.de/gnn-meta-attack)

**Training and testing.** Following [51], we split the datasets into 10% training nodes, 10% validation nodes, and 80% testing nodes. The validation nodes are used to tune the hyperparameters, and the testing nodes are used to evaluate the attack performance. We repeat all attacks on 5 different splits of the training/validation/testing nodes and report the mean attack accuracy on the testing nodes, i.e., the fraction of testing nodes misclassified after the attack.

**Parameter settings.** Unless otherwise mentioned, we set the perturbation budget \(\Delta\) to 20% of the total number of edges in a graph (before the attack). We set the parameter \(\beta=0.999\) in the noise distribution of Equation 7 and the confidence level \(1-\alpha=0.9\); the number of Monte Carlo samples \(N\) used to calculate nodes' certified perturbation sizes is set to 200 for evasion attacks and 20 for poisoning attacks, and \(a=1\) in Equation 13. The number of iterations \(T\) is 100 and 10, and the interval is \(INT=10\) and \(INT=2\), for evasion and poisoning attacks, respectively. The other hyperparameters in CE-PGD, CW-PGD, Minmax, and MetaTrain are selected based on their source code, and we set equal values in our CR inspired attack counterparts. We also study the impact of the important hyperparameters that could affect our attack performance: \(\Delta\), \(N\), \(1-\alpha\), \(\beta\), and \(a\). When studying the impact of a hyperparameter, we fix the other hyperparameters to their default values.

### Attack Results

**Our attack framework is effective.** Figure 1 and Figure 2 show the evasion attack accuracy and the poisoning attack accuracy of the base attacks and of their counterparts with our attack framework vs. the perturbation budget, respectively. We observe that _our certified robustness inspired attack framework can enhance the base attack performance on all datasets_. For instance, when attacking GCN on Cora with a perturbation ratio of \(20\%\), our CR-CE-PGD and CR-CW-PGD have a relative \(7.0\%\) and \(5.6\%\) gain over the CE-PGD and CW-PGD evasion attacks. Moreover, CR-Minmax and CR-MetaTrain have a relative \(12.2\%\) and \(10.3\%\) gain over the Minmax and MetaTrain poisoning attacks. These results demonstrate that nodes' certified robustness can indeed guide our attack framework to find the more vulnerable regions of the graph to perturb, which helps to better allocate the perturbation budget and thus makes the base attacks with our framework misclassify more nodes.

Figure 1: Evasion attack accuracy vs. perturbation budget.

Figure 2: Poisoning attack accuracy vs. perturbation budget.

To further understand the effectiveness of our framework, we visualize the distribution of the perturbed edges vs. the nodes' certified perturbation sizes. Specifically, we first obtain the perturbed edges via the base attacks and via our CR inspired attacks, and we calculate the testing/training nodes' certified perturbation sizes for evasion/poisoning attacks, respectively. If a perturbed edge is connected to a testing/training node in the evasion/poisoning attack, we map this perturbed edge to that node's certified perturbation size.
Our intuition is that a perturbed edge affects its connected node the most. Figure 3 shows the results on Citeseer (the conclusions on the other datasets are similar). We can see that in our CR inspired attacks, the majority of the perturbed edges connect to testing/training nodes with relatively smaller certified perturbation sizes. In contrast, a significant number of the perturbed edges in the base attacks connect to nodes with relatively larger certified perturbation sizes. Hence, under a fixed perturbation budget, our attacks can misclassify more nodes.

**Comparing with other weight design strategies.** Recall that our weight design is based on nodes' certified robustness: nodes that are less provably robust to graph perturbations are assigned larger weights, in order to enlarge their attack losses. Here, we consider three other possible strategies for designing node weights that aim to _empirically_ capture this property: 1) **Random**, where we uniformly assign node weights in \([0,1]\) at random; 2) **Node degree**, where a node with a smaller degree might be less robust to graph perturbations and is thus assigned a larger weight; following our weight design, we set \(w_{\text{deg}}(u)=\frac{1}{1+\exp(a\cdot\text{deg}(u))}\); 3) **Node centrality** [25], where a node with a smaller centrality might be less robust to graph perturbations and is thus assigned a larger weight; similarly, we set \(w_{\text{cen}}(u)=\frac{1}{1+\exp(a\cdot\text{cen}(u))}\). As a baseline, we also consider using no node weights. Table 1 shows the attack results of applying these weight design strategies to the existing graph evasion and poisoning attacks. We have the following observations: 1) **Random** performs even worse than **No weight**. This indicates that an inappropriate weight design can be harmful to the attack. 2) **Degree** and **Centrality** perform slightly better than **No weight**. One possible reason is that nodes with larger degree and centrality are empirically more robust to perturbations, as also observed in previous works, e.g., [34, 50]. 3) Our weight design strategy performs best. This is because our weight design _intrinsically_ captures nodes' certified robustness and thus yields more effective attacks.

**Ablation study.** In this experiment, we study the impact of the hyperparameters \(\beta\) in Equation 7, the confidence level \(1-\alpha\) in Equation 9, \(N\) in Equation 9, and \(a\) in Equation 13, as well as the running time vs. \(N\). Figure 4 shows the results for \(\beta\), \(1-\alpha\), \(N\), and the running time vs. \(N\). We observe that: 1) Our attack is not sensitive to \(\beta\). 2) Our attack becomes slightly worse as the confidence level \(1-\alpha\) increases. This observation suggests that an attacker should set a relatively small confidence level \(1-\alpha\) in practice. 3) Our attack becomes better as \(N\) increases, but already works well with a relatively small \(N\). From this observation, an attacker can choose a small \(N\) in practice to save time and cost when performing the attack. 4) The running time does not increase much with \(N\) for the evasion attacks and is linear in \(N\) for the poisoning attacks, consistent with our analysis in Section 4.3. Table 2 shows the impact of \(a\). We see that the performance is stable across different \(a\). This is largely because our weight design already ensures that the node weight decreases exponentially with the node's certified perturbation size.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline **Dataset** & **Method** & **CW-PGD** & **CE-PGD** & **Minmax** & **MetaTrain** \\ \hline \multirow{5}{*}{**Cora**} & **No weight** & 0.74 & 0.71 & 0.62 & 0.68 \\ & **Random** & 0.77 & 0.75 & 0.65 & 0.72 \\ & **Degree** & 0.72 & 0.70 & 0.61 & 0.66 \\ & **Centrality** & 0.73 & 0.70 & 0.60 & 0.66 \\ & **Ours** & **0.70** & **0.60** & **0.55** & **0.62** \\ \hline \multirow{5}{*}{**Citeseer**} & **No weight** & 0.64 & 0.63 & 0.63 & 0.61 \\ & **Random** & 0.66 & 0.66 & 0.68 & 0.64 \\ & **Degree** & 0.64 & 0.61 & 0.60 & 0.59 \\ & **Centrality** & 0.64 & 0.62 & 0.60 & 0.38 \\ & **Ours** & **0.60** & **0.60** & **0.57** & **0.52** \\ \hline \multirow{5}{*}{**BlogCatalog**} & **No weight** & 0.48 & 0.51 & 0.53 & 0.31 \\ & **Random** & 0.54 & 0.53 & 0.40 & 0.35 \\ & **Degree** & 0.46 & 0.30 & 0.32 & 0.28 \\ & **Centrality** & 0.47 & 0.49 & 0.32 & 0.27 \\ & **Ours** & **0.44** & **0.46** & **0.29** & **0.24** \\ \hline \end{tabular}
\end{table}
Table 1: Attack performance with different weight designs.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline **Dataset** & \(a\) & **CR-CW-PGD** & **CR-CE-PGD** & **CR-Minmax** & **CR-MetaTrain** \\ \hline \multirow{3}{*}{**Cora**} & \(0.5\) & 0.70 & 0.67 & 0.55 & 0.62 \\ & \(1\) & 0.70 & 0.60 & 0.53 & 0.62 \\ & \(2\) & 0.70 & 0.66 & 0.54 & 0.62 \\ \hline \multirow{3}{*}{**Citeseer**} & \(0.5\) & 0.60 & 0.61 & 0.58 & 0.54 \\ & \(1\) & 0.60 & 0.50 & 0.57 & 0.52 \\ & \(2\) & 0.60 & 0.59 & 0.57 & 0.53 \\ \hline \multirow{3}{*}{**BlogCatalog**} & \(0.5\) & 0.44 & 0.47 & 0.31 & 0.25 \\ & \(1\) & 0.44 & 0.46 & 0.29 & 0.24 \\ & \(2\) & 0.44 & 0.46 & 0.29 & 0.24 \\ \hline \end{tabular}
\end{table}
Table 2: Attack performance with different \(a\).

Figure 3: Distribution of the perturbed edges vs. node's certified perturbation size.

## 6 Discussion

**Evaluations on other GNNs.** We mainly follow existing attacks [40, 51], which only evaluate GCN. Here, we also test SGC [38] on Cora, and the results show that our CR-based attacks also have a 6%-12% gain over the base attacks. This validates that our strategy is generic for designing better attacks.

**Transferability between different GNNs.** We evaluate the transferability of the graph perturbations generated by our 4 CR-based attacks on GCN to SGC on Cora, with an attack budget of 15. The accuracies on SGC under the 4 attacks are 73%, 76%, 66%, and 67%, while the accuracies on GCN are 71%, 73%, 63%, and 65%, respectively. This implies a promising transferability between GCN and SGC.

**Defenses against our attacks.** Almost all existing empirical defenses [10, 11, 13, 39, 45, 48, 49] are ineffective against adaptive attacks [24]. We adopt adversarial training [22], the only known effective empirical defense. Specifically, we first generate graph perturbations for target nodes via our attack and use the perturbed graph to retrain the GNN with the true node labels; the retrained GNN is then used for evaluation. We test on Cora and find that this defense is effective to some extent, but incurs a nonnegligible utility loss. For instance, with a budget of 15, the accuracy under the CR-CW-PGD (CR-CE-PGD) attack increases from 73% (71%) to 76% (73%), but the normal accuracy drops from 84% to 73% (72%).
## 7 Related Work

**Attacking graph neural networks (GNNs).** We classify the existing attacks on GNNs into evasion attacks [8, 20, 21, 23, 36, 39, 50] and poisoning attacks [8, 19, 30, 40, 46, 50, 51]. For example, Xu et al. [40] proposed an untargeted PGD graph evasion attack against GCN. The PGD attack leverages first-order optimization, generates discrete graph perturbations by convexly relaxing the binary graph structure, and obtains state-of-the-art attack performance. Regarding graph poisoning attacks, Zugner et al. [51] proposed a graph poisoning attack, called Metattack, that perturbs the whole graph based on meta-learning. Our attack framework can be seamlessly plugged into these graph evasion and poisoning attacks and enhance their attack performance.

**Attacking other graph-based methods.** Besides attacking GNNs, other adversarial attacks on graph data include attacking graph-based clustering [6], graph-based collective classification [32, 34], graph embedding [1, 4, 5, 9], community detection [18], graph matching [47], etc. For instance, Chen et al. [6] proposed a practical attack against spectral clustering, a well-known graph-based clustering method. Wang and Gong [34] designed an attack on linearized belief propagation, a collective classification method, by modifying the graph structure.

**Certified robustness and randomized smoothing.** Randomized smoothing [7, 14, 15, 16, 17, 42] was the first method to obtain certified robustness for large models and achieves state-of-the-art performance. For instance, Cohen et al. [7] leveraged the Neyman-Pearson Lemma [26] to obtain a tight \(l_{2}\) certified robustness guarantee for randomized smoothing with Gaussian noise on normally trained image models. Salman et al. [28] improved the certified robustness by combining the design of an adaptive attack against smoothed soft image classifiers with adversarial training on the attacked classifiers. [12], [35], and [2] applied randomized smoothing in the graph domain and derived certified robustness for community detection and node/graph classification methods against graph perturbations. In this paper, we instead use randomized smoothing to design better attacks against GNNs.

## 8 Conclusion

We study graph evasion and poisoning attacks to GNNs and propose a novel attack framework motivated by certified robustness. To the best of our knowledge, this is the first work that uses certified robustness for an attack purpose. In particular, we first derive the node's certified perturbation size by extending randomized smoothing from the classifier perspective to a general function perspective. Based on it, we design certified robustness inspired node weights, which can be seamlessly plugged into the existing graph perturbation attacks' losses, producing our certified robustness inspired attack loss and attack framework. Evaluations on multiple datasets demonstrate that the performance of existing attacks can be significantly enhanced by applying our attack framework.

**Acknowledgments.** This work was supported by Wang's startup funding, the Cisco Research Award, and the National Science Foundation under grant No. 2216926. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies.

Figure 4: Impact of (a) \(\beta\), (b) \(1-\alpha\), (c) \(N\) (the number in brackets on the x-axis is for poisoning attacks), and (d) running time vs. \(N\), on Citeseer.
Note that in (c), the "No evasion attack" and "No poisoning attack" curves overlap; in (d), \(INT=10\) (\(2\)) for our evasion (poisoning) attacks.
2303.05490
On the Expressiveness and Generalization of Hypergraph Neural Networks
This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs). Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes. Our first contribution is a fine-grained analysis of the expressiveness of HyperGNNs, that is, the set of functions that they can realize. Our result is a hierarchy of problems they can solve, defined in terms of various hyperparameters such as depths and edge arities. Next, we analyze the learning properties of these neural networks, especially focusing on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization. Our theoretical results are further supported by empirical results.
Zhezheng Luo, Jiayuan Mao, Joshua B. Tenenbaum, Leslie Pack Kaelbling
2023-03-09T18:42:18Z
http://arxiv.org/abs/2303.05490v1
# On the Expressiveness and Generalization of Hypergraph Neural Networks

###### Abstract

This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs). Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes. Our first contribution is a fine-grained analysis of the expressiveness of HyperGNNs, that is, the set of functions that they can realize. Our result is a hierarchy of problems they can solve, defined in terms of various hyperparameters such as depths and edge arities. Next, we analyze the learning properties of these neural networks, especially focusing on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization. Our theoretical results are further supported by empirical results.

## 1 Introduction

Reasoning over graph-structured data is an important task in many applications, including molecule analysis, social network modeling, and knowledge graph reasoning [1; 2; 3]. While we have seen great success of various relational neural networks, such as Graph Neural Networks [GNNs; 4] and Neural Logic Machines [NLM; 5], in a variety of applications [6; 7; 8], we do not yet have a full understanding of how different design parameters, such as the depth of the neural network, affect the expressiveness of these models, or how effectively these models generalize from limited data.

This paper analyzes the _expressiveness_ and _generalization_ of relational neural networks applied to _hypergraphs_, which are graphs whose edges may connect more than two nodes. The literature has shown that even when the inputs and outputs of models have only unary and binary relations, allowing intermediate hyperedge representations increases the expressiveness [9; 10]. In this paper, we further formally show the "if and only if" conditions for the expressive power with respect to the edge arity. That is, \(k\)-ary hypergraph neural networks are sufficient and necessary for realizing FOC-\(k\), a fragment of first-order logic with counting quantification that involves at most \(k\) variables. This is a helpful result because we can now determine whether a specific hypergraph neural network can solve a problem by understanding what form of logic formula can represent the solution to this problem. Next, we formally describe the relationship between expressiveness and non-constant-depth networks. We state a conjecture about the "depth hierarchy" and connect its potential proof to the distributed computing literature. Furthermore, we prove that, under certain assumptions, it is possible to train a hypergraph neural network on a finite set of small graphs such that it generalizes to arbitrarily large graphs. This ability results from the weight-sharing nature of hypergraph neural networks. We hope our work can serve as a foundation for designing hypergraph neural networks: to solve a specific problem, what arity is needed? What depth is needed? Will the model generalize structurally (i.e., to larger graphs)? Our theoretical results are further supported by experiments, which serve as empirical demonstrations.

## 2 Hypergraph Reasoning Problems and Hypergraph Neural Networks

A _hypergraph representation_ \(G\) is a tuple \((V,X)\), where \(V\) is a set of entities (nodes) and \(X\) is a set of _hypergraph representation functions_.
Specifically, \(X=\{X_{0},X_{1},X_{2},\cdots,X_{k}\}\), where \(X_{j}:(v_{1},v_{2},\cdots,v_{j})\rightarrow\mathcal{S}\) is a function mapping every tuple of \(j\) nodes to a value. We call \(j\) the _arity_ of the hyperedge and \(k\) the maximum arity of the input hyperedges. The range \(\mathcal{S}\) can be any set of discrete labels that describes a relation type, a scalar number (e.g., the length of an edge), or a vector. We use the arity-0 representation \(X_{0}(\emptyset)\rightarrow\mathcal{S}\) to represent any global properties of the graph. A _graph reasoning function_ \(f\) is a mapping from a hypergraph representation \(G=(V,X)\) to another hyperedge representation function \(Y\) on \(V\). As concrete examples: asking whether a graph is fully connected is a graph classification problem, where the output \(Y=\{Y_{0}\}\) and \(Y_{0}(\emptyset)\rightarrow\mathcal{S}^{\prime}=\{0,1\}\) is a global label; finding the set of disconnected subgraphs of size \(k\) is a \(k\)-ary hyperedge classification problem, where the output \(Y=\{Y_{k}\}\) is a label for each \(k\)-ary hyperedge.

There are two main motivations for and constructions of neural networks applied to graph reasoning problems: message-passing-based and first-order-logic-inspired. Both approaches construct the computation graph layer by layer. The input is the features of nodes and hyperedges, while the output is the per-node or per-edge prediction of desired properties, depending on the task. In a nutshell, within each layer, _message-passing-based_ hypergraph neural networks, such as Higher-Order GNNs [11], perform message passing between each hyperedge and its neighbours. Specifically, the \(j\)-th neighbour set of a hyperedge \(u=(x_{1},x_{2},\cdots,x_{i})\) of arity \(i\) is \(N_{j}(u)=\{(x_{1},x_{2},\cdots,x_{j-1},r,x_{j+1},\cdots,x_{i})\}\), where \(r\in V\). The set of all neighbours of \(u\) is then the union of the \(N_{j}\)'s, for \(j=1,2,\cdots,i\).

On the other hand, first-order-logic-inspired hypergraph neural networks are built to emulate first-order logic formulas. Neural Logic Machines [NLM; 5] are defined in terms of a set of input hyperedges; each hyperedge of arity \(k\) is represented by a vector of (possibly real) values obtained by applying all of the \(k\)-ary predicates in the domain to the tuple of vertices it connects. Each layer in an NLM learns to apply a linear transformation with a nonlinear activation and quantification operators (analogous to the \(\forall\) and \(\exists\) quantifiers of first-order logic) to these values. It is easy to prove, by construction, that given a sufficient number of layers and a sufficient maximum arity, NLMs can learn to realize any first-order logic formula. For readers who are not familiar with HO-GNNs [11] and NLMs [5], we include a mathematical summary of their computation graphs in Appendix A. Our analysis starts from the following theorem.

**Theorem 2.1**.: HO-GNNs [11] are equivalent to NLMs in terms of expressiveness. Specifically, a \(B\)-ary HO-GNN is equivalent to an NLM applied to \((B+1)\)-ary hyperedges.

Proofs are in Appendix A.3. Given Theorem 2.1, we can focus on a single type of hypergraph neural network. Specifically, we focus on Neural Logic Machines [NLM; 5] because their architecture naturally aligns with first-order logic formula structures, which will aid some of our analysis. An NLM is characterized by the hyperparameters \(D\) (depth) and \(B\) (maximum arity); the sketch below illustrates one NLM-style layer.
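As a concrete illustration of these quantification operators, the following sketch (our own simplification, not the reference implementation of [5]) shows one NLM-style layer over unary and binary representations, with max/min reductions playing the roles of \(\exists\)/\(\forall\):

```python
# Minimal sketch of one NLM-style layer on unary [n, c1] and binary
# [n, n, c2] representations. A simplification for illustration only.
import torch
import torch.nn as nn

class NLMLayerSketch(nn.Module):
    def __init__(self, c1, c2, out1, out2):
        super().__init__()
        # Unary inputs: own features plus exists/forall reductions of binary.
        self.mlp1 = nn.Linear(c1 + 2 * c2, out1)
        # Binary inputs: both argument orders plus expanded unary features.
        self.mlp2 = nn.Linear(2 * c2 + 2 * c1, out2)

    def forward(self, unary, binary):
        n = unary.shape[0]
        exists = binary.max(dim=1).values  # like "exists y: R(x, y)"
        forall = binary.min(dim=1).values  # like "forall y: R(x, y)"
        new_unary = torch.relu(self.mlp1(torch.cat([unary, exists, forall], dim=-1)))
        expand_x = unary.unsqueeze(1).expand(n, n, -1)  # P(x), broadcast over y
        expand_y = unary.unsqueeze(0).expand(n, n, -1)  # P(y), broadcast over x
        swapped = binary.transpose(0, 1)                # R(y, x)
        new_binary = torch.relu(
            self.mlp2(torch.cat([binary, swapped, expand_x, expand_y], dim=-1)))
        return new_unary, new_binary
```

Stacking \(D\) such layers, with the analogous expand/reduce operations up to arity \(B\), yields the NLM[\(D\), \(B\)] families analyzed below.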
We are going to assume that \(B\) is a constant, but \(D\) can be dependent on the size of the input graph. We will use NLM[\(D\), \(B\)] to denote an NLM family with depth \(D\) and max arity \(B\). Other parameters, such as the width of the neural networks, affect the precise details of what functions can be realized, as they do in a regular neural network, but do not affect the analyses in this extended abstract. Furthermore, we will be focusing on neural networks with bounded precision, and briefly discuss how our results generalize to unbounded precision cases. ## 3 Expressiveness of Relational Neural Networks We start from a formal definition of hypergraph neural network expressiveness. **Definition 3.1** (Expressiveness).: We say a model family \(\mathcal{M}_{1}\) is _at least as expressive as \(\mathcal{M}_{2}\)_, written as \(\mathcal{M}_{1}\succcurlyeq\mathcal{M}_{2}\), if for all \(M_{2}\in\mathcal{M}_{2}\), there exists \(M_{1}\in\mathcal{M}_{1}\) such that \(M_{1}\) can realize \(M_{2}\). A model family \(\mathcal{M}_{1}\) is _more expressive than \(\mathcal{M}_{2}\)_, written as \(\mathcal{M}_{1}\succ\mathcal{M}_{2}\), if \(\mathcal{M}_{1}\succcurlyeq\mathcal{M}_{2}\) and \(\exists M_{1}\in\mathcal{M}_{1}\), \(\forall M_{2}\in\mathcal{M}_{2}\), \(M_{2}\) cannot realize \(M_{1}\). **Arity Hierarchy** We first aim to quantify how the maximum arity \(B\) of the network's representation affects its expressiveness and find that, in short, even if the inputs and outputs of neural networks are of low arity, the higher the maximum arity for intermediate layers, the more expressive the NLM is. **Corollary 3.1** (Arity Hierarchy).: For any maximum arity \(B\), there exists a depth \(D^{*}\) such that: \(\forall D\geq D^{*}\), NLM[\(D\), \(B+1\)] is more expressive than NLM[\(D\), \(B\)]. This result applies to both fixed-precision and unbounded-precision networks. Here, by fixed-precision, we mean that the results of intermediate layers (tensors) are constant-sized (e.g., \(W\) bits per entry). Practical GNNs are all fixed-precision because real number types in modern computers have finite precision. _Proof sketch:_ Our proof slightly extends the proof of Morris et al. [11]. First, the set of graphs distinguishable by NLM[\(D\), \(B\)] is bounded by graphs distinguishable by a \(D\)-round order-\(B\) Weisfeiler-Leman test [12]. If models in NLM[\(D\), \(B\)] cannot generate different outputs for two distinct hypergraphs \(G_{1}\) and \(G_{2}\), but there exists \(M\in\) NLM[\(D\), \(B+1\)] that _can_ generate different outputs for \(G_{1}\) and \(G_{2}\), then we can construct a graph classification function \(f\) that NLM[\(D\), \(B+1\)] (with some fixed precision) can realize but NLM[\(D\), \(B\)] (even with unbounded precision) cannot.* The full proof is described in Appendix B.1. Footnote *: Note that the arity hierarchy is applied to fixed-precision and unbounded-precision separately. For example, NLM[\(D\), \(B\)] with unbounded precision is incomparable with NLM[\(D\), \(B+1\)] with fixed precision. It is also important to quantify the minimum arity for realizing certain graph reasoning functions. **Corollary 3.2** (FOL realization bounds).: Let \(\text{FOC}_{B}\) denote a fragment of first-order logic with at most \(B\) variables, extended with counting quantifiers of the form \(\exists^{\geq n}\phi\), which state that there are at least \(n\) nodes satisfying formula \(\phi\) [13].
* (Upper Bound) Any function \(f\) in \(\text{FOC}_{B}\) can be realized by NLM[\(D\), \(B\)] for some \(D\). * (Lower Bound) There exists a function \(f\in\text{FOC}_{B}\) such that for all \(D\), \(f\) cannot be realized by NLM[\(D\), \(B-1\)]. _Proof:_ The upper bound part of the claim has been proved by Barcelo et al. [14] for \(B=2\). The results generalize easily to arbitrary \(B\) because the counting quantifiers can be realized by sum aggregation. The lower bound part can be proved by applying Section 5 of [13], in which they show that \(\text{FOC}_{B}\) is equivalent to a \((B-1)\)-dimensional WL test in distinguishing non-isomorphic graphs. Given that NLM[\(D\), \(B-1\)] is equivalent to the \((B-2)\)-dimensional WL test of graph isomorphism, there must be an \(\text{FOC}_{B}\) formula that distinguishes two non-isomorphic graphs that NLM[\(D\), \(B-1\)] cannot distinguish. Hence, \(\text{FOC}_{B}\) cannot be realized by NLM[\(\cdot\), \(B-1\)]. **Depth Hierarchy** We now study the dependence of the expressiveness of NLMs on depth \(D\). Neural networks are generally defined to have a fixed depth, but allowing them to have a depth that is dependent on the number of nodes \(n=|V|\) in the graph, in many cases, can substantially increase their expressive power [15, see also Theorem 3.4 and Appendix B for examples]. In the following, we define a _depth hierarchy_ by analogy to the _time hierarchy_ in computational complexity theory [16], and we extend our notation to let NLM[\(O(f(n)),B\)] denote the class of adaptive-depth NLMs in which the growth-rate of depth \(D\) is bounded by \(O(f(n))\). **Conjecture 3.3** (Depth hierarchy).: For any maximum arity \(B\), for any two functions \(f\) and \(g\), if \(g(n)=o(f(n)/\log n)\), that is, \(f\) grows logarithmically more quickly than \(g\), then fixed-precision NLM[\(O(f(n)),B\)] is more expressive than fixed-precision NLM[\(O(g(n)),B\)]. There is a closely related result for the _congested clique_ model in distributed computing, where [17] proved that \(\text{CLIQUE}(g(n))\subsetneqq\text{CLIQUE}(f(n))\) if \(g(n)=o(f(n))\). This result does not have the \(\log n\) gap because the congested clique model allows \(\log n\) bits to transmit between nodes at each iteration, while a fixed-precision NLM allows only a constant number of bits. The reason why the result on the congested clique cannot be applied to fixed-precision NLMs is that the congested clique assumes an unbounded-precision representation for each individual node. However, Conjecture 3.3 is not true for NLMs with unbounded precision, because there is an upper bound depth \(O(n^{B-1})\) for a model's expressive power (see Appendix B.2 for a formal statement and the proof). That is, an unbounded-precision NLM cannot achieve stronger expressiveness by increasing its depth beyond \(O(n^{B-1})\). It is important to point out that, to realize a specific graph reasoning function, NLMs with different maximum arity \(B\) may require different depth \(D\). Furer [18] provides a general construction for problems that higher-dimensional NLMs can solve in asymptotically smaller depth than lower-dimensional NLMs. In the following we give a concrete example for computing _S-T Connectivity-\(k\)_, which asks whether there is a path from \(S\) to \(T\) in a graph with length \(\leq k\). **Theorem 3.4** (S-T Connectivity-\(k\) with Different Max Arity).: For any function \(f(k)\), if \(f(k)=o(k)\), NLM[\(O(f(k))\), \(2\)] cannot realize S-T Connectivity-\(k\).
That is, S-T Connectivity-\(k\) requires depth at least \(O(k)\) for a relational neural network with a maximum arity of \(B=2\). However, S-T Connectivity-\(k\) _can_ be realized by NLM[\(O(\log k)\), \(3\)]. Proof sketch.: For any integer \(k\), we can construct a graph with two chains of length \(k\), so that if we mark two of the four ends as \(S\) or \(T\), any NLM\([k-1,2]\) cannot tell whether \(S\) and \(T\) are on the same chain. The full proof is described in Appendix B.3. There are many important graph reasoning tasks that do not have known depth lower bounds, including all-pair connectivity and shortest distance [19, 20]. In Appendix B.3, we discuss the concrete complexity bounds for a series of graph reasoning problems. ## 4 Learning and Generalization in Relational Neural Networks Given our understanding of what functions can be realized by NLMs, we move on to the problems of learning them: Can we effectively learn an NLM to solve a desired task given a sufficient number of input-output examples? In this paper, we show that applying _enumerative training_ with examples up to some fixed graph size can ensure that the trained neural network will generalize to all graphs _larger_ than those appearing in the training set. A critical determinant of the generalization ability for NLMs is the aggregation function. Specifically, Xu et al. [21] have shown that using _sum_ as the aggregation function provides maximum expressiveness for graph neural networks. However, sum aggregation cannot be implemented in fixed-precision models, because as the graph size \(n\) increases, the range of the sum aggregation also increases. **Definition 4.1** (Fixed-precision aggregation function).: An aggregation function is _fixed precision_ if it maps from any finite _set_ of inputs with values drawn from _finite domains_ to a _fixed finite_ set of possible output values; that is, the cardinality of the range of the function cannot grow with the number of elements in the input set. Two useful fixed-precision aggregation functions are _max_, which computes the dimension-wise maximum over the set of input values, and _fixed-precision mean_, which approximates the dimension-wise mean to a fixed decimal place. In order to focus on structural generalization in this section, we consider an _enumerative_ training paradigm. When the input hypergraph representation domain \(\mathcal{S}\) is a finite set, we can enumerate the set \(\mathcal{G}_{\leq N}\) of all possible input hypergraph representations of size bounded by \(N\). We first enumerate all graph sizes \(n\leq N\); for each \(n\), we enumerate all possible values assigned to the hyperedges in the input. Given training size \(N\), we enumerate all inputs in \(\mathcal{G}_{\leq N}\), associate with each one the corresponding ground-truth output representation, and train the model with these input-output pairs. This has much stronger data requirements than the standard sampling-based training mechanisms in machine learning. In practice, this can be approximated well when the input domain \(\mathcal{S}\) is small and the input data distribution is approximately uniformly distributed. The enumerative learning setting is studied by the _language identification in the limit_ community [22], in which it is called _complete presentation_. This is an interesting learning setting because even if the domain for each individual hyperedge representation is finite, as the graph size can go arbitrarily large, the number of possible inputs is enumerable but unbounded.
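As a toy rendering of these two ingredients (not the paper's experimental code), the Python sketch below shows a fixed-precision aggregator and the complete-presentation enumeration for undirected graphs with binary edge labels; the helper names are invented for the example.

```python
from itertools import combinations, product
import numpy as np

def fixed_precision_mean(values, decimals=2):
    """A fixed-precision aggregator: the dimension-wise mean rounded to a
    fixed decimal place, so its range does not grow with the input set size."""
    return np.round(np.mean(values, axis=0), decimals)

def enumerate_graphs(max_n):
    """'Complete presentation': enumerate every undirected graph with at
    most max_n nodes and binary edge labels."""
    for n in range(1, max_n + 1):
        pairs = list(combinations(range(n), 2))
        for labels in product([0, 1], repeat=len(pairs)):
            yield n, dict(zip(pairs, labels))

# the training set is finite for every bound N, though it grows quickly:
print(sum(1 for _ in enumerate_graphs(4)))   # 1 + 2 + 8 + 64 = 75 graphs
```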
**Theorem 4.1** (Fixed-precision generalization under complete presentation).: For any hypergraph reasoning function \(f\), if it can be realized by a fixed-precision relational neural network model \(\mathcal{M}\), then there exists an integer \(N\), such that if we train the model with complete presentation on all input hypergraph representations with size bounded by \(N\), \(\mathcal{G}_{\leq N}\), then for all \(M\in\mathcal{M}\), \[\sum_{G\in\mathcal{G}_{\leq N}}1[M(G)\neq f(G)]=0\implies\forall G\in\mathcal{G}_{\infty}:M(G)=f(G).\] That is, as long as \(M\) fits all training examples, it will generalize to all possible hypergraphs in \(\mathcal{G}_{\infty}\). Proof.: The key observation is that for any fixed vector representation length \(W\), there are only a finite number of distinct models in a fixed-precision NLM family, _independent of the graph size \(n\)_. Let \(W_{b}\) be the number of bits in each intermediate representation of a fixed-precision NLM. There are at most \((2^{W_{b}})^{2^{W_{b}}}\) different mappings from inputs to outputs. Hence, if \(N\) is sufficiently large to enumerate all input hypergraphs, we can always identify the correct model in the hypothesis space. Our results are related to the _algorithmic alignment_ approach [23, 24]. In contrast to their Probably Approximately Correct (PAC) Learning bounds for sample efficiency, our expressiveness results directly quantify whether a hypergraph neural network can be trained to realize a specific function. ## 5 Related Work Solving problems on graphs of arbitrary size is studied in many fields. NLMs can be viewed as circuit families with constrained architecture. In distributed computation, the congested clique model can be viewed as 2-arity NLMs, where nodes have identities as extra information. Common graph problems including sub-structure detection [25, 26] and connectivity [19] are studied for lower bounds in terms of depth, width, and communication. This has been connected to GNNs for deriving expressiveness bounds [27]. Studies have been conducted on the expressiveness of GNNs and their variants. Xu et al. [21] provide an illuminating characterization of GNN expressiveness in terms of the WL graph isomorphism test. Azizian and Lelarge [9] analyze the expressiveness of higher-order Folklore GNNs by connecting them with high-dimensional WL-tests. We obtain similar results in the arity hierarchy. Barcelo et al. [14] reviewed GNNs from the logical perspective and rigorously refined their logical expressiveness with respect to fragments of first-order logic. Dong et al. [5] proposed Neural Logical Machines (NLMs) to reason about higher-order relations, and showed that increasing order increases expressiveness. It is also possible to gain expressiveness using unbounded computation time, as shown by the work of Dehghani et al. [15] on dynamic halting in transformers. Another interesting question is whether GNNs generalize to larger graphs. Xu et al. [23, 24] have studied the notion of _algorithmic alignment_ to quantify such structural generalization. Dong et al. [5] provided empirical results showing that NLMs generalize to much larger graphs on certain tasks. Buffelli et al. [28] introduced a regularization technique to improve GNNs' generalization to larger graphs and demonstrated its effectiveness empirically. In Xu et al. [23], they analyzed and compared the sample complexity of Graph Neural Networks. This is different from our notion of expressiveness for realizing functions. In Xu et al.
[24], they showed empirically on some problems (e.g., MaxDegree, Shortest Path, and the n-body problem) that algorithmic alignment helps GNNs to extrapolate, and theoretically proved the improvement by algorithmic alignment on the MaxDegree problem. In this extended abstract, instead of focusing on computing specific graph problems, we analyzed how GNNs can extrapolate to larger graphs in the general case, based on the assumption of fixed-precision computation. ## 6 Conclusion In this extended abstract, we have shown the substantial increase of expressive power due to higher-arity relations and increasing depth, and have characterized very powerful structural generalization from training on small graphs to performance on larger ones. All theoretical results are further supported by empirical results, discussed in Appendix C. Although many questions remain open about the overall generalization capacity of these models in continuous and noisy domains, we believe this work has shed some light on their utility and potential for application in a variety of problems. **Acknowledgement.** We thank anonymous reviewers for their comments. This work is in part supported by ONR MURI N00014-16-1-2007, the Center for Brain, Minds, and Machines (CBMM, funded by NSF STC award CCF-1231216), NSF grant 2214177, AFOSR grant FA9550-22-1-0249, ONR grant N00014-18-1-2847, the MIT Quest for Intelligence, MIT-IBM Watson Lab. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.
2310.03873
Neuromorphic Robust Framework for Concurrent Estimation and Control in Dynamical Systems using Spiking Neural Networks
Concurrent estimation and control of robotic systems remains an ongoing challenge, where controllers rely on data extracted from states/parameters riddled with uncertainties and noises. Framework suitability hinges on task complexity and computational constraints, demanding a balance between computational efficiency and mission-critical accuracy. This study leverages recent advancements in neuromorphic computing, particularly spiking neural networks (SNNs), for estimation and control applications. Our presented framework employs a recurrent network of leaky integrate-and-fire (LIF) neurons, mimicking a linear quadratic regulator (LQR) through a robust filtering strategy, a modified sliding innovation filter (MSIF). Benefiting from both the robustness of MSIF and the computational efficiency of SNN, our framework customizes SNN weight matrices to match the desired system model without requiring training. Additionally, the network employs a biologically plausible firing rule similar to predictive coding. In the presence of uncertainties, we compare the SNN-LQR-MSIF with non-spiking LQR-MSIF and the optimal linear quadratic Gaussian (LQG) strategy. Evaluation across a workbench linear problem and a satellite rendezvous maneuver, implementing the Clohessy-Wiltshire (CW) model in space robotics, demonstrates that the SNN-LQR-MSIF achieves acceptable performance in computational efficiency, robustness, and accuracy. This positions it as a promising solution for addressing concurrent estimation and control challenges in dynamic systems.
Reza Ahmadvand, Sarah Safura Sharif, Yaser Mike Banad
2023-10-05T20:05:47Z
http://arxiv.org/abs/2310.03873v1
Neuromorphic Robust Framework for Concurrent Estimation and Control in Dynamical Systems using Spiking Neural Networks ###### Abstract Concurrent estimation and control of robotic systems remains an ongoing challenge, where controllers rely on data extracted from states/parameters riddled with uncertainties and noises. Framework suitability hinges on task complexity and computational constraints, demanding a balance between computational efficiency and mission-critical accuracy. This study leverages recent advancements in neuromorphic computing, particularly spiking neural networks (SNNs), for estimation and control applications. Our presented framework employs a recurrent network of leaky integrate-and-fire (LIF) neurons, mimicking a linear quadratic regulator (LQR) through a robust filtering strategy--modified sliding innovation filter (MSIF). Benefiting from both the robustness of MSIF and the computational efficiency of SNN, our framework customizes SNN weight matrices to match the desired system model without requiring training. Additionally, the network employs a biologically plausible firing rule similar to predictive coding. In the presence of uncertainties, we compare the SNN-LQR-MSIF with non-spiking LQR-MSIF and the optimal linear quadratic Gaussian (LQG) strategy. Evaluation across a workbench linear problem and a satellite rendezvous maneuver, implementing the Clohessy-Wiltshire (CW) model in space robotics, demonstrates that the SNN-LQR-MSIF achieves acceptable performance in computational efficiency, robustness, and accuracy. This positions it as a promising solution for addressing concurrent estimation and control challenges in dynamic systems. Neuromorphic computing, Spiking neural network, Modified sliding innovation filter, Linear quadratic Gaussian, Satellite rendezvous maneuver. ## I Introduction As the design and implementation of robotic manipulators/systems undertaking diverse real-world tasks grow more ambitious, the importance of computational efficiency, reliability, and accuracy escalates. Currently, all the implemented controllers rely heavily on the provision of accurate information about the system states/parameters obtained through various types of sensors, a task that often proves elusive due to the multifaceted uncertainties inherent to robotic systems. These uncertainties encompass environmental instability, unmodeled dynamics, and sensor noises, all of which can lead to data degradation, ultimately impacting controller performance. Furthermore, in some scenarios, obtaining complete measurements of all the states and parameters that describe the dynamics remains an impractical endeavor. Consequently, the ability to perform estimation simultaneously with control operations is paramount for ensuring the safe and accurate manipulation of robotic systems [1, 2]. In light of the constraints imposed by computing resources and energy consumption, the development of concurrent estimation and control frameworks that excel in computational efficiency, robustness, and accuracy becomes an imperative endeavor. The linear quadratic Gaussian (LQG) which is a popular and optimal framework for simultaneous estimation and control of linear dynamical systems, has found widespread adoption across various domains such as robotic manipulators [3], robot control [4], robot path planning [5], and satellite control [6]. However, the LQG framework is not without its limitations. 
The LQG framework is a linear quadratic regulator (LQR) that works based on the state feedback provided by the Kalman filter (KF) [7]. When confronted with uncertain dynamic models, its performance diminishes, and in the presence of external disturbances, it is not robust enough [8]. In such circumstances, the KF employed in conjunction with LQR control falls short of providing accurate information about system states/parameters. Consequently, the demonstrated limitations of the LQG underscore the pressing need for the development of a framework grounded in robust estimation principles. In this study, we introduce a novel framework, LQR-MSIF, which combines the LQR controller with a recently introduced robust filtering strategy known as the modified sliding innovation filter (MSIF) [9, 10]. The LQR-MSIF leverages the robustness of the MSIF in processing signals obtained from measurement systems. The MSIF represents an evolution of the sliding innovation filter (SIF), which belongs to the family of variable structure filters (VSF) [11], and it can also be considered a new generation of the smooth variable structure filter (SVSF) [12]. Importantly, unlike the KF family, which prioritizes frameworks founded on minimal estimation error, the VSF family of algorithms has been developed based on guaranteed stability in the presence of bounded modeling uncertainties and external disturbances [13]. Additionally, considering the recent advancements in neuromorphic computing tools, including spiking neural networks (SNN), and their applications in robotics control and estimation [10, 14], as well as the spike coding theories [15], we present a pioneering approach. In this study, to introduce a framework that comprehensively addresses the aforementioned limitations, we translate the LQR-MSIF into a neuromorphic SNN-based framework, in which the firing rule is derived from the network's prediction error on the estimated state vector, a manifestation of predictive coding [15]. This theory posits that the brain perpetually constructs and enhances a 'mental model' of its surrounding environment, serving the critical function of anticipating sensory input signals, which are subsequently compared with the actual sensory inputs received. As the concept of representation learning gains increasing prominence, predictive coding theory has found vibrant application and exploration within the realms of biologically inspired neural networks, such as SNNs. The adoption of SNNs mitigates the computational efficiency challenges associated with this problem [16]. Owing to their minimal computational burden and inherent scalability, SNNs offer significant advantages over traditional non-spiking computing methods [17]. SNNs represent the third generation of neural networks, taking inspiration from the human brain, where neurons communicate using electrical pulses called spikes. SNNs leverage neural circuits composed of neurons and synapses, communicating via encoded data through spikes in an asynchronous fashion [17, 18, 19, 20, 21]. This asynchronous spiking fashion, characterized by event-driven processing [10], stands in contrast to traditional Artificial Neural Networks (ANNs), which operate synchronously or, in other words, are time-driven. Studies [22] demonstrate that, for equivalent tasks, SNNs are 6 to 8 times more energy efficient than ANNs with an acceptable trade-off in accuracy [23].
Moreover, the inherent scalability of SNNs enhances their reliability, particularly under the condition of neuron silencing, where neuron loss is compensated for by an increase in the spiking rate of remaining neurons [18]. Thus, to harness the advantages of SNNs for simultaneous robust estimation and control, we integrate the methods proposed in prior studies [10] and [14] to develop the previously mentioned SNN-LQR-MSIF framework, anticipating substantial advantages. Subsequently, we assess the performance of the proposed SNN-LQR-MSIF framework through a series of evaluations. Initially, we apply it to a linear workbench problem, followed by its application to the intricate task of satellite rendezvous in circular orbit, a critical maneuver in space robotic applications such as on-orbit servicing and refueling [24]. We then compare the SNN-LQR-MSIF with its non-spiking counterpart, LQR-MSIF, and the standard LQG under various sources of uncertainty, including modeling uncertainty, measurement outliers, and neuron silencing. For the proposed framework, our findings revealed acceptable performance in terms of accuracy and robustness, while it outperforms the traditional frameworks in terms of computational efficiency. This paper is organized as follows. Section 2 provides an overview of related works and contributions. Then, the preliminaries, underlying theories, and the proposed framework for addressing the problem of concurrent robust estimation and control in linear dynamical systems are presented in Section 3. Next, Section 4 provides numerical simulations and discussions of the results, while Section 5 serves as the conclusion of the paper. ## II Related Works and Contributions In this section, an overview of recent related works and our contributions is presented separately. ### _Related works_ This section offers a concise overview of recent works related to the problem of concurrent estimation and control. In [14], Yamazaki _et al._ proposed an SNN-based framework for concurrent estimation and control, employing a combination of the Luenberger observer and LQR controller. They applied their method to scenarios involving a spring-mass-damper (SMD) system and a Cartpole system, evaluating its performance in terms of accuracy and similarity to its non-spiking counterpart. They also explored the robustness of their network in handling neuron silencing. While their results were promising, their framework had limitations, notably the need to design both controller and observer gains for each problem. Additionally, since they used the Luenberger observer, their framework inherited the observer limitations related to modeling uncertainties and external disturbances, which were not thoroughly assessed for robustness. To address these limitations, a novel SNN-based KF was proposed in [10] for optimal estimation of linear dynamical systems. In addition to performing the optimal estimation, this approach eliminated the need for observer gain design, simplifying the process. To enhance robustness against modeling uncertainties and external disturbances, a robust SNN-based estimation framework based on MSIF was introduced. Comparative assessments involving traditional KF and MSIF demonstrated acceptable performance for the SNN-based frameworks in terms of similarity to non-spiking strategies, robustness, and accuracy. However, the previous study did not investigate concurrent estimation and control scenarios, which is the primary focus of this research.
Additionally, none of the aforementioned methods utilized biologically inspired firing rules for their network. ### _Contributions_ The contributions of our research are as follows: * **Development of SNN-LQR-MSIF:** We introduce a robust SNN-based framework for concurrent estimation and control of linear dynamical systems, named SNN-LQR-MSIF. This framework leverages the methods previously proposed in [10] and [14]. * **Biologically Plausible Firing Rule:** In order to have control over the spike distribution in the network and prevent excessive spiking by a part of the network or a single neuron, we implement a biologically plausible firing rule based on the concept of predictive coding [15], enhancing the biological relevance of our network. * **Robustness and Accuracy Assessment:** We comprehensively investigate the performance of our method in scenarios subjected to modeling uncertainties, measurement outliers, and neuron silencing, evaluating robustness and accuracy compared to its non-spiking counterpart LQR-MSIF and the traditional LQG. We also analyze spiking patterns to demonstrate computational efficiency. * **Application to Satellite Rendezvous:** We apply the SNN-LQR-MSIF to a real-world scenario involving concurrent estimation and control of satellite rendezvous, a novel application for this type of neuromorphic framework. We compare its performance with that of LQR-MSIF and LQG. ## III Theory In this section, we provide essential preliminaries, followed by an outline of the study's outcomes. The linear dynamical system and measurement package considered in this study are defined by the following equations: \[\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}+\mathbf{w} \tag{1}\] \[\mathbf{z}=C\mathbf{x}+\mathbf{d} \tag{2}\] Here, \(\mathbf{x}\in R^{n_{\mathbf{x}}}\) refers to the state vector, \(\mathbf{u}\in R^{n_{\mathbf{u}}}\) is the input vector, and \(\mathbf{z}\in R^{n_{\mathbf{z}}}\) is the measurement vector. \(A\in R^{n_{\mathbf{x}}\times n_{\mathbf{x}}}\) and \(B\in R^{n_{\mathbf{x}}\times n_{\mathbf{u}}}\) denote the dynamic transition and input matrices, respectively, while \(C\in R^{n_{\mathbf{z}}\times n_{\mathbf{x}}}\) is the measurement matrix. \(\mathbf{w}\) and \(\mathbf{d}\) represent zero-mean Gaussian white noise with covariance matrices \(Q\) and \(R\), respectively. Figure 1 depicts the traditional block diagram of a concurrent estimation and control loop in conventional dynamical systems. This diagram reveals that both the estimator and controller employ sequential algorithms, resembling the logic of traditional von Neumann computer architectures. ### _Spiking neural networks (SNN)_ In this section, we present a brief overview of implementing an SNN, including its firing rule. To design a network composed of recurrent leaky integrate-and-fire (LIF) neurons capable of approximating the temporal variation of a parameter like \(\mathbf{x}\) as expressed in Eq. (1), we need to implement the following equation [14]: \[\dot{\mathbf{\sigma}}=-\lambda\mathbf{\sigma}+D^{T}(\dot{\mathbf{x}}+\lambda\mathbf{x})-D^{T}D\mathbf{s} \tag{3}\] Here, \(\mathbf{\sigma}\in R^{N}\) refers to the neuron membrane potential vector, \(\lambda\) is a decay or leak term considered on the membrane potential of the neurons, \(D\in R^{n_{\mathbf{x}}\times N}\) is the random fixed decoding matrix containing the neurons' output kernels, and \(\mathbf{s}\in R^{N}\) is the emitted spike population of the neurons in each time step.
Further, according to spike coding network theories [14, 15], the introduced network of LIF neurons can reproduce the temporal variation of \(\mathbf{x}\) under two assumptions. First, we should be able to estimate \(\mathbf{x}\) from neural activity using the following rule: \[\mathbf{\hat{x}}=D\mathbf{r} \tag{4}\] Here, \(\mathbf{r}\in R^{N}\) represents the filtered spike trains, which have slower dynamics compared to \(\mathbf{s}\in R^{N}\). The dynamics of the filtered spike trains are provided by: \[\dot{\mathbf{r}}=-\lambda\mathbf{r}+\mathbf{s} \tag{5}\] The second assumption is that the network minimizes the cumulative error between the true value of \(\mathbf{x}\) and the estimated \(\mathbf{\hat{x}}\) by optimizing the spike times, not by changing the output kernel values \(D\). So, the network minimizes the cumulative error between the state and its estimate while limiting computational cost by controlling spike occurrence. To achieve this, it minimizes the following cost function [15]: \[E(t)=\|\mathbf{x}(t)-\mathbf{\hat{x}}(t)\|_{2}^{2}+\mu\|\mathbf{r}(t)\|_{2}^{2}+\nu\|\mathbf{r}(t)\|_{1} \tag{6}\] where \(\mu\) and \(\nu\) are tuning parameters that penalize the overall neural activity and the number of emitted spikes, respectively. A neuron fires only when doing so decreases the cost in Eq. (6), which yields the firing rule: \[\sigma_{i}>T_{i}=\frac{\nu\lambda+\mu\lambda^{2}+\|D_{i}\|^{2}}{2} \tag{7}\] where \(D_{i}\) denotes the \(i\)-th column of \(D\). ### _SNN-based MSIF (SNN-MSIF)_ The SNN-MSIF combines the computational efficiency of SNNs with the robustness of the MSIF. The equations governing SNN-MSIF are as follows [10]: \[\dot{\mathbf{\sigma}}=-\lambda\mathbf{\sigma}+F\mathbf{u}(t)+\varOmega_{s}\mathbf{r}+\varOmega_{f}\mathbf{s}+\varOmega_{k}\mathbf{r}+F_{k}\mathbf{z}+\mathbf{\eta} \tag{8}\] where: \[F=D^{T}B \tag{9}\] \[\varOmega_{s}=D^{T}(A+\lambda I)D \tag{10}\] \[\varOmega_{f}=-(D^{T}D+\mu\lambda^{2}I) \tag{11}\] Here, \(\lambda\) represents the leak rate for the membrane potential, and \(F\) encodes the control input to a set of spikes that is readable for the network. \(\varOmega_{s}\) and \(\varOmega_{f}\) are synaptic weights for slow and fast connections, respectively. While slow connections typically govern the implementation of the desired system dynamics, in this context, they are chiefly responsible for executing the linear dynamics of the MSIF estimator. Conversely, fast connections play a pivotal role in achieving an even distribution of the spikes across the network. Consequently, the primary contributors to the _a-priori_ prediction phase of the estimation process are the second through fourth terms in Eq. (8). In contrast, the subsequent two terms, which are influenced by \(\varOmega_{k}\) and \(F_{k}\), adapt dynamically during the estimation process and are tasked with handling the measurement-update or _a-posteriori_ phase of the estimation. Here, \(\varOmega_{k}\) imparts the dynamics of the update component, while \(F_{k}\) furnishes the SNN with an encoded measurement vector. To update these weight matrices, the following expressions need to be used: \[\varOmega_{k}=-D^{T}(C^{+}sat(diag(P^{xx})/\delta))CD \tag{12}\] \[F_{k}=D^{T}(C^{+}sat(diag(P^{xx})/\delta)) \tag{13}\] Here, \(P^{xx}\) represents the innovation covariance matrix, and \(\delta\) is the sliding boundary layer, a tuning parameter. To update \(P^{xx}\), the following equations are used: \[P^{xx}=CPC^{T}+R \tag{14}\] \[\dot{P}=AP+PA^{T}+Q-PC^{T}R^{-1}CP \tag{15}\] The final term \(\mathbf{\eta}\) accounts for zero-mean Gaussian noise, simulating the stochastic nature of the neural activity in biological neural circuits.
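To ground Eqs. (3)-(7), the following is a minimal numpy sketch of a spike-coding network of LIF neurons tracking a known 2-D reference signal. All sizes, decay rates, and penalty weights are illustrative toy values, and restricting the network to at most one spike per step is a common simulation simplification rather than part of the framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy sizes and parameters (illustrative, not the paper's values)
N, nx, dt, T = 50, 2, 1e-3, 2.0
lam, mu, nu = 10.0, 1e-6, 1e-5
D = rng.normal(scale=0.1, size=(nx, N))                  # random fixed decoder
thresh = (nu * lam + mu * lam**2 + np.sum(D**2, axis=0)) / 2.0  # Eq. (7)

sigma = np.zeros(N)      # membrane potentials
r = np.zeros(N)          # filtered spike trains
for k in range(int(T / dt)):
    t = k * dt
    x = np.array([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    xdot = 2 * np.pi * np.array([np.cos(2 * np.pi * t), -np.sin(2 * np.pi * t)])
    # firing rule: spike when potential exceeds threshold (one spike per step)
    s = np.zeros(N)
    over = sigma - thresh
    if over.max() > 0:
        s[np.argmax(over)] = 1.0
    # membrane dynamics, Eq. (3); spikes act as impulses through D^T D
    sigma += dt * (-lam * sigma + D.T @ (xdot + lam * x)) - D.T @ (D @ s)
    r += dt * (-lam * r) + s                             # Eq. (5)
xhat = D @ r                                             # decoder, Eq. (4)
print("reference:", x, "decoded estimate:", xhat)        # should roughly match
```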
The weight matrices are analytically designed to capture MSIF dynamics, allowing the estimation of a fully observable linear dynamical system with partially noisy state measurements via a network of recurrent LIF neurons. Utilizing the framework presented in this section for estimation, concurrently with conventional control methods, results in the system depicted in Fig. 2. The figure illustrates how the conventional non-spiking estimator in Fig. 1 has been replaced by an SNN designed to function as an estimator. Instead of employing sequential estimation algorithms, this SNN-based approach capitalizes on the advantages of SNNs, including computational efficiency, highly parallel computing, and scalability. However, as shown in Fig. 2, estimation and control tasks are still conducted sequentially. ### _SNN-based concurrent estimation and control_ This section extends SNN-MSIF to a network capable of concurrently performing state estimation and control of linear dynamical systems. As introduced in [10], for the derivation of the SNN-MSIF, which implements the linear dynamics of an estimator, the SNN should be able to mimic the following dynamics: \[\dot{\mathbf{\hat{x}}}=A\mathbf{\hat{x}}+B\mathbf{u}+K_{KF}(\mathbf{z}-\mathbf{\hat{z}}) \tag{16}\] Here, to go further and add control to the above-mentioned dynamics, \(\mathbf{u}=-K_{c}(\mathbf{\hat{x}}-\mathbf{x}^{D})\) is considered as the control input, so the network should emulate the following linear system of equations: \[\dot{\mathbf{\hat{x}}}=A\mathbf{\hat{x}}-BK_{c}(\mathbf{\hat{x}}-\mathbf{x}^{D})+K_{KF}(\mathbf{z}-\mathbf{\hat{z}}) \tag{17}\] where \(\mathbf{x}^{D}\) denotes the desired state. To extend the previously introduced network, the control rule \(\mathbf{u}\) is substituted into Eq. (8), resulting in the following network equation: \[\dot{\mathbf{\sigma}}=-\lambda\mathbf{\sigma}-FK_{c}(\mathbf{\hat{x}}-\mathbf{x}^{D})+\varOmega_{s}\mathbf{r}+\varOmega_{f}\mathbf{s}+\varOmega_{k}\mathbf{r}+F_{k}\mathbf{z}+\mathbf{\eta} \tag{18}\] The desired state is encoded by a second population of neurons through the decoding matrix \(\bar{D}\): \[\mathbf{x}^{D}=\bar{D}\bar{\mathbf{r}} \tag{19}\] \[\dot{\bar{\mathbf{r}}}=-\lambda\bar{\mathbf{r}}+\bar{\mathbf{s}} \tag{20}\] Substituting Eq. (4) and Eq. (19) into Eq. (18) then yields: \[\dot{\mathbf{\sigma}}=-\lambda\mathbf{\sigma}+\varOmega_{c}\mathbf{r}+\bar{\varOmega}\bar{\mathbf{r}}+\varOmega_{s}\mathbf{r}+\varOmega_{f}\mathbf{s}+\varOmega_{k}\mathbf{r}+F_{k}\mathbf{z}+\mathbf{\eta} \tag{21}\] where: \[\varOmega_{c}=-D^{T}BK_{c}D \tag{22}\] \[\bar{\varOmega}=D^{T}BK_{c}\bar{D} \tag{23}\] \[\bar{\varOmega}_{f}=-(\bar{D}^{T}\bar{D}+\mu\lambda^{2}I) \tag{24}\] Here, \(\varOmega_{c}\) represents the slow connections for implementing the control input of the desired system. \(\bar{\varOmega}\) and \(\bar{\varOmega}_{f}\) represent the slow and fast synaptic weights for the corresponding connections, respectively. In parallel with the other connections, these weights are responsible for implementing the dynamics of the desired state for the controller, and Eq. (21) represents the membrane potential dynamics of a recurrent SNN of LIF neurons capable of concurrently performing state estimation and control of linear dynamical systems. While the controller gain \(K_{c}\) must be designed for the considered system, this framework operates without requiring any learning by the network. Furthermore, although we implemented optimal LQR control in this study, the controller gain can be independently designed using any arbitrary approach. Finally, to extract the control input vector for the external plant from the spike populations, the following equation is employed: \[\mathbf{u}=D_{u}\mathbf{r} \tag{25}\] where: \[D_{u}=-K_{c}(D-\bar{D}) \tag{26}\] The above matrix can be used for decoding the control input from the neural activity inside the network. In summary, the proposed framework concurrently estimates the state vector \(\mathbf{x}\) from a noisy partial measurement vector \(\mathbf{z}\) and provides control input for the considered system. Fig.
3 illustrates the block diagram of the framework presented in this section. Fig. 3 demonstrates that for this framework, both the estimator and controller blocks from Fig. 1 and Fig. 2 have been replaced by a single SNN. This represents an extension of the framework, leveraging the advantages of SNNs. Furthermore, the computations required for state estimation and control input have been parallelized. Consequently, implementing this framework can significantly reduce computational costs, allowing more complex tasks to be performed even with limited computing resources. Additionally, owing to the scalability of SNNs, if a part of the implemented network becomes damaged or loses some neurons, the process continues by increasing the spiking rate of the remaining neurons, as demonstrated in the next section. Fig. 3: Block diagram of the SNN-based concurrent estimation and control loop. ## IV Numerical Simulations In this section, we first apply the proposed framework to a linear workbench problem and conduct various performance evaluations in terms of robustness, accuracy, and computational efficiency, in comparison with the well-established methods LQG and LQR-MSIF. Subsequently, we extend the analysis of the SNN-LQR-MSIF to a practical scenario involving the concurrent estimation and control of satellite rendezvous maneuvers. ### _Case study 1: Linear workbench problem_ Here, we initiate our investigation by applying the introduced framework to the following linear dynamical system: \[\dot{\mathbf{x}}=\begin{bmatrix}0&1\\ 0&0\end{bmatrix}\mathbf{x}+\begin{bmatrix}0\\ 1\end{bmatrix}\mathbf{u}+\mathbf{w} \tag{27}\] \[\mathbf{z}=\begin{bmatrix}1&0\end{bmatrix}\mathbf{x}+\mathbf{v} \tag{28}\] where: \[\mathbf{u}=-K_{c}\mathbf{x} \tag{29}\] Simulations have been performed over a 10-second period with a time step of 0.01, employing the numerical values provided in TABLE I. \begin{table} \begin{tabular}{l l} \hline \hline Parameter & Value \\ \hline \(x_{0}\) & [10,1] \\ \(\hat{x}_{0}\) & [10,1] \\ \(K_{c}\) & [1, 1.7321] \\ \(Q_{c}\) & \(I\) \\ \(R_{c}\) & \(I\) \\ \(Q\) & \(I/1000\) \\ \(R\) & \(I/100\) \\ \(N\) & \(250\) \\ \(\lambda\) & \(0.01\) \\ \(\mu\) & \(0.005\) \\ \(\nu\) & \(0.005\) \\ \(\delta_{MSIF}\) & \(0.005\) \\ \hline \hline \end{tabular} \end{table} TABLE I: LINEAR SYSTEM SIMULATION PARAMETERS Initially, we evaluated the applicability of the proposed framework in comparison with its non-spiking counterparts, LQG and LQR-MSIF, by simulating a deterministic system without uncertainties. Next, we assessed the performance and effectiveness of the proposed framework by introducing various sources of uncertainties and disturbances. In line with real-world scenarios, where exact decoding matrices are typically unknown, we defined the decoding matrices \(D\) and \(\bar{D}\) using random samples from zero-mean Gaussian distributions with covariances of 0.25 and 1/300, respectively. Fig. 4 displays time histories of controlled states and estimation errors within \(\pm 3\sigma\) bounds obtained from SNN-LQR-MSIF in comparison with LQG and LQR-MSIF. Fig. 4(a) illustrates that the state \(x_{1}\) converges to zero after \(t=5\)s, showcasing similar performance between the proposed framework and its non-spiking counterparts, LQG and LQR-MSIF. Fig. 4(b) indicates that the state \(x_{2}\) converges to zero around \(t=6\)s, again showing consistent performance between the proposed framework and non-spiking methods. Fig.
4(c) demonstrates that all considered strategies remain stable, with errors staying within the prescribed bounds. Notably, the error obtained from KF deviates further from zero before converging around \(t=3\)s, while the errors from SNN-MSIF and MSIF exhibit faster convergence with smaller deviations. Fig. 4(d) confirms the stability of all estimation methods, with SNN-LQR-MSIF showing nearly identical performance to non-spiking KF and MSIF. Further, to gain more intuitive insights into the tuning parameters of the firing rule, namely \(\mu\) and \(\nu\), and their impact on control accuracy, we conducted a sensitivity analysis. As depicted in Fig. 5, which utilizes a colored map to show the variation of the normalized average error, this analysis reveals that parameter tuning directly affects control accuracy, and proper parameter sets can be identified by trial and error for a specific system. The preferred parameter set used throughout our simulations is \(\mu=0.005\) and \(\nu=0.005\), marked with a white circle in the figure. The percentage of spikes emitted by the neurons relative to all possible spikes is also shown in the figure by a number for each set of \(\mu\) and \(\nu\). It can be observed that decreasing \(\nu\) leads to a higher percentage of spikes for each \(\mu\). This highlights a trade-off between accuracy and computational efficiency that can be an important factor in the tuning procedure of the network firing rule, and it confirms the previously mentioned role of \(\nu\) in controlling the number of spikes. Furthermore, we evaluated the robustness of SNN-LQR-MSIF against modeling uncertainties by introducing a 20% error in the dynamic transition matrix, \(\tilde{A}=0.8A\). Simulation results in the presence of modeling uncertainty were compared with LQG and LQR-MSIF, as presented in Fig. 6. Fig. 6(a) shows that in the presence of uncertainty, the SNN-based framework for the state \(x_{1}\) deviates from non-spiking LQG and LQR-MSIF. However, SNN-LQR-MSIF exhibits superior performance, converging to zero at approximately \(t=4\)s and completely converging by \(t=6\)s. In contrast, non-spiking frameworks yield matching results converging to zero at \(t=7\)s. Fig. 6(b) demonstrates that state \(x_{2}\) exhibits a similar deviation from non-spiking methods, particularly with a slightly greater overshoot and error until \(t=4\)s. However, after \(t=4\)s, SNN-LQR-MSIF displays faster convergence, a minor overshoot, and eventual convergence to zero after \(t=8\)s. In summary, these findings indicate that the proposed SNN-based framework exhibits commendable robustness in handling modeling uncertainties or external disturbances compared to non-spiking methods. Fig. 6(c) illustrates the results for the state \(x_{1}\), showcasing performance of SNN-LQR-MSIF comparable to that of LQR-MSIF. Initially, both methods exhibit an error trend that diverges over time, exceeding the bound around \(t=1.5\)s but returning within the bound by \(t=4\)s. Eventually, both methods achieve stable estimation, converging to zero around \(t=6\)s and \(t=8\)s for SNN-MSIF and MSIF, respectively. Meanwhile, the error from KF deviates entirely, returning to the bound only at about \(t=8\)s and finally converging to zero at \(t=10\)s. Notably, at \(t=6\)s, KF exhibits an error approximately 20 times greater than that of the proposed SNN-LQR-MSIF, which is almost zero. In Fig.
6(d), the results for the state \(x_{2}\) show nearly identical performance between SNN-MSIF and MSIF, both maintaining stability in their estimations throughout the considered period. Conversely, the error from KF deviates similarly to what occurred with the state \(x_{1}\). The obtained error for KF exceeds the bound and rises continually until almost \(t=2.5\)s, where it reaches a maximum that is about 102 times greater than the errors obtained for MSIF and SNN-MSIF, which are both approximately near zero. Figure 4: Controlled states and estimation errors within \(\pm 3\sigma\) bounds: (a) controlled state \(x_{1}\), (b) controlled state \(x_{2}\), (c) estimation error of \(x_{1}\), (d) estimation error of \(x_{2}\). Figure 5: Colored map analysis of the normalized average error obtained from various sets of \(\mu\) and \(\nu\); the number of emitted spikes, in percent of all possible spikes, is presented for each set of \(\mu\) and \(\nu\). Hence, it is evident that SNN-MSIF outperforms MSIF by faster convergence to zero in the presence of uncertainty, and it outperforms KF in terms of estimation stability. An important challenge in robust navigation and control systems is handling measurement outliers, which can arise from sensor faults or external disturbances in the working environment. Therefore, to assess the framework's robustness in such scenarios, unmodeled measurement outliers were introduced into the system at \(t=3\)s, \(t=5\)s, and \(t=6\)s. To simulate the presence of measurement outliers, the measurement system noise was multiplied by a factor of 500 at these time points. Fig. 7 presents a comparison of results for controlled states and estimation errors within \(\pm 3\sigma\) bounds obtained from the various frameworks in the presence of measurement outliers. Fig. 7(a) displays the time history of the state \(x_{1}\). It demonstrates that the presence of measurement outliers causes slight deviations in the results obtained from the SNN-based framework between \(t=3\)s and \(t=7\)s. However, the framework successfully regulates the error, ultimately converging to results obtained from non-spiking methods. Fig. 7(b) demonstrates the same behavior for the state \(x_{2}\). Results from the SNN-based framework show minor deviations compared to non-spiking methods between \(t=3\)s and \(t=7\)s, indicating that, although more sensitive to measurement outliers, the SNN-based methods continue to control the states effectively. Fig. 7(c) presents the obtained errors for the state \(x_{1}\), which exhibit significant deviations at the points of outlier injection. However, for all considered filters, these deviations are followed by rapid convergence to zero, confirming the filters' stability. Moreover, the error from SNN-MSIF is considerably smaller, especially compared to KF, which exceeds the bound at all points. In Fig. 7(d), we investigate the error for the state \(x_{2}\), which reveals that KF experiences abrupt deviations and its error exceeds the bound at the points of outlier injection, whereas SNN-MSIF and MSIF remain stable throughout the simulation. Thus, SNN-MSIF exhibits superior robustness in such situations. Fig. 8 illustrates the spiking pattern of the network achieved by the SNN-LQR-MSIF approach when confronted with measurement outliers. In Fig. 8(a), we present the spiking pattern recorded in the presence of measurement outliers.
It is evident that just before the points of outlier injection (at time steps 300, 400, and 600), most neurons are in standby mode, emitting few spikes. However, after the introduction of outliers, a substantial portion of neurons (around 40%) become activated to handle the injected disturbances, which are rejected within just 2-3 time steps. The neural activity then decreases, demonstrating that the network effectively overcomes external disturbances or unmodeled dynamics by increasing neural activity, and hence computational cost, without failing in the assigned task. Moreover, Fig. 8(b) reveals the temporal variation of active neurons in percent, emphasizing the sudden change in the population of active neurons at the designated time steps. The population rises to nearly 40% to overcome the negative impacts of the injected outliers on the system. Fig. 8: Spiking pattern and temporal variation of the active neuron population obtained from SNN-LQR-MSIF: (a) spiking pattern, (b) temporal variation of active neurons. Fig. 6: Controlled states and estimation errors within \(\pm 3\sigma\) bounds for the uncertain model \(\tilde{A}=0.8A\): (a) controlled state \(x_{1}\), (b) controlled state \(x_{2}\), (c) estimation error of \(x_{1}\), (d) estimation error of \(x_{2}\). Fig. 7: Controlled states and estimation errors within \(\pm 3\sigma\) bounds for measurement outliers: (a) controlled state \(x_{1}\), (b) controlled state \(x_{2}\), (c) estimation error of \(x_{1}\), (d) estimation error of \(x_{2}\). Finally, to assess the proposed framework's performance in situations where some neurons may become silent, several simulations were conducted with varying numbers of neurons, ranging from \(N=50\) to \(N=400\) in steps of 50 neurons. Fig. 9 presents the average overall network error in the controlled states after \(t=6\)s (where the errors have almost converged to zero) versus the number of neurons. In region 1, a significant error divergence to infinity is observed (the solid line showing the error variation becomes almost vertical at the edge of region 1), while the error drops abruptly at \(N=100\). This corresponds to the minimum number of neurons that the proposed framework requires to function effectively. Below this threshold, active neurons cannot provide sufficient neural activity to perform the necessary computations. An increase in the number of neurons within region 2 results in a gentle reduction in error. The minimum error can be observed at the optimal number of neurons, \(N=250\). In contrast, region 3 shows that an increase in the number of neurons degrades accuracy due to unstable spiking patterns with excessive neural activity. Overall, the proposed framework exhibits remarkable robustness in handling measurement outliers and effectively adapts to situations with varying numbers of neurons, provided a minimum neuron threshold is maintained. These findings support the framework's suitability for robust navigation and control systems in real-world scenarios. Further studies on spiking patterns are provided in [10]. ### _Case study 2: Satellite rendezvous maneuver_ This section is initiated by the presentation of the mathematical model for the satellite rendezvous maneuver. Subsequently, the design of the LQR controller is expounded upon. Lastly, the simulation results are provided. The rendezvous problem involves maneuvering two distinct satellites, the chaser and the target. As depicted in Fig. 10, the chaser satellite approaches the target in orbit.
To derive the equations of relative motion, we consider the following equation in the Earth-centered inertial frame (ECI) [25]: \[\mathbf{s}=\mathbf{r}_{c}-\mathbf{r}_{t} \tag{30}\] Here, \(\mathbf{r}_{c}\) and \(\mathbf{r}_{t}\) represent the position vectors of the chaser and target, respectively. The relative acceleration is described by the following expression: \[\ddot{\mathbf{s}}=\ddot{\mathbf{r}}_{c}-\ddot{\mathbf{r}}_{t} \tag{31}\] Meanwhile, considering the circular orbit, the gravitational force in ECI is expressed as: \[f_{g}(\mathbf{r})=-\mu_{earth}\frac{m}{r^{3}}\mathbf{r} \tag{32}\] Here, \(\mu_{earth}\) signifies the Earth's gravitational parameter, \(m\) denotes the spacecraft mass, and \(\mathbf{r}\) and \(r\) represent the spacecraft position vector and its magnitude, respectively. Importantly, the absolute motion of both the chaser and target in the ECI frame can be separately formulated as follows: \[f_{g}(\mathbf{r}_{t})=\ddot{\mathbf{r}}_{t}=-\frac{\mu_{earth}}{r_{t}^{3}}\mathbf{r}_{t} \tag{33}\] \[f_{g}(\mathbf{r}_{c})=\ddot{\mathbf{r}}_{c}=-\frac{\mu_{earth}}{r_{c}^{3}}\mathbf{r}_{c} \tag{34}\] The above equations represent normalized forms of Eq. (32), divided by the spacecraft mass. To formulate suitable equations for controller design, it is advantageous to represent relative motion in the target frame, a non-inertial reference frame rotating with the angular velocity \(\mathbf{\omega}\): \[\frac{d^{*2}\mathbf{s}^{*}}{dt^{2}}+\mathbf{\omega}\times(\mathbf{\omega}\times\mathbf{s}^{*})+2\mathbf{\omega}\times\frac{d^{*}\mathbf{s}^{*}}{dt}+\frac{d\mathbf{\omega}}{dt}\times\mathbf{s}^{*}+\frac{\mu_{earth}}{r^{3}}M\mathbf{s}^{*}=\mathbf{f} \tag{35}\] Here, \(\mathbf{s}^{*}\) denotes the relative distance, \(M\) and \(\mathbf{f}\) refer to Earth's mass and external forces, respectively, and the asterisk (*) denotes parameters in the target frame. Fig. 9: Averaged network error versus the number of neurons (because of the huge divergence of the error in region 1, the solid line becomes almost vertical at the edge of region 1). Fig. 10: Schematic of the rendezvous maneuver. The linearized form of Eq. (35) in the target frame, known as the Clohessy-Wiltshire (CW) equations, is expressed as [19]: \[\ddot{x}-2n\dot{z}=f_{x} \tag{36}\] \[\ddot{y}+n^{2}y=f_{y} \tag{37}\] \[\ddot{z}+2n\dot{x}-3n^{2}z=f_{z} \tag{38}\] where: \[n=\sqrt{\frac{\mu_{earth}}{R_{o}^{3}}} \tag{39}\] Here, \(R_{o}\) represents the orbital radius of the target spacecraft, and \(n\) is the mean motion. To design the LQR controller, we begin by defining the state and input vectors as \(\mathbf{x}=[x,y,z,\dot{x},\dot{y},\dot{z}]^{T}\) and \(\mathbf{u}=[f_{x},f_{y},f_{z}]^{T}\), respectively. Subsequently, we derive the state-space form of the CW equations, expressed as: \[\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u} \tag{40}\] where: \[A=\begin{bmatrix}0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 0&0&0&0&0&2n\\ 0&-n^{2}&0&0&0&0\\ 0&0&3n^{2}&-2n&0&0\end{bmatrix};\quad B=\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&0\\ 1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix} \tag{41}\] In general, for the controllable pair \((A,B)\), the control law for the LQR controller is given by [26]: \[\mathbf{u}=-K_{LQR}\mathbf{\hat{x}} \tag{42}\] Here, the hat symbol denotes an estimated parameter.
The controller gain \(K_{LQR}\) is designed to minimize the following cost function: \[J_{c}=\int_{0}^{\infty}(\mathbf{x}^{T}Q_{c}\mathbf{x}+\mathbf{u}^{T}R_{c}\mathbf{u})dt \tag{43}\] The weight matrices \(Q_{c}\) and \(R_{c}\) are determined through trial and error, with the conditions \(Q_{c}\geq 0\) and \(R_{c}>0\) satisfied. The controller gain \(K_{LQR}\) is calculated using the following equation: \[K_{LQR}=R_{c}^{-1}B^{T}S \tag{44}\] where \(S\) is the unique positive semidefinite solution of the algebraic Riccati equation: \[A^{T}S+SA-SBR_{c}^{-1}B^{T}S+Q_{c}=0 \tag{45}\] It is important to note that due to the linearity and time-invariance of the considered system (LTI), the gain matrix \(K_{LQR}\) is computed offline and does not require updating during the maneuver. Moreover, based on the separation principle of linear systems theory, the obtained gain can be incorporated into our presented network without imposing any condition on the estimator. The simulations in this section are conducted using the numerical values provided in TABLE 2, with a time duration of 360 seconds and a time step of 0.1. \begin{table} \begin{tabular}{l l} \hline Parameter & Value \\ \hline \(\mathbf{r_{o}}\) (\(m\)) & \([70,30,-5]^{T}\) \\ \(\mathbf{v_{0}}\) (\(m/s\)) & \([-1.7,-0.9,0.25]^{T}\) \\ \(\mathbf{x}_{0}\) & \([\mathbf{r_{o}},\mathbf{v_{0}}]^{T}\) \\ \(\mathbf{\hat{x}}_{0}\) & \(\mathbf{x}_{0}\) \\ \(Q_{c}\) & \((1e-6)I_{6}\) \\ \(R_{c}\) & \(I_{3}\) \\ \(Q\) & \((1e-12)I_{6}\) \\ \(R\) & \((1e-2)I_{2}\) \\ \(N\) & \(350\) \\ \(\lambda\) & \(0.001\) \\ \(\mu\) & \(1\) \\ \(\nu\) & \(0.0001\) \\ \(\delta_{MSIF}\) & \(0.005\) \\ \hline \end{tabular} \end{table} TABLE 2: PARAMETERS FOR SATELLITE RENDEZVOUS Additionally, the decoding matrices \(D\) and \(\overline{D}\) are defined using random samples from zero-mean Gaussian distributions with covariances of 1/50 and 1/2500, respectively. Fig. 11 presents a comparison between SNN-LQR-MSIF and non-spiking LQG and LQR-MSIF in the context of the rendezvous maneuver problem. Each element of the system's state vector is individually compared. Fig. 11: Controlled states for satellite rendezvous obtained from various frameworks in the normal condition. The results demonstrate that all considered frameworks successfully control the states, with errors smoothly converging to zero. Moreover, it is evident that the proposed SNN-based framework exhibits similar performance in controlling the states, aligning with the results obtained from the optimal non-spiking framework LQG. Notably, for the states \(z\) and \(v_{z}\), some discrepancies are observed. For state \(z\), the SNN-LQR-MSIF exhibits a slightly greater overshoot compared to non-spiking LQG and LQR-MSIF, but ultimately successfully controls the state error to zero. Furthermore, for state \(v_{z}\), the result from SNN-LQR-MSIF exhibits a minor deviation from non-spiking frameworks between \(t=100\)s and \(t=200\)s. To provide quantitative insight into this comparison, averaged errors obtained from the different methods after \(t=300\)s are presented in TABLE 3. The results reveal that the non-spiking methods deliver consistent accuracy, and the SNN-based method demonstrates acceptable accuracy. In summary, compared to traditional non-spiking frameworks like LQG and LQR-MSIF, the achieved results for the controlled states affirm the acceptable performance of SNN-LQR-MSIF for the problem of satellite rendezvous, a critical maneuver in space robotic applications.
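As a concrete illustration of Eqs. (39)-(45), the sketch below builds the CW state-space model and computes the LQR gain with SciPy's continuous-time Riccati solver. The orbit radius is an assumed LEO value, since only the weight matrices and the initial state from TABLE 2 are given here; it is a sketch, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

mu_earth = 3.986004418e14            # Earth's gravitational parameter, m^3/s^2
R_o = 6.778e6                        # assumed ~400 km LEO target orbit radius, m
n = np.sqrt(mu_earth / R_o**3)       # mean motion, Eq. (39)

# CW state-space model, Eqs. (40)-(41); state is [x, y, z, vx, vy, vz]
A = np.zeros((6, 6))
A[:3, 3:] = np.eye(3)                # kinematics
A[3, 5] = 2 * n                      # x'' = 2n z' + f_x
A[4, 1] = -n**2                      # y'' = -n^2 y + f_y
A[5, 2] = 3 * n**2                   # z'' = 3n^2 z - 2n x' + f_z
A[5, 3] = -2 * n
B = np.vstack([np.zeros((3, 3)), np.eye(3)])

Qc = 1e-6 * np.eye(6)                # LQR weights from TABLE 2
Rc = np.eye(3)
S = solve_continuous_are(A, B, Qc, Rc)        # algebraic Riccati, Eq. (45)
K_lqr = np.linalg.solve(Rc, B.T @ S)          # gain, Eq. (44)

x0 = np.array([70.0, 30.0, -5.0, -1.7, -0.9, 0.25])   # initial state, TABLE 2
print("initial control input u = -K x0:", -K_lqr @ x0)
```

Because the plant is LTI, this gain is computed once offline, exactly as noted above, and can then be folded into the network weight matrices of Eqs. (22)-(26).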
To assess the computational efficiency of the SNN-based framework relative to conventional artificial neural networks (ANNs), we examine the spiking pattern generated by the designed SNN, showcased in Fig. 12(a), which vividly illustrates the network's efficient execution of its task. Upon closer examination, as depicted in Fig. 12(b), during the initial 2000 time steps (before \(t=100\) s), when the state-vector errors are sizable, the network exhibits heightened neural activity, with approximately 20% of the neurons being active. Subsequently, the population of active neurons gently declines and remains relatively constant, with minor fluctuations hovering around 5% for the remainder of the simulation. In essence, the network accomplishes its task while utilizing a mere 2.4% of the possible spikes over the entire simulation duration, in stark contrast to traditional ANNs, which consume 100% of the potential spikes. This underscores the computational efficiency of SNN-LQR-MSIF in simultaneously handling estimation and control for satellite rendezvous. Moving on to assess the robustness of SNN-LQR-MSIF against modeling uncertainties, we introduce a 10% error into the dynamic transition matrix, \(\hat{A}=0.9A\), used within the framework. Fig. 13 shows the results for the controlled states using the aforementioned strategies. The figure indicates that SNN-LQR-MSIF is more sensitive to modeling uncertainties than the non-spiking strategies; however, it still controls the system effectively, with all errors gracefully converging to zero. Furthermore, TABLE 4 presents the averaged errors of the controlled states after \(t=300\) s, reaffirming the findings depicted in Fig. 13. To further evaluate the robustness of SNN-LQR-MSIF against external disturbances, such as instability in the working environment, we introduce measurement outliers. This scenario is configured so that unmodeled measurement outliers are injected into the system at \(t=100\) s, \(t=150\) s, and \(t=200\) s. Notably, to generate the outliers at these time steps, the measurement noise is scaled by a factor of 200. Fig. 14 illustrates the results for the various frameworks in this scenario. As with modeling uncertainties, it reveals that SNN-LQR-MSIF is more sensitive to measurement outliers than the non-spiking strategies; however, it effectively maintains control, with all errors converging to zero. The corresponding averaged errors of the controlled states after \(t=300\) s are presented in TABLE 5, reinforcing the insights gleaned from Fig. 14. Fig. 15 provides insight into the spiking pattern of SNN-LQR-MSIF in the presence of measurement outliers. In Fig. 15(a), the network reacts to the disturbances by increasing the number of active neurons, rejecting the disturbances in just 2-3 time steps. Fig. 15(b) quantifies this by depicting the variation in the population of active neurons in percentage terms. \begin{table} \begin{tabular}{l c c c} \hline \hline State & LQG & LQR-MSIF & SNN-LQR-MSIF \\ \hline \(x(m)\) & 0.0223 & 0.0222 & 0.3059 \\ \(y(m)\) & 0.0058 & 0.0057 & 0.4001 \\ \(z(m)\) & 0.0049 & 0.0049 & 0.0082 \\ \(v_{x}(m/s)\) & 0.0012 & 0.0012 & 0.0030 \\ \(v_{y}(m/s)\) & 0.0005 & 0.0005 & 0.0001 \\ \(v_{z}(m/s)\) & 0.0005 & 0.0005 & 0.0035 \\ \hline \hline \end{tabular} \end{table} TABLE 4: AVERAGED ERROR FOR DIFFERENT METHODS – UNCERTAIN MODEL
Fig. 12: Spiking pattern and temporal variation of the active-neuron population obtained from SNN-LQR-MSIF for the satellite rendezvous maneuver; (a) spiking pattern, (b) temporal variation of active neurons.

Fig. 13: Controlled states for the satellite rendezvous maneuver obtained from various frameworks for the uncertain model.

\begin{table} \begin{tabular}{l c c c} \hline \hline State & LQG & LQR-MSIF & SNN-LQR-MSIF \\ \hline \(x(m)\) & 0.0223 & 0.0222 & 0.3924 \\ \(y(m)\) & 0.0057 & 0.0057 & 0.3626 \\ \(z(m)\) & 0.0048 & 0.0048 & 0.0936 \\ \(v_{x}(m/s)\) & 0.0012 & 0.0012 & 0.0018 \\ \(v_{y}(m/s)\) & 0.0005 & 0.0005 & 0.0002 \\ \(v_{z}(m/s)\) & 0.0005 & 0.0005 & 0.0030 \\ \hline \hline \end{tabular} \end{table} TABLE 3: AVERAGED ERROR FOR DIFFERENT METHODS

The figure highlights a significant increase in the proportion of active neurons, rising from approximately 10% to nearly 50%. Finally, the results obtained in this section affirm that the framework proposed in this study is computationally efficient for such problems. Compared to traditional computing strategies like LQR-MSIF and LQG, it exhibits good, comparable performance in terms of robustness and accuracy.

## V Conclusion

In this study, we delved into the crucial challenge of concurrent estimation and control within dynamical systems, underscoring its paramount importance. As the complexity and safety considerations associated with mission-critical tasks continue to intensify, the demand for computationally efficient and dependable strategies has become increasingly imperative. Moreover, the real-world application landscape is rife with uncertainties, such as environmental instability, external disturbances, and unmodeled dynamics, making the call for robust solutions capable of navigating these challenges resounding. To answer this call, we introduced a novel approach grounded in biologically plausible principles. Our framework harnessed the potential of a recurrent spiking neural network (SNN), composed of leaky integrate-and-fire neurons, resembling a linear quadratic regulator (LQR) enriched by the insights of a modified sliding innovation filter (MSIF). This amalgamation endowed SNN-LQR-MSIF with the robustness inherited from the MSIF while infusing it with the computational efficiency and scalability inherent in SNNs. Importantly, the elimination of the need for extensive training, owing to spike coding theories, enabled the design of the SNN weight matrices directly from the dynamic model of the target system. In the face of a diverse array of uncertainties, including modeling imprecision, unmodeled measurement outliers, and occasional neuron silencing, we conducted a thorough comparative analysis. SNN-LQR-MSIF underwent meticulous evaluation alongside its non-spiking counterpart, LQR-MSIF, and the well-established optimal approach, the linear quadratic Gaussian (LQG). This evaluation spanned both linear benchmark problems and the satellite rendezvous maneuver, a mission-critical task within the realm of space robotics. The results of our investigation underscored SNN-LQR-MSIF's commendable performance. It demonstrated competitive advantages in terms of computational efficiency, reliability, and accuracy, positioning it as a promising solution for addressing concurrent estimation and control challenges. 
Looking forward, we envisage the development of learning-based concurrent robust estimation and control frameworks that leverage the capabilities of SNNs and predictive coding. These endeavors represent exciting prospects for future research in this domain, further advancing the state of the art in dynamical system control and estimation.

## VI Conflict of interest statement

The authors declare that they do not possess any conflicts of interest pertinent to this research. This study was executed with the utmost objectivity and impartiality, and the results articulated herein stem from a meticulous and unbiased scrutiny and comprehension of the data. The authors maintain that they harbor no financial or personal affiliations with individuals or entities that could conceivably introduce bias into the findings or exert influence over the conclusions drawn from this study.

Fig. 14: Controlled states for the satellite rendezvous maneuver obtained from various frameworks subjected to measurement outliers.

Fig. 15: Spiking pattern and temporal variation of the active-neuron population obtained from SNN-LQR-MSIF for the satellite rendezvous maneuver subjected to measurement outliers; (a) spiking pattern, (b) temporal variation of active neurons.
2306.13438
Artificial Neural Network Prediction of COVID-19 Daily Infection Count
It is well known that the confirmed COVID-19 infection count is only a fraction of the true infection count. In this paper we use an artificial neural network to learn the connection between the confirmed infection count, the testing data, and the true infection count. The true infection count in the training set is obtained by backcasting from the death count and the infection fatality ratio (IFR). Multiple factors are taken into consideration in the estimation of the IFR. We also calibrate the recovered true COVID-19 case count with an SEIR model.
Ning Jiang, Charles Kolozsvary, Yao Li
2023-06-23T11:06:36Z
http://arxiv.org/abs/2306.13438v1
# Artificial Neural Network Prediction of COVID-19 Daily Infection Count ###### Abstract It is well known that the confirmed COVID-19 infection count is only a fraction of the true infection count. In this paper we use an artificial neural network to learn the connection between the confirmed infection count, the testing data, and the true infection count. The true infection count in the training set is obtained by backcasting from the death count and the infection fatality ratio (IFR). Multiple factors are taken into consideration in the estimation of the IFR. We also calibrate the recovered true COVID-19 case count with an SEIR model. **Keywords:** COVID-19 case count, artificial neural network, backcasting method, SEIR model **MSC Classification:** 92D30, 68T07, 65Z05 ## 1 Introduction Since 2020, COVID-19 has infected the majority of the global population, causing nearly 7 million deaths worldwide and enormous economic losses [1]. While the severity of SARS-CoV-2 has significantly decreased due to the circulation of less virulent variants and hybrid immunity resulting from vaccination and natural infection, COVID-19 still poses a significant threat to high-risk groups. Over the past three years, numerous new variants with high fitness in immune escape and transmission have emerged [2]. As of 2023, the major threat from COVID-19 stems from the potential emergence of new and possibly more virulent variants. This underscores the importance of collecting data, improving its quality, assimilating it with models, and monitoring the circulation of SARS-CoV-2 variants. Public health agencies must stay informed about the current COVID-19 situation, including the variant composition, the number of COVID-19-related hospitalizations and deaths, and the percentages of people who are susceptible, recently exposed, contagious, and recently recovered from COVID-19. Since the beginning of the COVID-19 pandemic, it has been well known that the daily confirmed cases reported by healthcare agencies represent only a small proportion of the true daily infection count [3]. Increasing testing efforts can effectively reduce the ratio of unconfirmed infection cases. The World Health Organization (WHO) recommends that the test positivity rate should be between 3% and 12%. However, determining the true infection count solely from the test positivity rate is challenging, as the distribution of people undergoing COVID-19 tests is not uniform across the population. The lack of an accurate daily infection count significantly impacts both data quality and modeling efforts. It is worth mentioning that, due to the lack of reliable daily infection counts, many modeling efforts resort to using death counts to infer model parameters. However, a fundamental assumption of a large class of compartment models is that the probability distribution of an individual transitioning from one compartment to another follows an exponential distribution. These compartment ODE models represent the infinite volume limit of continuous-time Markov chains (CTMCs) that describe individual infections. The jump times of a CTMC must adhere to an exponential distribution because of the Markov property. However, as discussed in this paper and numerous other sources [4, 5, 6], the time from a confirmed case to a confirmed death deviates significantly from an exponential distribution. 
Therefore, assuming a linear transition rate from the infected population (I) to deaths (D) is problematic and leads to substantial deviations of the model from the real world. The problem of lacking high-quality data has worsened for two main reasons. First, in 2020 and 2021, the infection fatality ratio (IFR) could be estimated by combining COVID-19-related death counts with serological surveys [7, 8, 9]. However, this method is no longer viable, as nearly everyone in the world has either been infected or vaccinated, and a significant proportion of individuals have experienced multiple infections. The challenge of distinguishing between individuals who "die with COVID-19" and those who "die from COVID-19" further complicates the picture. It is evident that hybrid immunity (from infection and vaccination) and improvements in treatment have significantly reduced the IFR, but obtaining an accurate estimate has become much more difficult today. Second, in 2020 and 2021, most individuals who tested positive were recorded and reported by state public health agencies. However, since the spring of 2022, an increasing number of people have been conducting self-tests at home using home antigen tests, and these positive test results are no longer reported to public health agencies. Consequently, since the spring of 2022, we have become increasingly uncertain about the number of new infections occurring each day. To address this issue, we propose the use of an artificial neural network trained with data from the period when death counts and the infection fatality ratio (IFR) were more reliable, in order to predict the current state of COVID-19 circulation. Specifically, we let the artificial neural network learn the relationship between testing data, population density, daily new confirmed case counts, and daily true case counts. Once the neural network approximates this relationship, we can utilize it to predict the true COVID-19 case count when the IFR estimate is less dependable. In addition to making predictions, the neural network can also help us understand the connection between testing data and case counts, enabling a better understanding of how many tests are necessary to limit the undercounting factor (the ratio of true cases to confirmed cases) within a certain range. Undoubtedly, the most crucial step in preparing the training data is estimating the true daily case count of COVID-19. We set March 1st, 2022, as the cutoff date for the following three reasons: (1) Before that day, most infections were initial infections, whereas second infections became more common after mid-2022 with the rise of the Omicron BA.4/5 variant. (2) At that time, oral antiviral treatment was not widely accessible enough to significantly impact the IFR. (3) The availability of home antigen tests was limited, and they did not have a substantial effect on the testing data [10]. During that period, most individuals who tested positive were recorded by state public health agencies. Our method of recovering the daily infection count is referred to as "backcasting" [11, 12], which relies on the death count and the IFR. It involves the following steps: First, we estimate the distribution of the time delay from a confirmed case to a confirmed death, which is a deconvolution problem requiring regularization techniques. 
Next, we employ well-recognized published data to estimate the time series of the IFR for each state, taking into account factors such as the age distribution of cases, treatment improvements, vaccination rates, and changes in variants. The IFR data primarily come from sources like the Institute for Health Metrics and Evaluation (IHME) [7], which estimates the IFR of all age groups and all countries/states based on death counts and 5131 seroprevalence surveys. Vaccination data, variant data, and the age distribution of cases are mainly obtained from the Centers for Disease Control and Prevention (CDC) [13, 14, 15, 16]. Finally, we calibrate the baseline IFR for each state using the findings of previous studies [7, 17], which combine modeling and serological surveys. After recovering the true daily infection count, we train an artificial neural network to uncover the relationships among true cases, confirmed cases, and testing data. The training of the neural network is inspired by the physics-informed neural network (PINN) method [18]. Since the available data only cover a relatively small region of the entire domain, the neural network exhibits limited generalization power. This limitation hampers its ability to make accurate predictions or investigate the relationship between true/confirmed cases and testing data. To overcome this challenge, we incorporate artificially generated input data and use the derivatives of the output with respect to the input data to enhance the training process. The underlying idea is that the true case count should increase with the confirmed case count and decrease with the testing volume. This concept introduces a regularization term that can be applied throughout the entire domain, as it does not rely on the output data (the recovered true case count). We refer to this technique as "biology-informed regularization." Our results demonstrate that this regularization significantly improves the generalization ability of the neural network. In addition to the neural network predictions, we utilize the recovered daily infection count to fit an SEIR (Susceptible-Exposed-Infectious-Recovered) model. This model fitting takes into account factors such as vaccination rates and changes in variants. As discussed in Section 6, we recover two key components: (i) a time series of the infection rate and (ii) the impact of variant changes on the parameters of the SEIR model. The neural network prediction provides us with the current infection count based on testing data, while an SEIR model that is well fitted with updated COVID-19 data enables predictions about potential future scenarios, particularly in the event of the emergence of a new variant anywhere in the world. The paper is organized as follows. Section 2 introduces the artificial neural network prediction and explores the relationship between confirmed cases, testing data, and true cases. The generation of the training set, i.e., the recovery of the daily infection count, is examined in Sections 3 and 4. Section 5 focuses on the training of the neural network. Section 6 addresses the fitting of the SEIR model, considering factors such as vaccination and variant changes. Finally, Section 7 presents the conclusion of the study. ## 2 Artificial neural network for daily infection Estimating the true daily infection count can be viewed as a nonlinear sampling problem. Individuals undergo COVID-19 testing for various reasons, such as experiencing symptoms or having close contact with confirmed cases. 
Additionally, routine testing is conducted in schools and workplaces. However, it is important to note that the population tested daily is only a small proportion of the overall population and consists disproportionately of those who have a higher likelihood of testing positive. Furthermore, the risk of infection within the tested population compared to the untested population is nonlinearly influenced by numerous factors. As a result, traditional statistical estimation methods are not well suited for this scenario. As mentioned in the introduction, the primary idea of this paper is to use an artificial neural network to learn a function: \[I_{t}\approx f(I_{c},\boldsymbol{\lambda},\boldsymbol{\theta})\,.\] Here, \(I_{t}\) represents the daily true infection count, \(I_{c}\) represents the daily confirmed infection count, \(\boldsymbol{\lambda}\) encompasses various parameters such as testing volume, testing rate, population density, mobility, and wastewater viral RNA concentration, and \(\boldsymbol{\theta}\) represents the neural network parameters. By employing this function \(f\), we can better comprehend the relationship between the undercounting factor (the ratio of daily true infections to daily confirmed infections) and other associated parameters. This understanding will aid public health agencies in obtaining insights into the current state of the COVID-19 pandemic. In this paper, we conduct various tests to select a parameter set \(\boldsymbol{\lambda}=(\lambda_{1},\lambda_{2},\lambda_{3})\) consisting of testing volume, testing rate per capita, and local population density. In order to facilitate the training of the neural network, certain transformations are necessary to centralize and normalize the distribution of the training set. For more detailed information, please refer to Section 5. Our primary focus lies in the estimation of the true infection count \(I_{t}\), which is extensively discussed in Sections 3 and 4. The training set utilized for this purpose includes the true daily infection count, confirmed daily infection count, testing volume, testing rate per capita, and population density data from the 50 states plus Washington DC, spanning from February 29, 2020, to March 31, 2022. Upon completion of the neural network training, we obtain a function \(f\) that describes the relationship between the undercounting factor, testing effort, and local population density. To assess the performance of the neural network predictions, we conducted testing using data from March 1st, 2022, to July 1st, 2022, specifically focusing on Massachusetts, New York, and Texas. Notably, this period witnessed a surge of the SARS-CoV-2 variant Omicron BA.2 across the United States. In Figure 1, we present a comparison between the predicted true case count and the confirmed case count during this timeframe. The predicted true daily new infections greatly surpass the number of confirmed cases. The undercounting factor is notably higher in Texas, primarily due to its lower testing rate per capita in comparison to Massachusetts and New York. One of the primary objectives of the neural network approximation is to uncover the relationship between the recovered true cases, confirmed cases, and testing effort. To explore this relationship, we establish a baseline using the testing data and daily new confirmed case data of Massachusetts, New York, and Texas on November 15th, 2020. 
Next, we investigate different scenarios by scaling the baseline testing volume by a factor of \(x_{1}\) and the baseline confirmed cases by a factor of \(x_{2}\). The values of \(x_{1}\) and \(x_{2}\) are selected from a \(100\times 100\) equi-spaced 2D grid within the range \([0.5,1.5]\times[0.5,1.5]\). We generate a total of \(10^{4}\) new scenarios, which are then fed into the artificial neural network predictor. The results are illustrated in Figure 2. As anticipated, the predicted true cases increase with the confirmed case count and decrease with the testing volume.

Figure 1: Comparison of the predicted true case count and the confirmed case count between March 1st, 2022 and July 1st, 2022. The three panels are for Massachusetts, New York, and Texas, respectively.

To gain a clearer understanding of the relationship between the recovered true cases, confirmed cases, and testing efforts, we conduct an analysis considering two distinct scenarios. In scenario A, 40,000 tests are performed, while in scenario B, 100,000 tests are conducted within a state. The number of positive cases (i.e., confirmed cases) is varied from 500 to 10,000. We examine the recovered true case count in ten different states: California, Florida, Indiana, Massachusetts, New Jersey, New York, Ohio, Pennsylvania, Texas, and Vermont. The results are depicted in Figure 3. The findings reveal that, for the first nine states, under the same testing effort, the recovered case count exhibits super-linear growth in relation to the confirmed case count. For instance, if 10,000 positive cases are detected out of 40,000 tests, the recovered true case count is approximately four times higher than the confirmed case count. As the testing effort increases and 10,000 positive cases are identified out of 100,000 tests, the recovered true case count is only about 2-3 times higher than the confirmed case count. The last panel of Figure 3 demonstrates a scenario in which the neural network fails to accurately predict the true recovered cases. This discrepancy arises because the highest recorded daily test count in Vermont is only 12,000; as a result, the two scenarios tested here are significantly different from the training set that the neural network has learned from. Another interesting observation is that, even with sufficient testing, the recovered true case count does not approach zero when the confirmed case count reaches zero. This can be attributed to a combination of factors, including the nature of COVID-19 testing and potential data artifacts. On one hand, regardless of the number of tests conducted by a state, cases in certain underserved communities and remote areas may remain undetected. Consequently, this leads to a non-zero extrapolation of the predicted true case count. On the other hand, there is a possibility of over-counting deaths that are not actually caused by COVID-19 as COVID-19 deaths. Although the impact of these over-counted deaths on the overall pandemic is relatively small, they can contribute a non-negligible proportion of the reported death count during periods when the case count is very low (such as spring 2021 and spring 2022). Since the true case count is derived from the daily death count in our analysis, this factor may also inflate the estimated daily case count when the daily confirmed case count is low. 
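The scenario grid described above is straightforward to reproduce. The following is a minimal sketch in Python; the trained network `model` is a hypothetical stand-in, and the normalizations applied to the inputs are those defined in Section 5.

```python
# Sketch: generate the 100 x 100 scenario grid of Figure 2 (hypothetical `model`).
import numpy as np

def scenario_grid(model, base_tests, base_cases, pop, density, m=100):
    """Scale baseline testing volume by x1 and confirmed cases by x2
    on an equi-spaced grid over [0.5, 1.5] x [0.5, 1.5]."""
    x1, x2 = np.meshgrid(np.linspace(0.5, 1.5, m), np.linspace(0.5, 1.5, m))
    tests = base_tests * x1.ravel()
    cases = base_cases * x2.ravel()
    # Normalizations of Section 5
    I_c = 50.0 * np.sqrt(cases / pop)
    lam1 = 0.05 * np.cbrt(tests)
    lam2 = 200.0 * tests / pop
    lam3 = 0.2 * np.log(density) * np.ones_like(I_c)
    X = np.column_stack([I_c, lam1, lam2, lam3])
    I_t = np.asarray(model(X)).ravel()       # normalized predicted true cases
    # Invert the output normalization I_t = 25 * sqrt(C_t / pop)
    return (pop * (I_t / 25.0) ** 2).reshape(m, m)
```

Each grid cell then holds the predicted true case count for one (testing volume, confirmed cases) scenario, as plotted in Figure 2.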
A well-constructed training set plays a crucial role in the success of a neural network prediction project. In our case, obtaining data on daily confirmed cases, testing volume, testing rate, and population density is relatively straightforward, as this information can be sourced from public health agencies or online databases such as the JHU database. However, the true daily new cases, which serve as the output of the artificial neural network, are unknown. Therefore, a significant portion of our effort in generating the training set is focused on recovering the true daily new cases, a topic that will be extensively discussed in the next two sections.

Figure 2: Predicted cases with varying confirmed cases and testing volume. The three panels are for Massachusetts, New York, and Texas, respectively.

The basic concept behind recovering the daily new cases is that \[\text{daily new cases}*\text{delay}\times\text{IFR}=\text{daily new deaths}\,,\] where IFR represents the infection fatality ratio and \(*\) denotes convolution. Hence, our strategy involves first utilizing the daily new confirmed case count and the daily new death count to infer the distribution of the delay from a confirmed COVID-19 case to a confirmed COVID-19 death. Subsequently, we utilize information regarding age composition and vaccination rates to estimate the time series of the IFR.

Figure 3: Predicted cases with varying confirmed cases and two fixed testing volume scenarios. TC 40k and TC 100k mean the recovered true cases with 40,000 and 100,000 daily tests, respectively. CC means the confirmed case count.

## 3 Generation of training set I: Backcasting from daily death count

### Data processing

It is widely recognized that COVID-19 data reporting is subject to various factors, including weekend effects, holiday effects, noise, and potential human errors in reporting. As a result, the initial step of our analysis involves preprocessing the daily confirmed cases and reported deaths. This data processing procedure comprises the following five steps: 1. Correcting human errors. In many states, the COVID-19 data is affected by artificial data backlogs, where the case count and/or death count of multiple days is reported on a single day. Through extensive testing, we identify irregular reporting patterns as days where the reported case/death count is at least twice as high as the average of the previous 8 days. The excess cases/deaths resulting from backlogs are then uniformly redistributed over the previous \(L\) days. 
The value of \(L\) is determined in such a way that the redistribution for each day does not exceed 65% of the average daily cases/deaths. 2. Removing the weekend factor. This step is done by taking the 7-day average. 3. Removing the holiday factor. COVID-19 data reporting during the Thanksgiving and Christmas/New Year periods is highly irregular due to reporting delays during these holidays. To address this issue, we employ a linear function to bridge the data before and after a specific time window. The discrepancy between the actual reported data and the linear function is then offset by redistributing the corresponding cases/deaths from the day immediately following this time window. Since the training set includes only four holidays, all time windows are manually adjusted to effectively mitigate the impact of the holiday factor. 4. Smoothing the data. We use the LOESS regression method [19] to smooth the data. The smoothing window extends from 7 days before to 28 days after each data point. 5. Addressing negative fluctuations. Following the LOESS smoothing process, there is a possibility of the initial phase of cases/deaths fluctuating below zero. To resolve this issue, we fit an exponential function to the 7-day average data of the first \(L\) days (starting on January 22, 2020) for each state. Here, \(L\) represents the date of the peak of the first wave during the spring of 2020. The initial case/death counts are then replaced with the values obtained from this exponential fit.

### Deconvolution and regularization

The daily reported deaths attributed to COVID-19 can be viewed as a _convolution_ of the time series of fatal infections with a delay distribution. Specifically, the delay time from a confirmed case to a reported death, denoted as \(\Delta\), follows an unknown distribution: \[\mathbb{P}[\Delta=i]=\delta_{i}\,.\] The number of confirmed deaths in the United States on day \(n\), denoted as \(D_{n}\), can be expressed as \[D_{n}=\sum_{i=0}^{n}I_{i}\delta_{n-i}\text{CFR}\,,\] where \(I_{i}\) represents the confirmed cases in the United States on day \(i\), and CFR represents the case fatality rate. Thus, the process of recovering fatal infections from reported deaths is a _deconvolution_ operation. (Note that a deconvolution is different from a convolution in the other direction, as performed in [12], which tends to overly smooth the time series of the true infection count.) Based on the findings presented in [12, 20], we assume that the delay distribution \(\{\delta_{i}\}\) follows a gamma distribution characterized by two unknown parameters, \(\alpha\) and \(\beta\). Let \(N\) denote the duration of the available data. The convolution problem can be expressed in matrix form as \[P_{N}(\alpha,\beta)\times\vec{I}\times\text{CFR}=\vec{D}\,, \tag{1}\] where \(\vec{I}\) is a column vector of length \(N\) containing the number of confirmed infections each day, \(\vec{D}\) is a similar column vector containing the number of reported deaths, and \(P_{N}(\alpha,\beta)\) is an \(N\times N\) square matrix that represents the discretized gamma distribution. Each column of the matrix represents the conditional probability of death on each day. Specifically, the entry in column \(i\) and row \(j\) represents the probability that a newly confirmed COVID-19 patient on day \(i\) eventually dies on day \(j\), conditioned on the assumption that this infection is fatal. (Note that the daily new infections and daily reported deaths have been pre-processed using the method described in the previous subsection.) In other words, we have \[P_{N}(\alpha,\beta)=\begin{bmatrix}\delta_{0}&&&&&\\ \delta_{1}&\delta_{0}&&&&\\ \delta_{2}&\delta_{1}&\delta_{0}&&&\\ \vdots&\vdots&\vdots&\ddots&&\\ \delta_{m}&\delta_{m-1}&\delta_{m-2}&\ldots&\ddots&\\ &\ddots&\ddots&\ddots&\ddots&\ddots\\ &&\delta_{m}&\delta_{m-1}&\delta_{m-2}&\ldots&\delta_{0}\end{bmatrix}\in\mathbb{R}^{N\times N}\,,\] where \[\delta_{i}=\mathbb{P}[i\leq Z\leq i+1]\] for a gamma-distributed random variable \(Z\) with parameters \(\alpha\) and \(\beta\). One might attempt to find suitable parameters by minimizing \(\|\text{CFR}^{-1}P_{N}(\alpha,\beta)^{-1}\vec{D}-\vec{I}\|_{2}^{2}\) over all possible values of \(\alpha\) and \(\beta\). However, this approach is not feasible for two reasons. First, the deconvolution process is known to be unstable due to the ill-conditioned nature of \(P_{N}(\alpha,\beta)\) [21, 22]. Even small noise in \(\vec{D}\) can be significantly amplified during matrix inversion. 
Second, the case fatality ratio CFR is unknown. Thus, _regularization_ is necessary to prevent excessive fluctuations in the recovered \(\vec{I}\), and the optimization problem must also recover the unknown CFR. The regularization is achieved by incorporating penalty terms on the second and fourth order derivatives. Two matrices, namely \(R_{2}\) and \(R_{4}\), are employed to discourage excessive fluctuations in the time series of confirmed cases. Each row of \(R_{2}\) regularizes one entry of \(\vec{I}\) (except the first and last ones). More precisely, \(R_{2}\) has the form \[R_{2}=\begin{bmatrix}1&-2&1&&\\ &1&-2&1&\\ &&\ddots&\ddots&\ddots\\ &&&1&-2&1\end{bmatrix}\in\mathbb{R}^{(N-2)\times N}\,.\] Similarly, the matrix \(R_{4}\) regularizes the fourth order derivative of the entries of \(\vec{I}\) (except the first two and the last two entries). It has the form \[R_{4}=\begin{bmatrix}1&-4&6&-4&1&&\\ &1&-4&6&-4&1&\\ &&\ddots&\ddots&\ddots&\ddots&\ddots\\ &&&1&-4&6&-4&1\end{bmatrix}\in\mathbb{R}^{(N-4)\times N}\,.\] To simplify the computation, we assume that the maximum delay is 35 days and denote the modified delay matrix as \(\hat{P}\). Consequently, each column of \(\hat{P}\) contains at most 35 non-zero entries (ranging from \(\delta_{0}\) to \(\delta_{m}\), where \(m=34\)). With the regularization and the tuning parameters \(\lambda_{2}\) and \(\lambda_{4}\), we can utilize the modified delay matrix \(\hat{P}\) to perform a reliable deconvolution of the reported death time series into the corresponding fatal infections. This allows us to estimate the daily confirmed infection count by solving the following system in the least squares sense: \[\begin{bmatrix}\hat{P}_{N}(\alpha,\beta)\text{CFR}\\ \lambda_{2}R_{2}\\ \lambda_{4}R_{4}\end{bmatrix}\vec{I}=\begin{bmatrix}\vec{D}\\ 0\\ 0\end{bmatrix}\,. \tag{2}\] Figure 4 demonstrates the effectiveness of the regularization and how varying \(\lambda_{2}\) and \(\lambda_{4}\) impacts the recovered least squares solution of equation (2).

Figure 4: The red curve shows the reported death time series of the United States between 03/01/20 and 11/30/21, and the blue curve is the deconvolved time series of fatal infections when an arbitrary matrix \(P\) is constructed with \(\alpha=30\) and \(\beta=0.5\). For the sake of simplification, \(\lambda_{4}=0\) in all plots. \(\lambda=\lambda_{2}\) changes from 0 (no regularization) to \(10^{5}\) (too much regularization) across the six plots.

Since the parameter CFR is also unknown, we introduce \(\gamma=\text{CFR}\) into the optimization problem. After conducting several tests, we determine that \(\lambda_{2}=0.5\) and \(\lambda_{4}=2\) are suitable coefficients for the regularization matrices. Additionally, we only minimize the difference between the observed \(\vec{I}\) and the least squares solution after a period of 120 days from the start date (March 1st, 2020). This choice is made because testing was limited during the initial few months of the pandemic; widespread testing became available in the summer of 2020, stabilizing the case fatality rate. This gives the optimization problem \[\min_{\alpha,\beta,\gamma}\|\mathcal{P}_{120}(\vec{I}-\hat{I}(\alpha,\beta,\gamma))\|_{2}^{2} \tag{3}\] where \(\hat{I}\) is the least squares solution of \[\begin{bmatrix}\hat{P}_{N}(\alpha,\beta)\gamma\\ \lambda_{2}R_{2}\\ \lambda_{4}R_{4}\end{bmatrix}\hat{I}=\begin{bmatrix}\vec{D}\\ 0\\ 0\end{bmatrix}\,,\] and the projection \(\mathcal{P}_{120}\) cuts off the first 120 entries of a vector. 
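As a concrete illustration of the inner solve, the following is a minimal sketch in Python (synthetic data; not the authors' MATLAB implementation) of the regularized least-squares deconvolution (2) for a fixed \((\alpha,\beta,\text{CFR})\); the outer minimization (3) over these parameters would wrap a solve of this kind.

```python
# Sketch: regularized deconvolution of Eq. (2) with a truncated gamma delay.
import numpy as np
from scipy.stats import gamma

def delay_matrix(alpha, beta, N, m=35):
    """Column i holds the discretized gamma delay delta_j = P[j <= Z <= j+1],
    truncated at a maximum delay of m = 35 days."""
    Z = gamma(a=alpha, scale=beta)
    delta = Z.cdf(np.arange(1, m + 1)) - Z.cdf(np.arange(m))
    P = np.zeros((N, N))
    for i in range(N):
        j = np.arange(i, min(i + m, N))
        P[j, i] = delta[: j.size]
    return P

def diff_matrix(coeffs, N):
    """Rows apply the finite-difference stencil `coeffs` to consecutive entries."""
    k = len(coeffs)
    R = np.zeros((N - k + 1, N))
    for i in range(N - k + 1):
        R[i, i:i + k] = coeffs
    return R

N, cfr = 300, 0.015                      # assumed horizon and CFR
alpha, beta = 25.358, 0.802              # the optimal values reported in the text
lam2, lam4 = 0.5, 2.0                    # regularization weights

# Synthetic ground truth and forward-convolved deaths, as in Eq. (1)
t = np.arange(N)
I_true = 2e4 * np.exp(-((t - 120.0) / 30.0) ** 2)
P = delay_matrix(alpha, beta, N)
rng = np.random.default_rng(0)
D = P @ I_true * cfr + rng.normal(0.0, 1.0, N)

# Stacked least-squares system of Eq. (2)
R2 = diff_matrix([1.0, -2.0, 1.0], N)
R4 = diff_matrix([1.0, -4.0, 6.0, -4.0, 1.0], N)
M = np.vstack([P * cfr, lam2 * R2, lam4 * R4])
rhs = np.concatenate([D, np.zeros(R2.shape[0] + R4.shape[0])])
I_hat, *_ = np.linalg.lstsq(M, rhs, rcond=None)   # recovered infection counts
```

The stacked rows \(\lambda_{2}R_{2}\) and \(\lambda_{4}R_{4}\) with zero right-hand sides are what tame the instability of inverting the ill-conditioned delay matrix directly.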
We implemented the optimization procedure using the fmincon function in MATLAB, and the results are displayed in Figure 5. The daily confirmed case count and daily death count of the United States come from the Coronavirus Resource Center of Johns Hopkins University [23]. As illustrated in the figure, the case count recovered from the death count matches the confirmed case count reasonably well, particularly after the availability of widespread testing in the summer of 2020. The optimal values obtained for \(\alpha\) and \(\beta\) are \(\hat{\alpha}=25.358\) and \(\hat{\beta}=0.802\), respectively. These values indicate that the expected delay between a fatal infection and the reported death is approximately \(\hat{\alpha}\hat{\beta}=20.337\) days.

### Recovery of daily infection count

After obtaining the optimal values \(\hat{\alpha}\) and \(\hat{\beta}\), we can recover the true daily new infection count by solving the following least squares problem: \[\begin{bmatrix}\hat{P}_{N}(\hat{\alpha},\hat{\beta})\Gamma_{ifr}\\ c\lambda_{2}R_{2}\\ c\lambda_{4}R_{4}\end{bmatrix}\tilde{I}=\begin{bmatrix}\vec{D}\\ 0\\ 0\end{bmatrix}\,, \tag{4}\] where \(\Gamma_{ifr}\) is a diagonal matrix whose entries represent the time series of the infection fatality rate (IFR), and \(c\) is the average value of the IFR. Note that the IFR is typically a small value, around 0.01; without multiplying by \(c\), we would overly regularize the deconvolution problem. The solution of this least squares problem, denoted by \(\tilde{I}\), represents the time series of the recovered infection count. As seen from problem (4), the next crucial piece of data required is the time series of the infection fatality rate (IFR). The IFR is influenced by various factors, including the overall healthiness of the population, treatment methods, the age composition of cases, the vaccination rate, and the presence of variants. In the following section, we will discuss these factors in detail.

## 4 Generation of training set II: Estimation of infection fatality ratio (IFR)

The infection fatality ratio (IFR) of a state at time \(t\) can be expressed as \[\text{IFR}(t)=\text{IFR}_{b}\times\text{IFR}_{R}(t)\,,\] where \(\text{IFR}_{b}\) is the baseline IFR and \(\text{IFR}_{R}(t)\) is the relative change in the IFR over time. The baseline IFR is calibrated using well-acknowledged data, as described at the end of this section. The time series \(\text{IFR}_{R}(t)\) represents the relative changes in the IFR due to various factors, including improvements in treatment, changes in the age composition of cases, vaccination efforts, and the emergence of different variants. More precisely, \(\text{IFR}_{R}(t)\) is represented by \[\text{IFR}_{R}(t)=\text{IFR}_{T}(t)\times\text{IFR}_{A}(t)\times\text{IFR}_{V}(t)\times\text{IFR}_{O}(t)\,,\] where \(\text{IFR}_{T}(t)\) is the relative reduction of the IFR due to the improvement of treatment, \(\text{IFR}_{A}(t)\) is the relative change in the IFR due to the age composition of cases, \(\text{IFR}_{V}(t)\) is the relative reduction in the IFR due to vaccination, and \(\text{IFR}_{O}(t)\) is the reduction in the IFR due to the Omicron variant. At the baseline (July 1st, 2020), all four factors of the IFR are assumed to be equal to 1.

Figure 5: The result of the deconvolution is the yellow curve (found using \(\lambda_{2}=0.5\) and \(\lambda_{4}=2\)). The delay distribution is shown in the lower panel.

### Time dependence of IFR baseline

Since the beginning of the pandemic, significant advancements have been made in the treatment of COVID-19. 
The relative reduction in the infection fatality rate due to treatment improvements, \(\text{IFR}_{T}\), is obtained from the study [7]. We gather IFR estimates for the United States on April 15, 2020, July 15, 2020, October 15, 2020, and January 1, 2021. To estimate the values between April 15, 2020, and January 1, 2021, cubic interpolation is employed. Linear extrapolation is used for estimating \(\text{IFR}_{T}\) before April 15, 2020, and after January 1, 2021. The extrapolation is halted in March 2022, when oral antiviral treatments became widely available; linear extrapolation is no longer suitable after this time. At the end of our estimation, the final \(\text{IFR}_{T}(t)\) is approximately 0.005 before rescaling to the baseline. This estimate may be slightly conservative, as monoclonal antibody treatments became widely available in 2021; however, it is challenging to estimate the reduction in the IFR for each category of treatment method. The calibration using serological survey data, described at the end of this section, partially addresses this issue. The plot of \(\text{IFR}_{T}(t)\) is shown in Figure 6 (left).

### Change of age compositions

Unlike many other pathogens, age is the most significant risk factor for COVID-19. The disease poses a considerable risk to individuals in advanced age groups. As illustrated in Figure 6 (right), based on data from [7], the infection fatality rate (IFR) for those aged 85 and above is thousands of times higher than for younger age groups. Moreover, due to changes in public health policies and events such as nursing home outbreaks and school reopenings, the age composition of confirmed COVID-19 cases has varied significantly throughout the pandemic. Therefore, it is crucial to estimate the age composition of COVID-19 cases for each state.

Figure 6: Left: time-dependent baseline IFR. Right: IFR for different age groups in a log-linear plot.

Case rates for seven different age groups (under 20, 20-29, 30-39, 40-49, 50-64, 65-74, and 75+) in ten different Health and Human Services (HHS) regions are obtained from the CDC COVID-19 patient database [15]. For the under-20 age group, which is further divided into many subgroups in the CDC patient database, we use the case rate of ages 12-15 to represent the entire under-20 age group. This approximation has a minimal impact on the overall population IFR, since the IFR for the under-20 age group is very low. The case rates at the beginning of each month are collected and interpolated using the modified Akima algorithm [24, 25]. The advantage of the modified Akima algorithm is its ability to minimize overshooting or undershooting when the data changes dramatically. However, during the Omicron surge at the end of 2021, weekly data is utilized, as the case rates exhibit significant fluctuations during this period. The interpolated case rates for all age groups and regions are presented in Figure A9 in the Appendix. The time series \(\text{IFR}_{A}(t)\) can be obtained by calculating a weighted average of the case rates and the age-group IFRs. The age-group IFR is derived from a weighted average of the age-specific IFR values reported in the study [7] and the population of each 5-year age group based on the 2020 US Census data. However, due to the significantly higher IFR in the 85+ age group, this group is estimated separately. In this estimation, we assume that the number of individuals at each age above 85 decreases exponentially with age. By fitting the exponential distribution, we can determine the rate of this decline. 
As there are approximately 97,000 individuals in the United States aged over 100 years, the rate of the exponential distribution can be easily determined. The IFR of the 85+ age group is then obtained through a weighted average of the IFR values for each specific age provided in [7] and the estimated number of individuals at each age based on the exponential distribution fitting.

### Change of vaccination rate

To assess the relative change in the IFR due to vaccination, represented by the time series \(\text{IFR}_{V}\), we need to estimate the relative risk of cases and deaths for the vaccinated group compared to the unvaccinated group. This can be achieved by analyzing the CDC data provided in [13], which gives case rates and death rates for each age group among both the vaccinated and unvaccinated populations. The CDC vaccination data [13] covers five age groups: 18-29, 30-49, 50-64, 65-79, and 80+. Vaccination of individuals under 18 years old does not significantly impact the overall IFR due to their relatively low risk. It is important to note that the CDC data is reported on a weekly basis; to match the daily basis of our analysis, we use cubic spline interpolation to convert the data to daily values. For a specific age group \(i\), we can determine the relative risk of cases and deaths by comparing the case and death rates of the unvaccinated group to those of the vaccinated group. Let \(R_{c}(i)\) and \(R_{d}(i)\) denote the relative risk of cases and deaths, respectively, for age group \(i\). Additionally, let \(\alpha(i)\) represent the vaccination rate of age group \(i\), and let \(d_{i}\) denote the death rate of this age group. We can then determine the proportion of deaths contributed by the vaccinated group, denoted by \(\beta(i)\), \[\beta(i):=\frac{\alpha(i)}{\alpha(i)+(1-\alpha(i))R_{d}(i)}\,.\] Denoting the IFR of age group \(i\) as \(\text{IFR}(i)\), we can then calculate the relative change of the IFR due to vaccination using the expression \[\frac{\sum_{i}\frac{d_{i}}{\text{IFR}(i)}}{\sum_{i}\frac{d_{i}\beta(i)R_{d}(i)}{R_{c}(i)\text{IFR}(i)}+\sum_{i}\frac{d_{i}(1-\beta(i))}{\text{IFR}(i)}}\,.\] Performing this calculation for each day allows us to obtain the time series \(\text{IFR}_{V}\). The daily vaccination rates for each age group can be obtained from the CDC data set [13].

### Change of variants

The final step is to account for the impact of variants on the infection fatality rate (IFR). Based on the observations shown in Figure 5, the case fatality rate in the United States remained relatively stable from the summer of 2020 (when testing became widely available) until December 2021, when the Omicron variant became dominant. This suggests that the Alpha and Delta variants did not significantly alter the IFR in the United States, despite some studies indicating that the Delta variant may be intrinsically more virulent. One possible explanation is that Delta can infect some vaccinated individuals, who have a significantly lower risk of death; therefore, the overall IFR was not significantly affected by these variants. However, the Omicron variant has been found to have a substantial impact on the IFR. The estimated hazard ratio of death for the Omicron variant compared with the Delta variant ranges from 0.12 to 0.34 across various studies. Here we set the relative risk of the Omicron variant compared to the pre-Omicron era to 0.25, which is roughly the average hazard ratio reported in [26, 27, 28, 29]. 
Consequently, the relative risk \(\text{IFR}_{O}(t)\) is given by \[\text{IFR}_{O}(t)=(1-O(t))+0.25O(t)\,,\] where \(O(t)\) represents the proportion of the Omicron variant at time \(t\). The time series \(O(t)\) can be obtained through a logistic regression analysis of the sequencing data from [16].

### Calibration of state baseline IFR

After acquiring information on how the IFR changes with time, age composition, vaccination rate, and variants, it is necessary to calibrate the baseline IFR using established results from modeling and serological surveys. The baseline IFR, denoted as \(\text{IFR}_{b}\), represents the estimated IFR on July 1st, 2020. The time series of the IFR is then expressed as \[\text{IFR}(t)=\text{IFR}_{b}\times\text{IFR}_{R}(t)\,,\] where \(\text{IFR}_{R}(t)\) has already been determined by combining all relevant factors. In this paper, we use two well-recognized studies published in [7] and [17] to calibrate our baseline IFR. The study published in [7] utilizes serological surveys to estimate the IFR for each state on April 15, 2020, July 15, 2020, October 15, 2020, and January 1, 2021. On the other hand, the study described in [17] employs a combination of modeling and serological surveys to estimate the IFR and the undercounting factor (the ratio of true cases to confirmed cases) for each state on March 7, 2021. Both studies provide confidence intervals to account for the uncertainty in their estimates. The method used to estimate \(\text{IFR}_{b}\) is as follows. We start by writing \(\text{IFR}_{b}=0.00754X\), where \(0.00754\) is the estimated IFR of the United States as of January 1st, 2021, as reported in [7], and the parameter \(X\) acts as a relative prefactor. Next, we use \(\text{IFR}_{b}=0.00754\) to estimate the IFR on January 1st, 2021 and March 7th, 2021, and the undercounting factor on March 7th, 2021. By comparing these estimated values with the data provided in [7] and [17], we can determine the likelihood of \(X\) for each state. To estimate the probability density of \(X\), we employ a Monte Carlo-like approach. We assume that the two IFRs and the undercounting factor are normally distributed. The mean and variance of each normal distribution are derived from the estimated values and their respective confidence intervals. This approach yields an estimated probability density function for \(X\). For example, if the IFR of a state on January 1st, 2021, computed using the baseline IFR, is denoted as \(r_{1}\), and the normal random variable representing the IFR of this state, based on [7], has mean \(\mu\) and variance \(\sigma^{2}\), then this data suggests that \(X\) follows the normal probability density function \(N(\mu/r_{1},(\sigma/r_{1})^{2})\). Let \(f_{1}\), \(f_{2}\), and \(f_{3}\) denote the probability density functions obtained from the IFR in [7], the IFR in [17], and the undercounting factor in [17], respectively. The likelihood of \(X\) is represented by the rescaled sum of these probability density functions, i.e., \(f_{1}(x)+f_{2}(x)+f_{3}(x)\). The estimated baseline IFR, denoted as \(\text{IFR}_{b}\), can then be calculated as \[\text{IFR}_{b}=0.00754\times\frac{1}{3}\int_{-\infty}^{\infty}x(f_{1}(x)+f_{2}(x)+f_{3}(x))\text{d}x\,,\] which is equal to \(0.00754\) multiplied by the expectation of \(X\). Additionally, the lower and upper bounds of \(\text{IFR}_{b}\) are determined as the \(0.05\) and \(0.95\) percentiles of the rescaled probability density function \(\frac{1}{3}(f_{1}(x)+f_{2}(x)+f_{3}(x))\), respectively. 
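The factor formulas of this section lend themselves to a compact implementation. The following is a minimal sketch in Python with made-up inputs: the vaccination factor implements the expression displayed in the previous subsection, the Omicron factor implements \(\text{IFR}_{O}(t)\) above, and the values assigned to \(\text{IFR}_{T}\), \(\text{IFR}_{A}\), the risk ratios, and the death rates are purely illustrative assumptions.

```python
# Sketch: assembling IFR(t) = IFR_b * IFR_T * IFR_A * IFR_V * IFR_O (made-up inputs).
import numpy as np

def vaccination_factor(alpha, R_c, R_d, d, ifr_age):
    """Relative IFR change from vaccination for one day (Section 4.3).
    alpha: vaccination rate per age group; R_c, R_d: relative case/death risks
    of the unvaccinated vs. the vaccinated; d: death rate per age group;
    ifr_age: per-age-group IFR."""
    beta = alpha / (alpha + (1.0 - alpha) * R_d)  # share of deaths, vaccinated
    num = np.sum(d / ifr_age)
    den = (np.sum(d * beta * R_d / (R_c * ifr_age))
           + np.sum(d * (1.0 - beta) / ifr_age))
    return num / den

def omicron_factor(O_t, hazard_ratio=0.25):
    """IFR_O(t) = (1 - O(t)) + 0.25 O(t), with O(t) the Omicron proportion."""
    return (1.0 - O_t) + hazard_ratio * O_t

# Illustrative values for a single day and five age groups
alpha = np.array([0.55, 0.65, 0.75, 0.85, 0.90])      # vaccination rates
R_c = np.array([1.8, 1.9, 2.0, 2.1, 2.2])             # relative case risks
R_d = np.array([5.0, 7.0, 9.0, 11.0, 13.0])           # relative death risks
d = np.array([1e-6, 5e-6, 3e-5, 2e-4, 1e-3])          # death rates
ifr_age = np.array([2e-4, 1e-3, 5e-3, 2.5e-2, 1e-1])  # age-group IFRs

ifr_b = 0.00754                 # US baseline reported in [7], before the X prefactor
ifr_T, ifr_A = 0.8, 0.9         # assumed treatment and age-composition factors
ifr_today = (ifr_b * ifr_T * ifr_A
             * vaccination_factor(alpha, R_c, R_d, d, ifr_age)
             * omicron_factor(O_t=0.6))
```

Repeating this evaluation for each day yields the diagonal entries of \(\Gamma_{ifr}\) used in problem (4).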
In states with limited case counts or with a significant number of COVID-19 cases among people of advanced age, the serological survey may tend to overestimate the IFR. This can result in the recovered true cases being smaller than the confirmed cases during certain time periods. To address this issue, we introduce an additional upper-bound correction to ensure that the recovered true cases are not smaller than the confirmed cases. We use the criterion that, after a sufficient number of cases, the 50-day moving average of the recovered true cases should be greater than that of the confirmed cases. This provides an upper bound for the prefactor \(X\). If the calibrated \(X\) from the likelihood function exceeds the upper bound, we set \(X\) to the upper-bound value and adjust the lower bound of the confidence interval. The lower bound is reset to \(0.6891\) (the average ratio of the lower bound of the confidence interval to the estimated IFR in [7]) multiplied by \(X\). This additional correction is applied in a few states, such as Virginia, Rhode Island, and Massachusetts. Regarding the estimation of the IFR in Vermont, there is a significant difference between the estimates in [7] and [17]. We observe that the upper bound of \(X\) is close to the estimate in [17]. Taking into account the healthcare conditions, the estimated IFR for Vermont in [7] (which is \(2.498\%\)) appears unreasonably high. Therefore, we only use the estimates from [17] to calculate the likelihood of \(X\) for Vermont. After calibration, we obtain the time series of the IFR, denoted as \(\Gamma_{ifr}\), for 10 selected states, shown in Figure 7. The time series of the IFR for all states, including Washington DC, can be found in the Appendix. After obtaining the time series of the IFR, \(\Gamma_{ifr}\), we proceed to solve the least squares problem (4) for each state, including Washington DC. The results for the 10 selected states are plotted in Figure 8. The time series of all 51 states, spanning from February 29, 2020, to March 1, 2022, are provided in the Appendix. These 51 time series serve as the output data of the training set. It is evident that, in most cases, the trend of the recovered true cases aligns with that of the confirmed cases.

Figure 7: Time series of the calibrated IFR of each state.

## 5 Artificial neural network training

### Data Normalization

As previously mentioned, the objective of the neural network approximation is to find a function \[I_{t}=f(I_{c},\boldsymbol{\lambda},\boldsymbol{\theta})\,,\] where \(I_{t}\) and \(I_{c}\) represent the normalized recovered true case rate and confirmed case rate, respectively. The parameters \(\boldsymbol{\theta}\) correspond to the neural network parameters, while \(\boldsymbol{\lambda}=(\lambda_{1},\lambda_{2},\lambda_{3})\) represents the input data normalized from the testing volume, testing rate, and population density. To ensure proper normalization of the data, we apply the following nonlinear transformations to the input data. Let Pop denote the local (state) population, \(C_{c}\) the daily confirmed case count, \(C_{t}\) the recovered true case count, \(T_{v}\) the daily test volume, and \(d_{en}\) the local population density. 
We utilize the following transformations: \[I_{c}=50\sqrt{C_{c}/\text{Pop}}\,,\quad I_{t}=25\sqrt{C_{t}/\text{Pop}}\,,\quad\lambda_{1}=0.05\sqrt[3]{T_{v}}\,,\quad\lambda_{2}=200T_{v}/\text{Pop}\,,\quad\lambda_{3}=0.2\log d_{en}\,.\] This results in a training data set \(\{(X_{i},y_{i})\}_{i=1}^{N}\), where \(X_{i}=(I_{c}(i),\lambda_{1}(i),\lambda_{2}(i),\lambda_{3}(i))\) and \(y_{i}=I_{t}(i)\). Figure 9 displays the distribution of the data in the training data set.

Figure 8: Time series of the recovered true cases and the undercounting factor of 10 selected states. CC: confirmed cases. RC: recovered true cases. UF: undercounting factor.

### Neural network architecture and training

In this paper, we employ a feed-forward neural network composed of six layers to approximate the true case count \(I_{t}\). The network architecture consists of layers with widths of 64, 128, 128, 128, 64, and 16, respectively. Each layer incorporates a sigmoid activation function and includes an \(L^{2}\) regularization term with a magnitude of 0.005. We choose the sigmoid activation function instead of ReLU because the first-order derivative of ReLU is discontinuous; since we require the first-order derivative for regularization during training, the sigmoid activation function proves to be a suitable choice. The loss function used in this study comprises two components: the classical mean squared error (\(L_{1}\)) and a penalty term (\(L_{2}\)) for regularization. The \(L_{1}\) term quantifies the mean squared difference between the neural network prediction \(f(X_{i})\) and the observed daily true case count \(y_{i}\): \[L_{1}=\frac{1}{N}\sum_{i=1}^{N}\left(f\left(X_{i}\right)-y_{i}\right)^{2}\,.\] However, due to the limited coverage of the observed data in the 4D domain of \(f\), training solely on \(L_{1}\) is insufficient to effectively capture the relationship between true cases, confirmed cases, and testing data. For instance, when varying the confirmed cases and testing data from a specific day, there is a noticeable discrepancy in the neural network's predictions. This is illustrated in Figure 10, where the predicted cases do not exhibit a monotonic increase with confirmed cases or a monotonic decrease with testing volume. Moreover, the training results lack robustness, as demonstrated in Figure 10, where two different training runs produce distinct true case count profiles for Massachusetts and New York.

Figure 9: Distribution of the normalized data.

Therefore, to address these issues, we borrow the idea of the physics-informed neural network and introduce a _biology-informed regularization_ based on three principles. First, the true case count must be greater than the confirmed case count. This is enforced by penalizing predictions with \(2f(X)<I_{c}\); note that \(I_{t}=25\sqrt{C_{t}/\text{Pop}}\) while \(I_{c}=50\sqrt{C_{c}/\text{Pop}}\), so \(C_{t}\geq C_{c}\) is equivalent to \(2f\geq I_{c}\) in the normalized variables. Second, the partial derivative of the true case count with respect to the confirmed case count must be positive, because a higher confirmed case count implies a higher true case count; we incorporate this constraint by penalizing negative partial derivatives. Lastly, we aim to capture the negative relationship between the true case count and the testing effort, because an increased testing volume, given a fixed number of confirmed cases, suggests a lower number of true infections. 
To enforce this relationship, we introduce a term that penalizes positive partial derivatives of the true case count with respect to the testing rate. Figure 10: Predicted case with varying confirmed case and testing volume without using regularization. Input data is identical to that of Figure 2. Two rows are from the prediction of two different training results. By incorporating these penalty terms, the overall regularization term \(L_{2}\) becomes \[L_{2}=\frac{1}{N}\sum_{i=1}^{N}\left[5\max\left(I_{c}(i)-2f(X_{i}),0\right)+8\max \left(-\frac{\partial f}{\partial I_{c}}(X_{i}),0\right)+12\max\left(\frac{ \partial f}{\partial\lambda_{1}}(X_{i}),0\right)\right].\] To compute the partial derivative terms in \(L_{2}\), we can utilize backpropagation, which is a built-in function of TensorFlow (tf.gradient). Since the training set derived from real-world data may not sufficiently cover the entire input domain, and the loss term \(L_{2}\) is independent of the output, we can address this issue by uniformly sampling an additional set of \(M=50000\) points within the 4D box that contains the original training set \(\{(X_{i},y_{i})\}_{i=1}^{N}\). These new points, denoted as \(\{Z_{j}\}_{j=1}^{M}\), provide a more representative distribution as they uniformly cover the 4D domain. Training the penalty term \(L_{2}\) using the training set \(\{Z_{j}\}_{j=1}^{M}\) will help the neural network "learn" and properly capture the three constraints described above. To address the differences in scales between \(L_{1}\) and \(L_{2}\) and the fact that they are trained on different sets, a training strategy inspired by the "Alternating Adam" approach proposed in [30] is employed. During each training step, a random batch is sampled from the original training set \(\{(X_{i},y_{i})\}_{i=1}^{N}\), and the Adam optimizer is applied to optimize \(L_{1}\) with respect to this batch. Then, another random batch is sampled from the training set \(\{Z_{j}\}_{j=1}^{M}\), and the Adam optimizer is used to optimize \(L_{2}\) with respect to this batch. The learning rates for the Adam optimizers are set to \(0.0003\) for \(L_{1}\) and \(0.0005\) for \(L_{2}\). This approach effectively trains both loss functions simultaneously, regardless of their scales. The Adam optimizer's inherent scaling-invariance ensures that the optimization is performed properly for each loss function [31]. To monitor the training progress and prevent overfitting, a testing set is randomly selected, consisting of \(20\%\) of the original training set \(\{(X_{i},y_{i})\}_{i=1}^{N}\). The losses on both the training set and the testing set are observed during training. The training process stops after \(25\) epochs to avoid overtraining. Each epoch involves training both loss functions using \(458\) batches with a batch size of \(32\). ## 6 Data assimilation of SEIR model ### Recover S,E,I,R trajectories. Consider the SEIR model, which describes the dynamics of a population in terms of susceptible (\(S\)), exposed (\(E\)), infected (\(I\)), and recovered (\(R\)) case count, \[\frac{dS}{dt}=-\beta SI+\delta R\] \[\frac{dE}{dt}=\beta SI-\alpha E\] \[\frac{dI}{dt}=\alpha E-\gamma I\] \[\frac{dR}{dt}=\gamma I-\delta R\,. \tag{5}\] \(\alpha\) represents the reciprocal of the latent period, \(\gamma\) represents the recovery rate, and \(\delta\) represents the death rate. The parameter \(\beta\) is the time-dependent infection rate, which controls the spread of the infection. 
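For concreteness, the following is a minimal sketch of integrating the SEIR system (5) with a classical RK4 step, as used later in the optimization. The function and variable names are ours, the populations are treated as fractions of the state population, and the constant infection rate in the example is purely illustrative (in the actual fitting, \(\beta(t)\) is time-dependent).

```python
import numpy as np

def seir_rhs(y, beta, alpha, gamma, delta):
    """Right-hand side of the SEIR system (5); y = [S, E, I, R] as population fractions."""
    S, E, I, R = y
    return np.array([
        -beta * S * I + delta * R,   # dS/dt
         beta * S * I - alpha * E,   # dE/dt
         alpha * E - gamma * I,      # dI/dt
         gamma * I - delta * R,      # dR/dt
    ])

def rk4_step(y, dt, *args):
    """One classical fourth-order Runge-Kutta step."""
    k1 = seir_rhs(y, *args)
    k2 = seir_rhs(y + 0.5 * dt * k1, *args)
    k3 = seir_rhs(y + 0.5 * dt * k2, *args)
    k4 = seir_rhs(y + dt * k3, *args)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: one year of daily steps, using the initial values and rates from the text.
y = np.array([1.0 - 4e-4, 2e-4, 2e-4, 0.0])         # [S0, E0, I0, R0]
alpha, gamma, delta = 1.0 / 2, 1.0 / 7, 1.0 / 180   # latent, recovery, immunity-loss rates
trajectory = [y]
for day in range(365):
    y = rk4_step(y, 1.0, 0.3, alpha, gamma, delta)  # beta fixed at an illustrative 0.3
    trajectory.append(y)
```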
In section 3, we obtained the daily new infected cases, denoted by \(I_{n}(t)\), where \(t\) is measured in days. Note that the SEIR model (5) satisfies mass-action dynamics. By the theory of mass-action networks [32, 33], the ODE (5) can be seen as the infinite population limit of a countable state Markov process that describes the exposure, infection, and recovery of individuals. Since the jumping time between two states of a continuous-time Markov process follows an exponential distribution, we can treat the daily new infection cases as a convolution of the daily new exposed cases and an exponential distribution with rate \(\alpha\). Based on this understanding, we can estimate the daily new exposed cases, denoted by \(E_{n}(t)\), by backcasting from \(I_{n}(t)\) using an exponentially distributed latent period with mean \(1/\alpha\). Consider a time window of 35 days over which the daily new infection cases are transferred from the daily new exposed cases. We introduce the density function \(f_{m}\) for the exponential distribution \(m\) days from the current day, defined as \(f_{m}=\alpha e^{-\alpha m}\). Let \(M=35\) denote the time window. The daily new infection cases at time \(t=k\) can be expressed as \[I_{n}(k)=\left\{\begin{aligned} &\sum_{j=0}^{k}f_{k-j}E_{n}(j)\,,& k\leq M\\ &\sum_{j=0}^{M}f_{M-j}E_{n}(k+j-M)\,,& k>M\end{aligned}\right.\,. \tag{6}\] To compactly represent this system, let us define \(P_{M}(\alpha)\) as the exponential distribution matrix \[P_{M}(\alpha)=\begin{bmatrix}f_{1}&&&&\\ f_{2}&f_{1}&&\\ \vdots&\vdots&\ddots&&\\ f_{M}&f_{M-1}&\cdots&\ddots&&\\ &\ddots&\ddots&\ddots&\ddots&\\ &&&& f_{M}&f_{M-1}&\cdots&f_{1}\end{bmatrix}\in\mathbb{R}^{N\times N}\,.\] Here, \(N\) represents the number of time points. Let \(\vec{I}_{n}\) and \(\vec{E}_{n}\) denote the daily new infected vector and the daily new exposed vector, respectively: \[\vec{I}_{n}=[I_{n}(0),I_{n}(1),\cdots]^{T}\in\mathbb{R}^{N},\] \[\vec{E}_{n}=[E_{n}(0),E_{n}(1),\cdots]^{T}\in\mathbb{R}^{N}.\] We can rewrite the system (6) in matrix form as \[\vec{I}_{n}=P_{M}(\alpha)\vec{E}_{n}\,.\] Considering the ill-conditioned nature of the matrix \(P_{M}(\alpha)\), regularization is necessary. To achieve this, we define the matrices \(R_{2}\) and \(R_{4}\) as described in Section 3 to regularize the second- and fourth-order derivatives of the entries of the vector \(\vec{E}_{n}\). Introducing regularization parameters \(\lambda_{2}=1.5\) and \(\lambda_{4}=3\), we construct the matrix \(D\) as follows: \[D=\begin{bmatrix}\lambda_{2}R_{2}\\ \lambda_{4}R_{4}\end{bmatrix}\,.\] To obtain the daily new exposed cases \(E_{n}(t)\) while incorporating regularization, we solve the following matrix equation \[\begin{bmatrix}P_{M}(\alpha)\\ D\end{bmatrix}\vec{E}_{n}=\begin{bmatrix}\vec{I}_{n}\\ 0\end{bmatrix}\,.\]
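A minimal sketch of this regularized deconvolution is given below. It assumes that \(R_{2}\) and \(R_{4}\) are the standard second- and fourth-order finite-difference matrices (the text defines them in Section 3), and the helper names are ours.

```python
import numpy as np

def delay_matrix(alpha, N, M=35):
    """Banded lower-triangular P_M(alpha) with entries f_m = alpha * exp(-alpha * m)."""
    P = np.zeros((N, N))
    for k in range(N):
        for j in range(max(0, k - M + 1), k + 1):
            P[k, j] = alpha * np.exp(-alpha * (k - j + 1))  # f_{k-j+1}
    return P

def diff_matrix(N, order):
    """Finite-difference operator of a given order (assumed form of R2/R4)."""
    D = np.eye(N)
    for _ in range(order):
        D = np.diff(D, axis=0)
    return D

def recover_exposed(I_n, alpha, lam2=1.5, lam4=3.0, M=35):
    """Solve [P; D] E_n = [I_n; 0] in the least-squares sense."""
    N = len(I_n)
    P = delay_matrix(alpha, N, M)
    D = np.vstack([lam2 * diff_matrix(N, 2), lam4 * diff_matrix(N, 4)])
    A = np.vstack([P, D])
    b = np.concatenate([I_n, np.zeros(D.shape[0])])
    E_n, *_ = np.linalg.lstsq(A, b, rcond=None)
    return E_n
```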
Given the daily new infected cases and daily new exposed cases, we can recover the trajectories of the four case variables: susceptible cases \(\hat{S}(t)\), exposed cases \(\hat{E}(t)\), infection cases \(\hat{I}(t)\), and recovered cases \(\hat{R}(t)\). To determine the total infected case count at time \(t=k\), we sum the daily new infection cases at the current time \(t=k\) with the remaining total infection cases from the previous time \(t=k-1\). By the law of mass-action, each day a fraction \(\exp(-1/\gamma)\) of the infected individuals remain infected, so the number of newly recovered individuals is \([1-\exp(-1/\gamma)]\hat{I}\). This gives a recursive relation \[\hat{I}(k)=I_{n}(k)+\exp(-1/\gamma)\hat{I}(k-1)\,.\] Similarly, for the total exposed case count, we consider the daily new exposed cases at time \(t=k\) and add the remaining total exposed cases from the previous time \(t=k-1\). This gives a recursive relation \[\hat{E}(k)=E_{n}(k)+\exp(-1/\alpha)\hat{E}(k-1)\,.\] The total recovered cases at time \(t=k\) are determined by a recursive relation that adds the transferred infection cases from time \(t=k-1\), based on the recovery rate \(\gamma\), and the remaining total recovered cases from time \(t=k-1\), based on the rate of immunity loss \(\delta\). Additionally, we incorporate the vaccination cases in our model. The modified equations for the recovered cases and susceptible cases are as follows: \[\frac{dS}{dt} =-\beta SI+\delta R-\text{vaccination}\] \[\frac{dR}{dt} =\gamma I-\delta R+\text{vaccination}\] For the recovered case count, we estimate \(\hat{R}(k)\) using the following equation: \[\hat{R}(k)=[1-\exp(-1/\gamma)]\hat{I}(k-1)+\exp(-1/\delta)\hat{R}(k-1)+V_{n}(k)\,,\] where \(V_{n}(t)\) represents the daily new vaccination count. This recursive relation considers the contributions from the transferred infected cases, the remaining recovered cases, and the new vaccination count at time \(t=k\). Finally, the total susceptible cases \(\hat{S}(t)\) can be obtained by subtracting the estimated values of infection cases, exposed cases, and recovered cases from the total population: \[\hat{S}(t)=\text{Pop}-\hat{I}(t)-\hat{E}(t)-\hat{R}(t)\,,\] where Pop is the total population. ### Recover Time-dependent Infection Rate. We have obtained the daily new exposed cases \(E_{n}(t)\), as well as the total infection cases \(\hat{I}(t)\) and total susceptible cases \(\hat{S}(t)\). According to the SEIR model equation (5), the daily new exposed cases and the infection rate are related by \[E_{n}=\beta SI.\] We can therefore calculate the empirical transmission rate \(\hat{\beta}\) directly using the following formula: \[\hat{\beta}(t)=\frac{E_{n}(t)}{\hat{S}(t)\hat{I}(t)}.\] However, due to over-smoothing during the data refining process, the recovered daily new death cases may have lost their growing trend in the early period. This can lead to an unrealistic excessive growth in the calculated value of \(\hat{\beta}\). To address this issue, we perform an exponential fit to the early daily new death cases in order to regain their growth pattern. We then replace the excessive growth data in \(\hat{\beta}\) with the mean value of the first month.
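The recursions of Subsection 6.1 and the empirical rate \(\hat{\beta}(t)\) can be sketched as follows. The retention factors \(\exp(-1/\gamma)\), \(\exp(-1/\alpha)\), and \(\exp(-1/\delta)\) mirror the expressions in the text verbatim; the function name and the guard against division by zero are ours.

```python
import numpy as np

def recover_trajectories(I_n, E_n, V_n, pop, alpha, gamma, delta):
    """Accumulate total E, I, R (and S) trajectories from daily new counts,
    following the recursions of Subsection 6.1 exactly as written."""
    N = len(I_n)
    I_hat, E_hat, R_hat = np.zeros(N), np.zeros(N), np.zeros(N)
    I_hat[0], E_hat[0] = I_n[0], E_n[0]
    for k in range(1, N):
        I_hat[k] = I_n[k] + np.exp(-1.0 / gamma) * I_hat[k - 1]
        E_hat[k] = E_n[k] + np.exp(-1.0 / alpha) * E_hat[k - 1]
        R_hat[k] = ((1.0 - np.exp(-1.0 / gamma)) * I_hat[k - 1]
                    + np.exp(-1.0 / delta) * R_hat[k - 1] + V_n[k])
    S_hat = pop - I_hat - E_hat - R_hat
    # Empirical transmission rate of Subsection 6.2, guarded against division by zero.
    beta_hat = E_n / np.maximum(S_hat * I_hat, 1e-12)
    return S_hat, E_hat, I_hat, R_hat, beta_hat
```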
Next, we employ a three-step process to smooth the empirical transmission rate \(\hat{\beta}(t)\). For simplicity, we will reuse the notation \(\hat{\beta}\). First, we apply the moving average method over a time window of one month before and after each point to reduce random fluctuations in the empirical \(\hat{\beta}(t)\). If there are fewer than 60 data points available from \(\hat{\beta}(t)\) near the endpoints, the time window is truncated accordingly. Second, we perform a local quadratic regression on \(\hat{\beta}(t)\) to smooth out discontinuities and further reduce the impact of noise. This regression helps to create a smoother and more continuous representation of the data. Third, we apply a discrete cosine transform (DCT) to \(\hat{\beta}(t)\) and discard the higher-frequency coefficients to filter out noise components. The DCT is defined as follows: \[\hat{\beta}_{k}=n(k)\sum_{t=1}^{N}\hat{\beta}(t)\cos\Big{(}\frac{\pi}{2N}(2t-1)(k-1)\Big{)},\] where \(k=1,2,...,N\) and \[n(k)=\begin{cases}\sqrt{1/N},&\text{if }k=1\\ \sqrt{2/N},&\text{otherwise}\end{cases}.\] During the optimization process, we save the smoothed \(\hat{\beta}(t)\) in the form of its DCT coefficients. ### Optimize Initial Conditions and Parameters. Here, we utilize a nonlinear regression approach to fit the SEIR model to the available data. We introduce a time-dependent function for the infection rate \(\beta(t)\) in Equation (5). In the optimization problem, we make the assumption that the DCT of \(\beta(t)\) only contains eight nonzero modes, which we denote as \(\hat{\beta}_{1},\hat{\beta}_{2},...,\hat{\beta}_{8}\). Additionally, the initial conditions \([S_{0},E_{0},I_{0},R_{0}]\) and the parameters \([\alpha,\gamma,\delta]\) in Equation (5) are unknown. Consequently, we have a total of 15 parameters that need to be fitted using the available data. The loss function in the regression problem consists of three components: 1. First, we aim to minimize the discrepancies between the model-produced trajectories \(S,E,I,R\) and the corresponding trajectories from the data, denoted as \(\hat{S},\hat{E},\hat{I},\hat{R}\). Since the scales of \(S\) and \(R\) are larger than those of \(E\) and \(I\), we assign a higher weight to the \(L^{2}\) error of \(E\) and \(I\). The first loss function is defined as: \[\text{Loss}_{1}= \sqrt{\sum_{k=1}^{N}\left|S(k)-\hat{S}(k)\right|^{2}}+20\sqrt{ \sum_{k=1}^{N}\left|E(k)-\hat{E}(k)\right|^{2}}\] \[+20\sqrt{\sum_{k=1}^{N}\left|I(k)-\hat{I}(k)\right|^{2}}+\sqrt{ \sum_{k=1}^{N}\left|R(k)-\hat{R}(k)\right|^{2}}\,.\] 2. Second, we aim to minimize the sum of squared errors between the solved parameters \(\alpha,\gamma,\delta\) and their assumed values. Since the parameters have different magnitudes, we modify their form to balance their weighting. The second loss function is defined as: \[\text{Loss}_{2}=20\left(\alpha-\frac{1}{5}\right)^{2}+5\left(\frac{1}{\gamma}-7\right)^{2}+\left(\frac{1}{\delta}-180\right)^{2}\,.\] 3. Third, we aim to maximize the correlation coefficient between the mobility \(m(t)\) and the infection rate, denoted as \(\rho_{m,\beta}\), while minimizing the errors between the produced \(\beta\) in terms of DCT coefficients and the smoothed \(\hat{\beta}\) calculated from the data. The time series of mobility is smoothed from Google map data [34]. The third loss function is defined as: \[\text{Loss}_{3}=(1-\rho_{m,\beta})+\sqrt{\sum_{k=1}^{8}\left|\beta(k)-\hat{\beta}_{k}\right|^{2}}\,.\] The final loss function is a combination of the three components \[\text{LOSS}=\text{Loss}_{1}+0.5\text{Loss}_{2}+10\text{Loss}_{3}\,.\] To obtain the solved trajectories \(S,E,I,R\), we employ an RK4 solver and choose the initial conditions and parameters as follows: \(E_{0}=I_{0}=2\times 10^{-4}\), \(R_{0}=0\), \([\alpha_{0},\gamma_{0},\delta_{0}]=[1/2,1/7,1/180]\), and \(\beta_{0}(t)=\hat{\beta}(t)\).
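A sketch of assembling the three loss components into the final objective is shown below; it mirrors the formulas above, with \(\rho_{m,\beta}\) computed as a Pearson correlation. All argument names are ours, and `beta_dct`/`beta_hat_dct` are assumed to hold the first eight DCT coefficients of the produced and smoothed infection rates.

```python
import numpy as np

def total_loss(S, E, I, R, S_hat, E_hat, I_hat, R_hat,
               alpha, gamma, delta, beta_dct, beta_hat_dct, mobility, beta_t):
    """Weighted sum of the three loss components of Subsection 6.3."""
    loss1 = (np.linalg.norm(S - S_hat) + 20 * np.linalg.norm(E - E_hat)
             + 20 * np.linalg.norm(I - I_hat) + np.linalg.norm(R - R_hat))
    loss2 = (20 * (alpha - 0.2) ** 2 + 5 * (1 / gamma - 7) ** 2
             + (1 / delta - 180) ** 2)
    rho = np.corrcoef(mobility, beta_t)[0, 1]   # correlation of mobility and beta(t)
    loss3 = (1 - rho) + np.linalg.norm(beta_dct[:8] - beta_hat_dct[:8])
    return loss1 + 0.5 * loss2 + 10 * loss3
```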
### Change of Variants. Now, we incorporate the change of variants into the model. We introduce two parameters, \(\delta_{I}\) and \(\delta_{L}\), which represent the factors contributing to the increase in the infection rate and the loss of immunity due to variants, respectively. The spread of new variants can cause recovered cases to lose their immunity and transition back into the susceptible group. Additionally, depending on the nature of the new variants, the infection rate will increase to some extent. Denote the rate of percentage change of a new variant by \(Q(t)\). The equations for \(S\) and \(R\) in the SEIR model (5) become \[\frac{dS}{dt} =-\tilde{\beta}SI+\delta R-\text{vaccination}+\delta_{L}QR\] \[\frac{dR}{dt} =\gamma I-\delta R+\text{vaccination}-\delta_{L}QR\,, \tag{7}\] where the term \(\delta_{L}QR\) accounts for the transferred cases due to the loss of immunity, and \(\tilde{\beta}\) is the new infection rate when considering the variant, which satisfies \[\tilde{\beta}=(1+\delta_{I}Q)\beta\,.\] During the time period from February 29, 2020, to March 1, 2022, two new variants, Delta and Omicron, emerged and became dominant for a certain period. To incorporate the effects of these variants, we introduce four new parameters: \(\delta_{L}^{d}\), \(\delta_{I}^{d}\), \(\delta_{L}^{o}\), and \(\delta_{I}^{o}\). These parameters represent the loss of immunity and the increase in the infection rate due to the Delta variant and the Omicron variant, respectively. Additionally, we use \(Q_{d}(t)\) and \(Q_{o}(t)\) to denote the change rate of the variant percentage for the Delta variant and the Omicron variant, respectively. To calculate the recovered cases that reflect the presence of the Delta and Omicron variants at time \(t=k\), we modify the recovered cases \(\hat{R}(k)\) as follows: \[\tilde{R}(k)=\hat{R}(k)-\delta_{L}^{d}Q_{d}(k)\hat{R}(k-1)-\delta_{L}^{o}Q_{o}(k)\hat{R}(k-1)\,.\] Here, \(\hat{R}(k)\) is the total recovered cases at time \(t=k\) as described in Subsection 6.1. In other words, the number of individuals who lose immunity each day due to the change of variants is \(\delta_{L}^{d}Q_{d}(k)\hat{R}(k-1)+\delta_{L}^{o}Q_{o}(k)\hat{R}(k-1)\). Furthermore, to account for the new variants in the infection rate at time \(t=k\), we modify the baseline infection rate \(\beta(k)\) as follows \[\tilde{\beta}(k)=\left(1+\delta_{I}^{d}Q_{d}(k)+\delta_{I}^{o}Q_{o}(k)\right)\beta(k).\] We introduce the new parameters \(\delta_{I}^{d}\), \(\delta_{L}^{d}\), \(\delta_{I}^{o}\), and \(\delta_{L}^{o}\) into the optimization process. Since we do not have reference values for these parameters, we exclude them from the loss function and consider them as additional unknown parameters to be fitted with the data. In the optimization process, we choose initial values for these parameters as \([\delta^{d}_{I_{0}},\delta^{d}_{L_{0}},\delta^{o}_{I_{0}},\delta^{o}_{L_{0}}]=[0.2,0.05,0.6,0.4]\). These initial values serve as starting points for the optimization algorithm to find the optimal values for these parameters. We use Massachusetts COVID-19 data as an example to show the result of model fitting. After performing the optimization on the initial conditions and parameters, we obtain a fitted SEIR model with optimized parameters. The optimal parameters are shown in Table 1. We can see that a very significant proportion of the population lost immunity when the Omicron variant became dominant. A comparison of observed data and model-generated data is shown in Figure 11. The time series of observed \(\beta(t)\) and fitted \(\beta(t)\) is also shown in the bottom panel of Figure 11.
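The two variant adjustments can be sketched compactly as follows, with `Q_d` and `Q_o` the daily change rates of the variant percentages and the four \(\delta\) parameters passed in; the vectorized indexing is ours.

```python
import numpy as np

def apply_variants(beta, R_hat, Q_d, Q_o, dI_d, dL_d, dI_o, dL_o):
    """Adjust the infection rate and recovered counts for Delta/Omicron prevalence changes."""
    beta_tilde = (1 + dI_d * Q_d + dI_o * Q_o) * beta
    R_tilde = R_hat.copy()
    R_tilde[1:] = (R_hat[1:] - dL_d * Q_d[1:] * R_hat[:-1]
                   - dL_o * Q_o[1:] * R_hat[:-1])
    return beta_tilde, R_tilde
```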
## 7 Conclusion and future work In this paper we use the backcasting method to estimate the true daily new case count of each state in the United States. The idea is that recovering the true daily new cases can be seen as a de-convolution problem, because the daily new death count is the convolution of a delay distribution and the product of the daily new case count and the infection fatality ratio (IFR). We first use case and death counts from the whole United States to estimate the delay distribution from case to death. Then many factors, including age, vaccination, and variants, are taken into consideration for the time-dependent infection fatality ratio (IFR). The resultant estimated true case count is then used as the output training data of an artificial neural network to investigate the relation among testing data, confirmed case count, and true case count. It can also be used to provide a real-time true case count before seeing the death count. This becomes more and more important because of two present-day factors: (1) the IFR has become harder to estimate due to factors like hybrid immunity, new variants, and oral antiviral treatments; (2) fewer people now choose to do PCR tests, which makes the estimation of the current situation of COVID-19 increasingly difficult. Despite the progress made in this paper, we should acknowledge that some additional work is necessary for our artificial neural network estimator to give an accurate estimate of the true daily new case count of COVID-19. This is because the home antigen test became widely available in late spring 2022. As a result, a significant proportion of daily new COVID cases from low-risk groups were not reported to the healthcare agencies because many people simply test themselves at home. This factor has not been addressed in our neural network estimator. We expect an additional undercounting factor of about \(1.5-3\) in mid-2022, and much more in 2023, due to the wide availability of the home antigen test. \begin{table} \begin{tabular}{l l l l l l l} \hline \(\alpha\) & \(\gamma\) & \(\delta\) & \(\delta^{d}_{I}\) & \(\delta^{d}_{L}\) & \(\delta^{o}_{I}\) & \(\delta^{o}_{L}\) \\ \hline 0.3820 & 0.1376 & 0.0056 & 0.0444 & 8.4629e-5 & 1.3374e-4 & 0.7031 \\ \hline \end{tabular} \end{table} Table 1: Optimal Parameters As discussed before, today's IFR of COVID-19 is more difficult to estimate due to many factors. In addition, the death count has become less trustworthy due to higher immunity levels and less virulent variants. Back in 2020, the vast majority of COVID-19 deaths were indeed caused by COVID-19, but this picture has become less clear. The hospitalization data is no better. For example, on June 19th, there were 160 patients hospitalized with COVID-19 in Massachusetts, but only 59 of the 160 patients were primarily hospitalized for COVID-19 [35]. Therefore, our future work focuses on the use of wastewater viral RNA data. Since late 2020, many cities and states have tested the COVID-19 RNA concentration in their wastewater regularly. It is known that wastewater surveillance is an important tool to infer the COVID-19 transmission dynamics [36, 37]. This provides an invaluable bridge that connects the days when the daily death count and IFR are more trustworthy (in 2020 and 2021) and the days when wastewater viral RNA data is available (after mid-2021). Figure 11: Upper and middle panels: Observed trajectories (Legend: Recovered S/E/I/R) and SEIR model produced trajectories (Legend: RK4 S/E/I/R) of the S, E, I, and R populations. Lower panel: A comparison of observed, smoothed, and fitted time series of the infection rate \(\beta(t)\).
This, plus the data assimilation of the SEIR model, can give a more accurate estimate of the current COVID-19 cases and also make predictions in the near future. Acknowledgments. We would like to thank REU students Ziyan Zhao and Jessica Hu for their help in data processing. ## Declarations Yao Li and Ning Jiang are partially supported by NSF DMS-1813246 and DMS-2108628. Charles Kolozsvary is partially supported by the REU part of NSF DMS-1813246 and NSF DMS-2108628. ## Appendix A Additional data about COVID-19 In this section we present figures that demonstrate the raw data, processed data, and intermediate results used to generate the training set. Some data for selected states have already been shown in the main text. This includes: 1. Time series of IFR for all 50 states plus Washington DC 2. Time series of recovered true cases and undercounting factor for all 50 states plus Washington DC 3. Raw and smoothed confirmed daily case count and daily death count for all 50 states plus Washington DC 4. Time series of case rate per age group in all regions of the United States 5. Time series of mobility for all 50 states plus Washington DC 6. Time series of vaccination rate of all age groups for all 50 states plus Washington DC 7. Incident rate ratio of COVID-19 cases and deaths for vaccinated and unvaccinated groups 8. Time series of testing volume for all 50 states plus Washington DC ### Time series of state IFR The time series of IFR for 10 selected states are presented in the main text. Below we show the time series of IFR for all 50 states plus Washington DC, after accounting for age-group case rates, vaccination, and variants, in Figures A1 and A2. ### Time series of state recovered true case The time series of recovered true cases and the undercounting factor for 10 selected states are shown in the main text. Here we show these data for all 50 states plus Washington DC in Figures A3 and A4. **Fig. A3** Time series of recovered true case count for 24 states plus Washington DC. ### State confirmed case and death Figures A5 and A6 show the daily case count and \(100\times\) daily death count of all 50 states plus Washington DC. The data come from the JHU COVID-19 database [23]. Figures A7 and A8 show the processed daily case count and daily death count after addressing data dump and holiday issues. **Fig. A5** Daily confirmed case count and 100\(\times\) daily death count for 24 states plus Washington DC. Raw data before processing. **Fig. A6** Daily confirmed case count and 100\(\times\) daily death count for 26 states. Raw data before processing. **Fig. A7** Daily confirmed case count and 100\(\times\) daily death count for 24 states plus Washington DC. Processed data after addressing weekday issue, holiday issue, and artificial data dump from backlogs. ### Case rate per age group Figure A9 shows the time series of the case rate of each age group in all 10 regions provided by CDC [15]. The HHS regions used by CDC are described in the following table. ### State mobility Figures A10 and A11 list the average mobility versus time provided by Google map [34]. **Fig. A10** Time series of average mobility of 24 states plus Washington DC. Data is smoothed. ### State vaccination rate Figures A12 and A13 give the time series of vaccination rate for each age group older than 18 years in all 50 states plus Washington DC. These data are obtained from CDC [13]. **Fig. A12** Time series of vaccination rate of each age group for 24 states plus Washington DC.
### Incident rate ratio (IRR) of vaccinated and unvaccinated groups The incident rate ratio of COVID-19 infection and death for each group is given in Figure A14. These data are obtained from the CDC website [14]. Note that death data of the younger age groups are not included because there are too few, sometimes zero, death counts from the vaccinated young groups in many weeks. The ratios of the IFR of the unvaccinated group to that of the vaccinated group for the three older age groups are shown in Figure A14, right. Figure A13: Time series of vaccination rate of each age group for 26 states. ### State testing volume Figures A15 and A16 give the time series of smoothed COVID-19 test volume in all 50 states plus Washington DC. These data come from the Coronavirus Resource Center of Johns Hopkins University [23]. **Fig. A15** Time series of COVID-19 testing volume for 24 states plus Washington DC.
2305.04414
Untrained Neural Network based Bayesian Detector for OTFS Modulation Systems
The orthogonal time frequency space (OTFS) symbol detector design for high mobility communication scenarios has received considerable attention lately. Current state-of-the-art OTFS detectors can mainly be divided into two categories: iterative and training-based deep neural network (DNN) detectors. Many practical iterative detectors rely on a minimum-mean-square-error (MMSE) denoiser to get the initial symbol estimates. However, their computational complexity increases exponentially with the number of detected symbols. Training-based DNN detectors typically suffer from dependency on the availability of large computation resources and on the fidelity of synthetic datasets for the training phase, both of which are costly. In this paper, we propose an untrained DNN based on the deep image prior (DIP) and decoder architecture, referred to as D-DIP, that replaces the MMSE denoiser in the iterative detector. DIP is a type of DNN that requires no training, which makes it beneficial in OTFS detector design. Then we propose to combine the D-DIP denoiser with the Bayesian parallel interference cancellation (BPIC) detector to perform iterative symbol detection, referred to as D-DIP-BPIC. Our simulation results show that the symbol error rate (SER) performance of the proposed D-DIP-BPIC detector outperforms practical state-of-the-art detectors by 0.5 dB and retains low computational complexity.
Hao Chang, Alva Kosasih, Wibowo Hardjawana, Xinwei Qu, Branka Vucetic
2023-05-08T01:47:02Z
http://arxiv.org/abs/2305.04414v1
# Untrained Neural Network based Bayesian Detector for OTFS Modulation Systems ###### Abstract The orthogonal time frequency space (OTFS) symbol detector design for high mobility communication scenarios has received considerable attention lately. Current state-of-the-art OTFS detectors can mainly be divided into two categories: iterative and training-based deep neural network (DNN) detectors. Many practical iterative detectors rely on a minimum-mean-square-error (MMSE) denoiser to get the initial symbol estimates. However, their computational complexity increases exponentially with the number of detected symbols. Training-based DNN detectors typically suffer from dependency on the availability of large computation resources and on the fidelity of synthetic datasets for the training phase, both of which are costly. In this paper, we propose an untrained DNN based on the deep image prior (DIP) and decoder architecture, referred to as D-DIP, that replaces the MMSE denoiser in the iterative detector. DIP is a type of DNN that requires no training, which makes it beneficial in OTFS detector design. Then we propose to combine the D-DIP denoiser with the Bayesian parallel interference cancellation (BPIC) detector to perform iterative symbol detection, referred to as D-DIP-BPIC. Our simulation results show that the symbol error rate (SER) performance of the proposed D-DIP-BPIC detector outperforms practical state-of-the-art detectors by 0.5 dB and retains low computational complexity. OTFS, symbol detection, deep image prior, Bayesian parallel interference cancellation, mobile cellular networks. ## I Introduction Future mobile systems will support various high-mobility scenarios (e.g., unmanned aerial vehicles and autonomous cars) with strict mobility requirements [1]. However, the current orthogonal frequency division multiplexing (OFDM) [2] is not suitable for these scenarios due to the high inter-carrier interference (ICI) caused by a large number of high-mobility moving reflectors. The orthogonal time frequency space (OTFS) modulation was proposed in [1] to address this issue because it allows the tracking of ICI during the symbol estimation process. Multiple OTFS symbol detectors [3, 4, 5, 6, 7, 8, 9, 10] have been investigated in the current literature. Several iterative detectors have been proposed for OTFS systems, e.g., the message passing (MP) [3], approximate message passing (AMP) [4], Bayesian parallel interference cancellation (BPIC) with a minimum-mean-square-error (MMSE) denoiser [5], unitary approximate message passing (UAMP) [6], and expectation propagation (EP) [7] detectors. These detectors provide a significant symbol error rate (SER) performance gain compared to that of the classical MMSE detector [8]. Unfortunately, when a large number of moving reflectors exist, MP and AMP suffer from performance degradation due to high ICI [5]. The UAMP detector addresses this issue by performing a singular value decomposition (SVD) that exploits the structure of the OTFS channel prior to executing AMP. Performance similar to that of the UAMP detector, in terms of reliability and complexity, has also been achieved by our proposed iterative MMSE-BPIC detector in [5], which combines an MMSE denoiser, the Bayesian concept, and parallel interference cancellation (PIC) to perform iterative symbol detection. Unfortunately, their performance is still suboptimal in comparison with the EP OTFS detector [7].
EP uses the Bayesian concept and multivariate Gaussian distributions to approximate the mean and variance of the posterior detected symbols iteratively from the observed received signals. The outperformance of the EP detector comes at the cost of high computational complexity, as it performs iterative matrix inversion operations. In addition to those iterative detectors, deep neural network (DNN) based approaches are widely used in symbol detector design. They can be divided into two categories: 1) training-based DNNs and 2) untrained DNNs. The training-based DNN requires a large dataset to train the symbol detector prior to deployment. Recent examples of the training-based DNN category are the 2-D convolutional neural network (CNN) based OTFS detector in [9] and also our recently proposed BPICNet OTFS detector in [10], which integrates the MMSE denoiser, BPIC and DNN, whereby the modified BPIC parameters are trained by using a DNN. There are two major disadvantages of the training-based DNN approach: 1) dependency on the availability of large computation resources for the training phase, which necessitates substantial energy or CO2 consumption and high cost [11]; 2) reliance on the fidelity, in the real environment, of synthetic training data that are artificially generated due to the high cost of acquiring real datasets [12]. For example, a high fidelity training dataset implies that the distribution functions for all possible velocities of mobile reflectors are known beforehand, which is impossible. The second category, untrained DNN, avoids the need for training datasets. Deep image prior (DIP), proposed in [13], has been widely used in image restoration as an untrained DNN approach. The encoder-decoder architecture used in the original DIP shows excellent performance in image restoration tasks, but the use of up to millions of trainable parameters results in high latency, and thus it still cannot be used for an OTFS detector that requires close to real-time processing. Recently, the authors in [14] showed that the decoder-only DIP offers performance similar to an encoder-decoder DIP architecture when it is applied to Magnetic Resonance Imaging (MRI). The complexity of the decoder-only DIP is significantly lower than that of the original encoder-decoder DIP, thus enhancing its potential use in a real-time OTFS detector. To date, no study has been conducted on untrained DNN based OTFS detectors. In this paper, we propose to use an untrained DNN with BPIC to perform iterative symbol detection. Specifically, we use a DIP with a decoder-only architecture, referred to as D-DIP, to act as a denoiser and to provide the initial symbol estimates for the BPIC detector. We choose BPIC here in order to keep the computational complexity of the OTFS receiver low. We first describe a single-input single-output (SISO) OTFS system model consisting of the transmitter, channel and receiver. We then provide a review of the MMSE-BPIC detector in [5, 15] that uses the MMSE denoiser to obtain the initial symbol estimates. Instead of using MMSE, we propose a high-performance D-DIP denoiser to calculate the initial symbol estimates inputted to the BPIC. We then explain our proposed D-DIP in detail and also provide computational complexity and performance comparisons to other schemes. Simulation results indicate an average of approximately 0.5 dB SER outperformance as compared to other practical schemes in the literature. The main contribution of this paper is that it is the first to propose a combination of a decoder-only DIP denoiser and the BPIC OTFS detector.
The proposed denoiser 1) provides better initial symbol estimates for the BPIC detector and 2) has lower computational complexity than the MMSE denoiser. This leads to the proposed scheme having the closest SER performance to the EP scheme among the compared schemes, achieved with much lower computational complexity (approximately 15 times less complex than EP). **Notations**: \(a\), \(\mathbf{a}\) and \(\mathbf{A}\) denote a scalar, a vector, and a matrix, respectively. \(\mathbb{C}^{M\times N}\) denotes the set of \(M\times N\) dimensional complex matrices. We use \(\mathbf{I}_{N}\), \(\mathbf{F}_{N}\), and \(\mathbf{F}_{N}^{\mathbf{H}}\) to represent an \(N\)-dimensional identity matrix, an \(N\)-point discrete Fourier transform (DFT) matrix, and an \(N\)-point inverse discrete Fourier transform (IDFT) matrix. \((\cdot)^{T}\) represents the transpose operation. We define \(\mathbf{a}=\mathsf{vec}(\mathbf{A})\) as the column-wise vectorization of matrix \(\mathbf{A}\) and \(\mathbf{A}=\mathsf{vec}^{-1}(\mathbf{a})\) denotes the vector elements folded back into a matrix. The Kronecker product is denoted as \(\otimes\). \(\lfloor\frac{a}{b}\rfloor\) represents the floor operation, and \([\cdot]_{M}\) represents the mod-\(M\) operation. The Euclidean norm of vector \(\mathbf{x}\) is denoted as \(\|\mathbf{x}\|\). We use \(\mathcal{N}(\mathbf{x}:\boldsymbol{\mu},\boldsymbol{\Sigma})\) to express the multivariate Gaussian distribution of a vector \(\mathbf{x}\), where \(\boldsymbol{\mu}\) is the mean and \(\boldsymbol{\Sigma}\) is the covariance matrix. ## II OTFS System Model We consider an OTFS system, as illustrated in Fig. 1. In the following, we explain the details of the OTFS transmitter, channel and receiver. ### _OTFS Transmitter_ On the transmitter side, \(MN\) information symbols \(\mathbf{X}_{\mathrm{DD}}\in\mathbb{C}^{M\times N}\) from a modulation alphabet \(\mathbb{A}=\{a_{1},\cdots,a_{Q}\}\) of size \(Q\) are allocated to an \(M\times N\) grid in the delay-Doppler (DD) domain, where \(M\) and \(N\) represent the number of subcarriers and time slots used, respectively. As illustrated in Fig. 1, the DD domain symbols are transformed into the time-frequency (TF) domain by using the inverse symplectic finite Fourier transform (ISFFT) [1]. Here, the TF domain is discretized to \(M\) by \(N\) grids with uniform intervals \(\Delta f\) (Hz) and \(T_{s}=1/\Delta f\) (seconds), respectively. Therefore, the sampling time is \(T_{s}/M\). The TF domain sample \(\mathbf{X}_{\mathrm{TF}}\in\mathbb{C}^{M\times N}\), which forms an OTFS frame occupying a bandwidth of \(M\Delta f\) and a duration of \(NT_{s}\), is given as \[\mathbf{X}_{\mathrm{TF}}=\mathbf{F}_{M}\mathbf{X}_{\mathrm{DD}}\mathbf{F}_{N}^{\mathbf{H}}, \tag{1}\] where \(\mathbf{F}_{M}\in\mathbb{C}^{M\times M}\) and \(\mathbf{F}_{N}^{\mathbf{H}}\in\mathbb{C}^{N\times N}\) are the \(M\)-point DFT and \(N\)-point IDFT matrices, whose \((p,q)\)-th entries are \((\frac{1}{\sqrt{M}}e^{-j2\pi pq/M})_{p,q=0,\cdots,M-1}\) and \((\frac{1}{\sqrt{N}}e^{j2\pi pq/N})_{p,q=0,\cdots,N-1}\), respectively. The \((m,n)\)-th entry \(X_{\mathrm{TF}}[m,n]\) of \(\mathbf{X}_{\mathrm{TF}}\) is written as \[X_{\mathrm{TF}}[m,n]=\frac{1}{\sqrt{MN}}\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}X_{\mathrm{DD}}[k,l]e^{j2\pi(\frac{nk}{N}-\frac{ml}{M})}, \tag{2}\] where \(X_{\mathrm{DD}}[k,l]\) represents the \((k,l)\)-th entry of \(\mathbf{X}_{\mathrm{DD}}\) for \(k=0,\cdots,N-1\), \(l=0,\cdots,M-1\).
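Since \(\mathbf{F}_{M}\) and \(\mathbf{F}_{N}^{\mathbf{H}}\) are unitary (I)DFT matrices, the ISFFT in (1) can be sketched with standard FFT routines; the round-trip check and variable names below are ours.

```python
import numpy as np

def isfft(X_dd):
    """ISFFT of an M x N delay-Doppler grid: X_TF = F_M X_DD F_N^H, per Eq. (1)."""
    # Unitary DFT along the delay axis (size M), unitary IDFT along the Doppler axis (size N).
    return np.fft.fft(np.fft.ifft(X_dd, axis=1, norm="ortho"), axis=0, norm="ortho")

def sfft(X_tf):
    """Inverse map (SFFT), recovering the delay-Doppler grid."""
    return np.fft.ifft(np.fft.fft(X_tf, axis=1, norm="ortho"), axis=0, norm="ortho")

# Round-trip check on random unit-power 4-QAM symbols.
M, N = 12, 7
qam = (np.random.choice([-1, 1], (M, N))
       + 1j * np.random.choice([-1, 1], (M, N))) / np.sqrt(2)
assert np.allclose(sfft(isfft(qam)), qam)
```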
The (discrete) Heisenberg transform [1] is then applied to generate the time domain transmitted signal. By using (1) and the Kronecker product rule1, the vector form of the transmitted signal can be written as Footnote 1: A matrix multiplication is often expressed by using vectorization with the Kronecker product. That is, \(\mathsf{vec}(ABC)=(C^{T}\otimes A)\mathsf{vec}(B)\) \[\mathbf{s}=\mathsf{vec}(\mathbf{G}_{\mathrm{tx}}\mathbf{F}_{M}^{\mathbf{H}}\mathbf{X}_{\mathrm{TF}})=(\mathbf{F}_{N}^{\mathbf{H}}\otimes\mathbf{G}_{\mathrm{tx}})\mathbf{x}_{\mathrm{DD}}, \tag{3}\] where \(\mathbf{G}_{\mathrm{tx}}\) is the pulse-shaping waveform; we consider the rectangular waveform with a duration of \(T_{s}\), which leads to \(\mathbf{G}_{\mathrm{tx}}=\mathbf{I}_{M}\) [16]. Here, \(\mathbf{x}_{\mathrm{DD}}=\mathsf{vec}(\mathbf{X}_{\mathrm{DD}})\) and \(\mathbf{x}_{\mathrm{DD}}=[x_{\mathrm{DD}}(0),\cdots,x_{\mathrm{DD}}(MN-1)]^{T}\). \(\mathbf{s}\in\mathbb{C}^{MN\times 1}\) is the vector form of the transmitted signal, \(\mathbf{s}=[s(0),\cdots,s(n),\cdots,s(MN-1)]^{T}\), \(n=0,\cdots,MN-1\), and \(s(n)\) can be written as \[s(n)=\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}e^{j2\pi\lfloor\frac{n}{M}\rfloor k/N}x_{\mathrm{DD}}([n]_{M}+kM). \tag{4}\] We insert a cyclic prefix (CP) at the beginning of each OTFS frame; the length of the CP equals the maximum delay index, \(N_{\mathrm{cp}}=l_{max}\). Thus, the time duration after adding the CP is \(NT_{s}+N_{\mathrm{cp}}\frac{T_{s}}{M}\). After adding the CP, \(\mathbf{s}=[s(MN-N_{\mathrm{cp}}+1),s(MN-N_{\mathrm{cp}}+2),\cdots,s(MN-1),s(0),\cdots,s(n),\cdots,s(MN-1)]^{T}\), and \(\mathbf{s}\) is transmitted through a time-varying channel. ### _OTFS Wireless Channel_ The OTFS wireless channel is a time-varying multipath channel, represented by the impulse response in the DD domain, \[h(\tau,v)=\sum_{i=1}^{P}h_{i}\delta(\tau-\tau_{i})\delta(v-v_{i}) \tag{5}\] where \(\delta(\cdot)\) is the Dirac delta function, \(h_{i}\sim\mathcal{N}(0,1/P)\) denotes the \(i\)-th path gain, and \(P\) is the total number of paths. Each of the paths represents a channel between a moving reflector/transmitter and the receiver with different delay \((\tau_{i})\) and/or Doppler \((v_{i})\) characteristics. The delay and Doppler shifts are given as \(\tau_{i}=l_{i}\frac{T_{s}}{M}\) and \(v_{i}=k_{i}\frac{\Delta f}{N}\), respectively. The ICI depends on the delay and Doppler of the channel, as illustrated in [16]. Here, for every path, the randomly selected integers \(l_{i}\in[0,l_{max}]\) and \(k_{i}\in[-k_{max},k_{max}]\) denote the indices of the delay and Doppler shifts, where \(l_{max}\) and \(k_{max}\) are the indices of the maximum delay and maximum Doppler shifts among all channel paths. Note that the combination \((l_{i},k_{i})\) is different for every path. For our wireless channel, we assume \(l_{max}\leq M-1\) and \(k_{max}\leq\lfloor\frac{N}{2}\rfloor\), implying maximum channel delay and Doppler shifts of less than \(T_{s}\) seconds and \(\Delta f\) Hz, respectively.
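A sketch of drawing the channel parameters in (5) is given below. We assume the path gains are complex Gaussian with per-path variance \(1/P\) (the text writes \(h_{i}\sim\mathcal{N}(0,1/P)\)), distinct \((l_{i},k_{i})\) combinations are enforced as stated, and the helper name is ours.

```python
import numpy as np

def sample_dd_channel(P, l_max, k_max, rng=None):
    """Draw P path gains and integer delay/Doppler indices for the channel in Eq. (5)."""
    rng = rng or np.random.default_rng()
    # All admissible (l, k) combinations; each path gets a distinct one.
    pairs = [(l, k) for l in range(l_max + 1) for k in range(-k_max, k_max + 1)]
    chosen = rng.choice(len(pairs), size=P, replace=False)
    l = np.array([pairs[i][0] for i in chosen])
    k = np.array([pairs[i][1] for i in chosen])
    # Complex Gaussian gains with variance 1/P per path (assumed complex-valued).
    h = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2 * P)
    return h, l, k
```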
### _OTFS Receiver_ At the receiver side, the time domain received signal \(r(t)\) is given as [1] \[r(t)=\int\int h(\tau,v)s(t-\tau)e^{j2\pi v(t-\tau)}d\tau dv+w(t), \tag{6}\] where \(s(t)\) is the time-domain transmitted signal corresponding to \(\mathbf{s}\), while \(h(\tau,v)\) is the DD domain channel shown in (5). The received signal \(r(t)\) is then sampled at \(t=\frac{n}{M\Delta f}\), where \(n=0,\cdots,MN-1\). After discarding the CP, the discrete received signal \(r(n)\) is obtained from (5) and (6), written as \[r(n)=\sum_{i=1}^{P}h_{i}e^{j2\pi\frac{k_{i}(n-l_{i})}{MN}}s([n-l_{i}]_{MN})+w(n). \tag{7}\] We then write (7) in vector form as \[\mathbf{r}=\mathbf{H}\mathbf{s}+\mathbf{w}, \tag{8}\] where \(\mathbf{w}\) is the complex independent and identically distributed (i.i.d.) white Gaussian noise that follows \(\mathcal{N}(\mathbf{0},\sigma_{\mathrm{c}}^{2}\mathbf{I})\), and \(\sigma_{\mathrm{c}}^{2}\) is the variance of the noise. Here, \(\mathbf{H}=\sum_{i=1}^{P}h_{i}\mathbf{I}_{MN}(l_{i})\mathbf{\Delta}(k_{i})\), where \(\mathbf{I}_{MN}(l_{i})\) denotes an \(MN\times MN\) matrix obtained by circularly left shifting the columns of the identity matrix by \(l_{i}\), and \(\mathbf{\Delta}\) is the \(MN\times MN\) Doppler shift diagonal matrix, \(\mathbf{\Delta}(k_{i})=\text{diag}\left[e^{\frac{j2\pi k_{i}(0)}{MN}},e^{\frac{j2\pi k_{i}(1)}{MN}},\cdots,e^{\frac{j2\pi k_{i}(MN-1)}{MN}}\right]\), where \(\text{diag}(\cdot)\) denotes a diagonalization operation on a vector. Note that the matrices \(\mathbf{I}_{MN}(l_{i})\) and \(\mathbf{\Delta}(k_{i})\) model the delay and Doppler shifts in (5), respectively. As shown in Fig. 1, the TF domain received signal \(\mathbf{Y}_{\mathrm{TF}}\in\mathbb{C}^{M\times N}\) is obtained by applying the Wigner transform [16], shown as \[\mathbf{Y}_{\mathrm{TF}}=\mathbf{F}_{M}\mathbf{G}_{\mathrm{rx}}\mathbf{R}, \tag{9}\] where \(\mathbf{R}=\mathsf{vec}^{-1}(\mathbf{r})\), and \(\mathbf{G}_{\mathrm{rx}}\) is the rectangular receive waveform with a duration \(T_{s}\), \(\mathbf{G}_{\mathrm{rx}}=\mathbf{I}_{M}\). Then the DD domain received signal \(\mathbf{Y}_{\mathrm{DD}}\in\mathbb{C}^{M\times N}\) is obtained by using the symplectic finite Fourier transform (SFFT), which is \[\mathbf{Y}_{\mathrm{DD}}=\mathbf{F}_{M}^{\mathbf{H}}\mathbf{Y}_{\mathrm{TF}}\mathbf{F}_{N}=\mathbf{F}_{M}^{\mathbf{H}}\mathbf{F}_{M}\mathbf{G}_{\mathrm{rx}}\mathbf{R}\mathbf{F}_{N}=\mathbf{G}_{\mathrm{rx}}\mathbf{R}\mathbf{F}_{N}. \tag{10}\] By following the vectorization with Kronecker product rule, we can rewrite (10) as \[\mathbf{y}_{\mathrm{DD}}=\mathsf{vec}(\mathbf{Y}_{\mathrm{DD}})=\mathsf{vec}(\mathbf{G}_{\mathrm{rx}}\mathbf{R}\mathbf{F}_{N})=(\mathbf{F}_{N}\otimes\mathbf{G}_{\mathrm{rx}})\mathbf{r}. \tag{11}\] By substituting (3) into (8) and (11), we obtain \[\mathbf{y}_{\mathrm{DD}}=\mathbf{H}_{\mathrm{DD}}\mathbf{x}_{\mathrm{DD}}+\tilde{\mathbf{w}}, \tag{12}\] where \(\mathbf{H}_{\mathrm{DD}}=(\mathbf{F}_{N}\otimes\mathbf{G}_{\mathrm{rx}})\mathbf{H}(\mathbf{F}_{N}^{\mathbf{H}}\otimes\mathbf{G}_{\mathrm{rx}})\) and \(\tilde{\mathbf{w}}=(\mathbf{F}_{N}\otimes\mathbf{G}_{\mathrm{rx}})\mathbf{w}\) denote the effective channel and noise in the DD domain, respectively. Here, \(\tilde{\mathbf{w}}\) is i.i.d. Gaussian noise, since \(\mathbf{F}_{N}\otimes\mathbf{G}_{\mathrm{rx}}\) is a unitary matrix [1, 16].
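The matrices \(\mathbf{H}\) and \(\mathbf{H}_{\mathrm{DD}}\) above can be formed explicitly for small \(M,N\) as in the following sketch (with \(\mathbf{G}_{\mathrm{rx}}=\mathbf{I}_{M}\)); this dense construction is for illustration only and would not scale to large frames.

```python
import numpy as np

def build_effective_channel(h, l, k, M, N):
    """Form H = sum_i h_i I_MN(l_i) Delta(k_i) and H_DD = (F_N x I_M) H (F_N^H x I_M)."""
    MN = M * N
    n = np.arange(MN)
    H = np.zeros((MN, MN), dtype=complex)
    for hi, li, ki in zip(h, l, k):
        Pi = np.roll(np.eye(MN), -li, axis=1)        # columns circularly left-shifted by l_i
        Delta = np.diag(np.exp(2j * np.pi * ki * n / MN))
        H += hi * Pi @ Delta
    F_N = np.fft.fft(np.eye(N), norm="ortho")        # unitary N-point DFT matrix
    A = np.kron(F_N, np.eye(M))                      # F_N kron G_rx with G_rx = I_M
    H_dd = A @ H @ A.conj().T                        # A^H equals F_N^H kron I_M, since F_N is unitary
    return H, H_dd
```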
For convenience, we transform the complex-valued model in (12) into a real-valued model. Accordingly, \(\mathbf{x}=\left[\Re(\mathbf{x}_{\mathrm{DD}})^{T}\ \Im(\mathbf{x}_{\mathrm{DD}})^{T}\right]^{T}\), \(\mathbf{y}=\left[\Re(\mathbf{y}_{\mathrm{DD}})^{T}\ \Im(\mathbf{y}_{\mathrm{DD}})^{T}\right]^{T}\), \(\mathbf{n}=\left[\Re(\tilde{\mathbf{w}})^{T}\ \Im(\tilde{\mathbf{w}})^{T}\right]^{T}\), and \[\mathbf{H}_{\mathrm{eff}}=\begin{bmatrix}\Re(\mathbf{H}_{\mathrm{DD}})&-\Im(\mathbf{H}_{\mathrm{DD}})\\ \Im(\mathbf{H}_{\mathrm{DD}})&\Re(\mathbf{H}_{\mathrm{DD}})\end{bmatrix},\] where \(\Re(\cdot)\) and \(\Im(\cdot)\) are the real and imaginary parts, respectively. Thus, the variance of \(\mathbf{n}\) is \(\sigma^{2}=\sigma_{\mathrm{c}}^{2}/2\), \(\mathbf{x},\mathbf{y},\mathbf{n}\) are vectors of size \(2MN\), and \(\mathbf{H}_{\mathrm{eff}}\) is a matrix of size \(2MN\times 2MN\). Then, we can rewrite (12) as \[\mathbf{y}=\mathbf{H}_{\mathrm{eff}}\mathbf{x}+\mathbf{n}. \tag{13}\] We assume \(\mathbf{H}_{\mathrm{eff}}\) is known at the detector side. For notational simplicity, we omit the subscript of \(\mathbf{H}_{\mathrm{eff}}\) in (13) and simply write it as \(\mathbf{H}\) in all subsequent sections. The signal-to-noise ratio (SNR) of the system is defined as \(\mathrm{SNR}=10\mathrm{log}_{10}(\frac{1}{\sigma_{\mathrm{c}}^{2}})\mathrm{dB}\). ## III MMSE-BPIC Detector In this section, we briefly describe the BPIC detector that employs the MMSE denoiser, recently proposed in [15]. The structure of the BPIC detector is shown in Fig. 2. It consists of four modules: Denoiser, Bayesian symbol observation (BSO), Bayesian symbol estimation (BSE), and decision statistics combining (DSC). Fig. 1: The system model of the OTFS modulation scheme. In the Denoiser module, the MMSE scheme is used to obtain the initial symbol estimates \(\hat{\mathbf{x}}^{(0)}\) in the first BPIC iteration [15], as shown in Fig. 2. The MMSE denoiser can be expressed as \[\hat{\mathbf{x}}^{(0)}=\left(\mathbf{H}^{T}\mathbf{H}+\sigma^{2}\mathbf{I}\right)^{-1}\mathbf{H}^{T}\mathbf{y}. \tag{14}\] In the BSO module, the matched filter based PIC scheme is used to detect the transmitted symbols, shown as \[\mu_{q}^{(t)}=\hat{x}_{q}^{(t-1)}+\frac{\mathbf{h}_{q}^{T}\left(\mathbf{y}-\mathbf{H}\hat{\mathbf{x}}^{(t-1)}\right)}{\|\mathbf{h}_{q}\|^{2}}, \tag{15}\] where \(\mu_{q}^{(t)}\) is the soft estimate of the \(q\)-th symbol \(x_{q}\) in iteration \(t\), and \(\mathbf{h}_{q}\) is the \(q\)-th column of the matrix \(\mathbf{H}\). \(\hat{\mathbf{x}}^{(t-1)}=[\hat{x}_{1}^{(t-1)},\cdots,\hat{x}_{q}^{(t-1)},\cdots,\hat{x}_{2MN}^{(t-1)}]^{T}\) is the vector of estimated symbols. The variance \(\Sigma_{q}^{(t)}\) of the \(q\)-th symbol estimate is derived in [15] as \[\Sigma_{q}^{(t)}=\frac{1}{(\mathbf{h}_{q}^{T}\mathbf{h}_{q})^{2}}\left(\sum_{\begin{subarray}{c}j=1\\ j\neq q\end{subarray}}^{2MN}(\mathbf{h}_{q}^{T}\mathbf{h}_{j})^{2}v_{j}^{(t-1)}+(\mathbf{h}_{q}^{T}\mathbf{h}_{q})\sigma^{2}\right), \tag{16}\] where \(v_{j}^{(t-1)}\) is the \(j\)-th element of the vector of symbol estimate variances \(\mathbf{v}^{(t-1)}\) in iteration \(t-1\), \(\mathbf{v}^{(t-1)}=[v_{1}^{(t-1)},\cdots,v_{q}^{(t-1)},\cdots,v_{2MN}^{(t-1)}]^{T}\); we set \(\mathbf{v}^{(0)}=0\) because we have no prior knowledge of the variance at the beginning. The estimated symbols \(\boldsymbol{\mu}^{(t)}=[\mu_{1}^{(t)},\cdots,\mu_{q}^{(t)},\cdots,\mu_{2MN}^{(t)}]^{T}\) and variances \(\boldsymbol{\Sigma}^{(t)}=[\Sigma_{1}^{(t)},\cdots,\Sigma_{q}^{(t)},\cdots,\Sigma_{2MN}^{(t)}]^{T}\) are then forwarded to the BSE module, as shown in Fig. 2.
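A direct sketch of the MMSE denoiser (14) and one BSO update (15) is given below; the vectorized form computes all \(\mu_{q}^{(t)}\) at once, and the function names are ours.

```python
import numpy as np

def mmse_denoiser(H, y, sigma2):
    """Initial estimate x_hat^(0) = (H^T H + sigma^2 I)^{-1} H^T y, per Eq. (14)."""
    G = H.T @ H + sigma2 * np.eye(H.shape[1])
    return np.linalg.solve(G, H.T @ y)

def bso_step(H, y, x_hat):
    """One matched-filter PIC update of the soft symbol means, per Eq. (15)."""
    col_norms = np.sum(H * H, axis=0)        # ||h_q||^2 for every column q
    return x_hat + (H.T @ (y - H @ x_hat)) / col_norms
```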
In the BSE module, we compute the Bayesian symbol estimate and variance of the \(q\)-th symbol based on the mean and variance obtained from the BSO module, given as \[\hat{x}_{q}^{(t)}=\mathbb{E}\left[x_{q}\Big{|}\mu_{q}^{(t)},\Sigma_{q}^{(t)}\right]=\sum_{a\in\Omega}a\hat{p}^{(t)}(x_{q}=a|\mathbf{y}) \tag{17}\] \[v_{q}^{(t)}=\mathbb{E}\left[\left|x_{q}-\mathbb{E}\left[x_{q}\Big{|}\mu_{q}^{(t)},\Sigma_{q}^{(t)}\right]\right|^{2}\right], \tag{18}\] where \(\hat{p}^{(t)}\left(x_{q}|\mathbf{y}\right)=\mathcal{N}(x_{q}:\mu_{q}^{(t)},\Sigma_{q}^{(t)})\) is obtained from the BSO module and is normalized so that \(\sum_{a\in\Omega}\hat{p}^{(t)}\left(x_{q}=a|\mathbf{y}\right)=1\), with \(\Omega\) denoting the set of real-valued constellation points. The outputs of the BSE module, \(\hat{x}_{q}^{(t)}\) and \(v_{q}^{(t)}\), are then sent to the following DSC module. The DSC module performs a linear combination of the symbol estimates in two consecutive iterations, shown as \[\hat{x}_{q}^{(t)}=\left(1-\rho_{q}^{(t)}\right)\hat{x}_{q}^{(t-1)}+\rho_{q}^{(t)}\hat{x}_{q}^{(t)} \tag{19}\] \[v_{q}^{(t)}=\left(1-\rho_{q}^{(t)}\right)v_{q}^{(t-1)}+\rho_{q}^{(t)}v_{q}^{(t)}. \tag{20}\] The weighting coefficient is determined by maximizing the signal-to-interference-plus-noise-ratio variance, given as \[\rho_{q}^{(t)}=\frac{e_{q}^{(t-1)}}{e_{q}^{(t)}+e_{q}^{(t-1)}}, \tag{21}\] where \(e_{q}^{(t)}\) is defined as the instantaneous square error of the \(q\)-th symbol estimate, computed by using the MRC filter, \[e_{q}^{(t)}=\left\|\frac{\mathbf{h}_{q}^{T}}{\|\mathbf{h}_{q}\|^{2}}\left(\mathbf{y}-\mathbf{H}\hat{\mathbf{x}}^{(t)}\right)\right\|^{2}. \tag{22}\] The weighted symbol estimates \(\hat{\mathbf{x}}^{(t)}\) and their variances \(\mathbf{v}^{(t)}\) are then returned to the BSO module to continue the iteration. After \(T\) iterations, \(\hat{\mathbf{x}}^{(T)}\) is taken as the vector of symbol estimates. ## IV D-DIP Denoiser for Symbol Estimation In this section, we propose D-DIP to improve the initial symbol estimates of the BPIC detector; the whole iterative process of D-DIP is shown in Fig. 3. The DNN used in D-DIP is a fully connected decoder DNN that consists of \(L=5\) fully connected layers. These layers can be broken down into an input layer, three hidden layers, and an output layer with p1 = 4, p2 = 8, p3 = 16, p4 = 32, p5 = \(2MN\) neurons, respectively. We use a random vector \(\mathbf{z}_{0}\) of size \(4\times 1\), drawn from a normal distribution \(\mathcal{N}(\mathbf{0},\mathbf{1})\), as the input of the first (input) layer of the DNN. \(\mathbf{z}_{0}\) is fixed during the D-DIP iterative process. The DNN output at iteration \(i\), \(\mathbf{x}_{\mathrm{D-DIP}}^{(i)}\), is obtained by passing \(\mathbf{z}_{0}\) through the 5 layers, shown as \[\mathbf{x}_{\mathrm{D-DIP}}^{(i)}=cf_{L}^{(i)}(f_{L-1}^{(i)}(\cdots f_{2}^{(i)}(\mathbf{z}_{0}))), \tag{23}\] where \(c\) is a constant used to control the output range of the DNN and \(f_{l}^{(i)}\) is the output of layer \(l\) at iteration \(i\), \[f_{l}^{(i)}=\mathrm{Tanh}(\mathbf{W}_{l}^{(i)}f_{l-1}^{(i)}+\mathbf{b}_{l}^{(i)}),\ l=2,\ldots,L \tag{24}\] where \(f_{1}^{(i)}=\mathbf{z}_{0}\), \(\mathbf{W}_{l}^{(i)}\) represents the weight matrix between layers \(l\) and \(l-1\) at iteration \(i\), and \(\mathbf{b}_{l}^{(i)}\) is the bias vector in layer \(l\) at iteration \(i\). In the beginning, each entry of \(\mathbf{W}_{l}^{(0)}\) and \(\mathbf{b}_{l}^{(0)}\) is initialized randomly following a uniform distribution with a range of \((\frac{-1}{\sqrt{p_{l}}},\frac{1}{\sqrt{p_{l}}})\) [17], where \(p_{l}\) represents the number of neurons in layer \(l\). \(\mathrm{Tanh}\) is the activation function used after each layer.
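A sketch of the decoder-only D-DIP in TensorFlow is shown below, assuming the loss (26) and a plain Adam loop; the fixed iteration count stands in for the variance-based stopping rule (25) described next, and we use the default Keras initializer rather than the uniform \((-1/\sqrt{p_{l}},1/\sqrt{p_{l}})\) scheme of [17] for brevity.

```python
import tensorflow as tf

def build_ddip(MN):
    """Decoder-only DIP: fixed 4-dim input z0, hidden widths 8/16/32, 2MN output, Tanh throughout."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="tanh", input_shape=(4,)),
        tf.keras.layers.Dense(16, activation="tanh"),
        tf.keras.layers.Dense(32, activation="tanh"),
        tf.keras.layers.Dense(2 * MN, activation="tanh"),
    ])

def ddip_estimate(H, y, MN, c=2 ** -0.5, lr=0.01, n_iter=200):
    """Fit c * f(z0) to explain y through H by minimizing the MSE loss of Eq. (26) with Adam."""
    model = build_ddip(MN)
    z0 = tf.random.normal((1, 4))                     # fixed random input, never updated
    H_t = tf.constant(H, dtype=tf.float32)
    y_t = tf.constant(y.reshape(-1, 1), dtype=tf.float32)
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(n_iter):                           # stand-in for the stopping rule (25)
        with tf.GradientTape() as tape:
            x = c * tf.reshape(model(z0), (-1, 1))
            loss = tf.reduce_mean(tf.square(H_t @ x - y_t))
        grads = tape.gradient(loss, model.trainable_variables)
        opt.apply_gradients(zip(grads, model.trainable_variables))
    return (c * tf.reshape(model(z0), (-1,))).numpy()
```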
After that, we use the stopping scheme in [18] to control the iterative process of D-DIP and avoid overfitting caused by the overparameterization of the DIP. The stopping scheme is based on calculating the variance of the DNN output, given as \[\varsigma^{(i)}=\frac{1}{W}\sum_{j=i-W}^{i}\|\mathbf{x}_{\mathrm{D-DIP}}^{(j)}-\frac{1}{W}\sum_{j^{\prime}=i-W}^{i}\mathbf{x}_{\mathrm{D-DIP}}^{(j^{\prime})}\|^{2},\ i\geq W, \tag{25}\] where \(\varsigma^{(i)}\) is the variance value at iteration \(i\). When \(i<W\), the variance calculation is inactive. \(W\) is a constant determined based on experiments, and it should be smaller than the number of iterations needed for D-DIP to converge. Fig. 2: BPIC detector architecture. As shown in Fig. 3, we compare \(\varsigma^{(i)}\) with a threshold \(\epsilon\). If \(\varsigma^{(i)}<\epsilon\), the iterative process of D-DIP stops, and the output of D-DIP, \(\mathbf{x}_{\mathrm{D-DIP}}^{(I)}\), is forwarded to BPIC as the initial symbol estimates, i.e., \(\hat{\mathbf{x}}^{(0)}=\mathbf{x}_{\mathrm{D-DIP}}^{(I)}\), where \(I\) is the index of the last D-DIP iteration. Otherwise, the mean square error (MSE) is used to calculate the loss, shown as \[\mathcal{L}^{(i)}=\frac{1}{2MN}\|\mathbf{H}\mathbf{x}_{\mathrm{D-DIP}}^{(i)}-\mathbf{y}\|^{2}. \tag{26}\] The DNN parameters, consisting of the weights \(\mathbf{W}_{l}^{(i)}\) and biases \(\mathbf{b}_{l}^{(i)}\), are then optimized by using the Adam optimizer [19] and the calculated loss in (26). The process is then repeated, as shown in Fig. 3. ## V Complexity Analysis In this section, we analyze the computational complexity of the proposed D-DIP-BPIC detector. As for the complexity of D-DIP, the computation per iteration is dominated by matrix-vector multiplications, with a total cost of \(\mathcal{O}(M^{2}N^{2}I)\), where \(I\) denotes the number of iterations needed for D-DIP. The computational complexity of different detection algorithms is shown in Table I, where \(T\) represents the number of iterations needed for the BPIC, UAMP, EP and BPICNet detectors. For instance, for \(M=12,N=7,T=10,I=50\), the complexity of D-DIP-BPIC is approximately 1.5 times lower than that of MMSE-BPIC, UAMP and BPICNet, and approximately 15 times lower than that of EP. Thus our proposed detector has the lowest complexity among the above high-performance detectors. Note that BPICNet incurs extra complexity due to its training requirements, as it uses a large dataset for training prior to deployment; for example, \(b=5.12\times 10^{6}\) training samples are used in [10]. Fig. 4 shows the cumulative distribution function (CDF) of \(I\) (i.e., the number of D-DIP iterations needed to satisfy the stopping scheme (25)) for \(M=12,24,36,48\), \(N=7\), \(l_{max}=M-1\), \(k_{max}=3\), \(SNR=15dB\). The figure shows that the number of iterations required for D-DIP to converge, \(I\), is not sensitive to the OTFS frame size (i.e., \(M\) and \(N\)), which is a significant advantage. ## VI Numerical results In this section, we evaluate the performance of our proposed detector by comparing its SER performance with those of MMSE-BPIC [5], UAMP [20], EP [7] and BPICNet [10]. Here we use the UAMP in [20] instead, because the UAMP proposed in [6] is not suitable for our system model, as shown in [5]. For the simulations, we set \(N=7\), \(l_{max}=M-1\), \(\Delta f=15\) kHz. The carrier frequency is set to \(f_{c}=10\) GHz.
The \(4\)-QAM modulation is employed for the simulations, and we set \(c=1/\sqrt{2}\), corresponding to the normalized power of the constellation, to normalize the DNN output. The same DNN configuration described in Section IV (e.g., the number of layers and the number of neurons in each layer) is used in all simulations. We use the Adam optimizer with a learning rate of \(0.01\) to optimize the DNN parameters. The stopping criterion parameter \(W\) in (25) is set to 30, and the threshold \(\epsilon\) is set to 0.001. The number of iterations for the BPIC, UAMP, EP and BPICNet detectors is set to \(T=10\) to ensure convergence. For the training of BPICNet, we use the same setting as in [10], where \(M=12,N=7,l_{max}=11,k_{max}=3\) and 500 epochs are used during the training process; in each epoch, 40 batches of 256 samples were generated, \(P\in\{6,\ldots,12\}\) is randomly chosen, and the values of SNR are uniformly distributed in a certain range; more details are given in [10]. \begin{table} \begin{tabular}{|c|c|c|} \hline Detector & Complexity order (Training) & Complexity order (Deployment) \\ \hline MMSE-BPIC [5] & Not required & \(\mathcal{O}(M^{3}N^{3}+M^{2}N^{2}T)\) \\ \hline UAMP [20] & Not required & \(\mathcal{O}(M^{3}N^{3}+M^{2}N^{2}T)\) \\ \hline EP [7] & Not required & \(\mathcal{O}(M^{3}N^{3}T)\) \\ \hline BPICNet [10] & \(\mathcal{O}(b(M^{3}N^{3}+MN+M^{2}N^{2}T))\) & \(\mathcal{O}(M^{3}N^{3}+M^{2}N^{2}T)\) \\ \hline D-DIP-BPIC & Not required & \(\mathcal{O}(M^{2}N^{2}I+M^{2}N^{2}T)\) \\ \hline \end{tabular} \end{table} Table I: Computational complexity comparison Fig. 3: D-DIP structure Fig. 4: CDF of I Fig. 5(a) demonstrates that the proposed D-DIP-BPIC detector achieves around a 0.5 dB performance gain over MMSE-BPIC and UAMP. In fact, its SER performance is very close to those of BPICNet and EP. Fig. 5(b) evaluates the scalability of our proposed D-DIP-BPIC detector. As we increase the OTFS frame size (i.e., the number of subcarriers), D-DIP-BPIC maintains its advantage over MMSE-BPIC and UAMP and achieves performance close to that of BPICNet and EP. Fig. 5(c) shows that when the number of paths (e.g., mobile reflectors) increases, the D-DIP-BPIC detector can still achieve performance close to that of BPICNet and EP and outperforms the others. As shown in Fig. 5(d), the performance of the BPICNet detector degrades in the case of \(k_{max}=1\) as compared to \(k_{max}=2,3\), as the fidelity of the training data is compromised, while our D-DIP-BPIC retains its benefit. ## VII Conclusion We proposed an untrained neural network based OTFS detector that achieves excellent performance compared to state-of-the-art OTFS detectors. Our simulation results showed that the proposed D-DIP-BPIC detector achieves a 0.5 dB SER performance improvement over MMSE-BPIC and an SER performance close to that of EP with much lower complexity.
2306.03603
Trial matching: capturing variability with data-constrained spiking neural networks
Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a mouse cortical sensory-motor pathway in a tactile detection task reported by licking with a large recurrent spiking neural network (RSNN), fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty to match the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse.
Christos Sourmpis, Carl Petersen, Wulfram Gerstner, Guillaume Bellec
2023-06-06T11:46:31Z
http://arxiv.org/abs/2306.03603v2
# Trial matching: capturing variability with data-constrained spiking neural networks

###### Abstract

Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a cortical sensory-motor pathway in a tactile detection task with a large recurrent spiking neural network (RSNN), fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty of matching the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse.

## 1 Introduction

Over the past decades, there has been a remarkable advancement in neural recording technology. Today, we can simultaneously record hundreds, even thousands, of neurons with millisecond time precision. Coupled with behavior measurements, modern experiments enable us to better understand how brain activity and behavior are intertwined [1]. In these experiments, it is often observed that even well-trained animals respond to the same stimuli with considerable variability. For example, mice trained on a simple tactile detection task occasionally miss the water reward [2], possibly because of satiation, lack of attention or neural noise. It is also clear that there is additional uncontrolled variability in the recorded neural activity [3; 4; 5] induced for instance by a wide range of task-irrelevant movements. Our goal is to reconstruct a simulation of the sensory-motor circuitry driving the variability of neural activity and behavior. To understand the generated activity at a circuit level, we develop a generative model which is biologically interpretable: all the spikes are generated by a recurrent spiking neural network (RSNN) with hard biological constraints (i.e. the voltage and spiking dynamics are simulated with millisecond precision, neurons are either inhibitory or excitatory, and spike transmission delays take \(2-4\) ms). As a first contribution, we make a significant advance in the simulation methods for data-constrained RSNNs. While most prior works [6; 7; 8] were limited to single recording sessions, our model is constrained to spike recordings from \(28\) sessions covering six cortical areas. The resulting spike-based model enables a data-constrained simulation of a cortical sensory-motor pathway (from somatosensory to motor cortices responsible for the whisker, jaw and tongue movements). As far as we know, our model is the first RSNN model constrained to multi-session recordings with automatic differentiation methods for spiking neural networks [8; 9; 10]. As a second contribution, using this model we aim to pinpoint the circuitry that induces variability in behavior (asking for instance what circuit triggers a loss of attention). Towards this goal, we identify an unsolved problem: "how do we enforce the generation of a realistic distribution of neural activity and behavior?"
To do this, the model is fitted jointly to the recordings of spiking activity and movements to generate a realistic trial-to-trial co-variability between them. Our technical innovation is to define a supervised learning loss function to match the recorded and generated variability. Concretely, the _trial matching_ loss function is the distance between modeled and recorded distributions of neural activity and movements. It relies on recent advances in the field of optimal transport [11; 12; 13] providing notions of distances between distributions. In our data-constrained RSNN, _trial matching_ enables the recovery of the main modes of trial-to-trial variability, which include the neural activity related to instructed behavior (e.g. miss versus hit trials) and uninstructed behavior like spontaneous movements.

**Related work** While there is a long tradition of data fitting using the leaky integrate and fire (LIF) model, spike response models [14] or generalized linear models (GLM) [6], most of these models were used to simulate single neuron dynamics [15; 16] or small networks with dozens of neurons recorded in the retina and other brain areas [6; 8; 7; 17]. A major drawback of those fitting algorithms was the limitation to a single recording session. Beyond this, researchers have shown that FORCE methods [18] could be used to fit up to \(13\) sessions with a large RSNN [17; 19; 20]. But in contrast with back-propagation through time (BPTT) in RSNNs [9; 10; 21], FORCE is tied to the theory of recursive least squares, making it harder to combine with deep learning technology or arbitrary loss functions. We know of only one other study where BPTT is used to constrain an RSNN to spike-train recordings [8], but this study was limited to a single recording session. Regarding generative models capturing trial-to-trial variability in neural data, many methods rely on trial-specific latent variables [22; 23; 24; 25; 26]. This is often formalized by abstracting away the physical interpretation of these latent variables using deep neural networks (e.g. see LFADS [22] or spike-GAN [27]), but our goal is here to model the interpretable mechanisms that can generate the recorded data. There are hypothetical implementations of latent variables in RSNNs; most notably, latent variables can be represented as the activity of mesoscopic populations of neurons [25], or linear combinations of the neurons' activity [28; 26; 29]. These two models assume respectively an implicit grouping of the neurons [25] or a low-rank connectivity matrix [28; 26; 29]. Here, we want to avoid making any structural hypothesis of this type a priori. We assume instead that the variability is sourced by unstructured noise (Gaussian current or Poisson inputs) and optimize the network parameters to transform it into a structured trial-to-trial variability (e.g. a multi-modal distribution of hit versus miss trials). The optimization therefore decides which network mechanism best explains the trial-to-trial variability observed in the data. This hypothesis-free approach is made possible by the _trial matching_ method presented here. This method is complementary to previous optimization methods for generative models in neuroscience. Many studies targeted solely trial-averaged statistics and ignored single-trial activity, for instance methods using the FORCE algorithm [30; 31; 17; 20; 19], RSNN methods using back-propagation through time [8] and multiple techniques using (non-interpretable) deep generative models [32].
There exist other objective functions which can constrain the trial-to-trial variability in the data, namely the maximum likelihood principle [6; 15] or spike-GANs [27; 33]. However, we illustrate in the discussion section why these two alternatives are not a straightforward replacement for the _trial matching_ loss function with our interpretable RSNN generator.

## 2 Large data-constrained Recurrent Spiking Neural Network (RSNN)

This paper aims to model the large-scale electrophysiology recordings from [2], where 4415 units were recorded from 12 areas across 22 mice. All animals in this dataset were trained to perform the whisker tactile detection task described in Figure 1: in 50% of the trials (the GO trials), a whisker is deflected and after a \(1\) s delay period an auditory cue indicates water availability if the mouse licks, whereas in the other 50% of trials (the No-Go trials), there is no whisker deflection and licking after the auditory cue is not rewarded. Throughout the paper we attempt to create a data-constrained model of the six areas that we considered to play a major role in this behavioral task: the primary and secondary whisker somatosensory cortices (wS1, wS2), motor cortices (wM1, wM2), the primary tongue-jaw motor cortex (tjM1) and the anterior lateral motor cortex (ALM), also known as tjM2 (see Figure 1A and 3A). While we focus on this dataset, the method described below aims to be broadly applicable to most contemporary large-scale electrophysiological recordings. We built a spiking data-constrained model that explicitly simulates a cortical neural network at multiple scales. At the single-cell level, each neuron is either excitatory or inhibitory (the output weights have only positive or negative signs respectively), follows leaky integrate-and-fire (LIF) dynamics, and transmits information in the form of spikes with synaptic delays ranging from \(2\) to \(4\) ms. At a cortical level, we model six brain areas of the sensory-motor pathway where each area consists of \(250\) recurrently connected neurons (\(200\) excitatory and \(50\) inhibitory) as shown in Figure 3A, such that only excitatory neurons project to other areas. Since the jaw movement defines the behavioral output in this task, we also model how the tongue-jaw motor cortices (tjM1, ALM) drive the jaw movements. Mathematically, we model the spikes \(z_{j,k}^{t}\) of the neuron \(j\) at time \(t\) in the trial \(k\) as a binary number. The spiking dynamics are then driven by the integration of the somatic currents \(I_{j,k}^{t}\) into the membrane voltage \(v_{j,k}^{t}\), by integrating LIF dynamics with a discrete time step \(\delta_{t}=2\) ms. The jaw movement \(y_{k}^{t}\) is simulated with a leaky integrator driven by the activity of tjM1 and ALM neurons, followed by an exponential non-linearity. This can be summarized with the following equations, where the trial index \(k\) is omitted for simplicity: \[v_{j}^{t}=\alpha_{j}v_{j}^{t-1}+(1-\alpha_{j})I_{j}^{t}-v_{\mathrm{thr},j}z_{j}^{t-1}+\xi_{j}^{t} \tag{1}\] \[I_{j}^{t}=\sum_{d,i}W_{ij}^{d}z_{i}^{t-d}+\sum_{d,i}W_{ij}^{\mathrm{in},d}x_{i}^{t-d} \tag{2}\] \[\tilde{y}^{t}=\alpha_{jaw}\tilde{y}^{t-1}+(1-\alpha_{jaw})\sum_{i}W_{i}^{\mathrm{jaw}}z_{i}^{t} \tag{3}\] \[y^{t}=\exp(\tilde{y}^{t})+b \tag{4}\]

Figure 1: **Modeling trial-variability in electrophysiological recordings.** **A**. During a delayed whisker detection task, the mouse should report the sensation of a whisker stimulation by licking to obtain a water reward.
Neural activity and behavior of the mouse are recorded simultaneously. **B**. A recurrent spiking neural network (RSNN) of the sensorimotor pathway receives synaptic input modeling the sensory stimulation and produces the jaw movement as a behavioral output. **C**. The stimuli and the licking action of the mouse organize the trials into four types (hit, miss, false alarm, and correct rejection). Our goal is to build a model with realistic neural and behavioral variability. Panels A and C are adapted from [34].

where \(W_{ij}^{d}\), \(W_{ij}^{\text{in},d}\), \(W_{i}^{\text{jaw}}\), \(v_{\text{thr},j}\), and \(b\) are model parameters. The membrane time constants \(\tau_{m}=30\) ms for excitatory and \(\tau_{m}=10\) ms for inhibitory neurons define \(\alpha_{j}=\exp\left(-\frac{\delta t}{\tau_{m,j}}\right)\), and \(\tau_{jaw}=50\) ms similarly defines \(\alpha_{jaw}\), which controls the speed of integration of the membrane voltage and the jaw movement. To implement a soft threshold crossing condition, the spikes inside the recurrent network are sampled from a Bernoulli distribution \(z_{j}^{t}\sim\mathcal{B}(\sigma(\frac{v_{j}^{t}-v_{\text{thr},j}}{v_{0}}))\), where \(v_{0}\) is the temperature of the sigmoid (\(\sigma\)). The spike trains \(x_{i}^{t}\) model the thalamic inputs as simple Poisson neurons producing spikes randomly with a firing rate of \(5\) Hz and increasing their firing rate when a whisker stimulation is present (see Appendix). The last noise source is an instantaneous Gaussian noise \(\xi_{j}^{t}\) of standard deviation \(\beta v_{\text{thr}}\sqrt{\delta t}\) modeling random inputs from other areas (\(\beta\) is a model parameter that is kept constant over time).

**Session stitching.** An important aspect of our fitting method is to leverage a dataset of electrophysiological recordings with many sessions. To constrain the neurons in the model to the data, we uniquely assign each neuron in the model to a single neuron from the recordings, as illustrated in Figure 2A and 3A. Since our model has 1500 neurons, we therefore randomly select 1500 neurons from the recordings (\(250\) in each area; we ignore the other recorded neurons to have the same number of excitatory and inhibitory neurons in each area). This bijective mapping between neurons in the data and the model is fixed throughout the analysis and defines the area and cell types of the neurons in the model. The area is inferred from the location of the corresponding neuron in the dataset and the cell type is inferred from the action potential waveform of this cell (for simplicity, fast-spiking neurons are considered to be inhibitory and regular-spiking neurons to be excitatory). Given this assignment, we denote \(\mathbf{z}_{j}^{\mathcal{D}}\) as the spike train of neuron \(j\) in the dataset and \(\mathbf{z}_{j}\) as the spike train of the corresponding neuron in the model; in general, a superscript \(\mathcal{D}\) always refers to the recorded data. A consequence is that two neurons \(i\) and \(j\) might be synaptically connected in the model although they correspond to neurons recorded in separate sessions. This choice is intended to model network sizes beyond what can be recorded during a single session. Our network is therefore a "collage" of multiple sessions stitched together, as illustrated in Figure 2A and 3A. This network is then constrained to the recorded data by optimizing the parameters to minimize the loss functions defined in the following section.
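Before turning to the loss functions, the single-neuron dynamics of Eqs. (1)-(4) can be illustrated with a minimal PyTorch sketch of one \(2\) ms update. The synaptic delays of Eq. (2) and the surrogate-gradient machinery required for training are omitted, and the argument names and default values are placeholders rather than the paper's exact configuration.

```python
import torch

def rsnn_step(v, z_prev, I_t, y_tilde, alpha, alpha_jaw, v_thr, W_jaw, b,
              v0=0.1, beta=0.1, dt=0.002):
    # Gaussian input noise with standard deviation beta * v_thr * sqrt(dt).
    xi = beta * v_thr * (dt ** 0.5) * torch.randn_like(v)
    # Eq. (1): leaky voltage integration with reset term after a spike.
    v = alpha * v + (1 - alpha) * I_t - v_thr * z_prev + xi
    # Soft threshold crossing: spikes sampled from a Bernoulli distribution.
    z = torch.bernoulli(torch.sigmoid((v - v_thr) / v0))
    # Eq. (3): leaky integrator for the jaw trace, driven by tjM1/ALM spikes.
    y_tilde = alpha_jaw * y_tilde + (1 - alpha_jaw) * (W_jaw @ z)
    # Eq. (4): exponential non-linearity producing the jaw movement.
    y = torch.exp(y_tilde) + b
    return v, z, y_tilde, y
```

In a full simulation, the current `I_t` would be assembled from the delayed recurrent and thalamic spike trains as in Eq. (2), and the Bernoulli sampling would be paired with a surrogate gradient to allow back-propagation through time.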
Altogether, when modeling the dataset from Esmaeili and colleagues [2], the network consists of \(1500\) neurons, where each neuron is assigned to one neuron recorded in one of the \(28\) different recording sessions. Since multiple sessions typically come from different animals, we model a "template mouse brain" which is not meant to reflect subject-to-subject differences.

## 3 Fitting single-trial variability with the trial matching loss function

We fit the network to the recordings with gradient descent, and we rely on surrogate gradients to extend back-propagation to RSNNs [9; 10]. At each iteration until convergence, we simulate a batch of \(K=150\) statistically independent trials. We measure some trial-averaged and single-trial statistics of the simulated and recorded activity, calculate a loss function, and minimize it with respect to all the trainable parameters of the model via gradient descent and automatic differentiation. This protocol is sometimes referred to as a sample-and-measure method [8], as opposed to the likelihood optimization in GLMs where the network trajectory is clamped to the recorded data during optimization [6]. The full optimization lasts for approximately one to three days on an A100-SXM4-40GB GPU.

**Trial-average loss.** We consider the trial-averaged activity over time of each neuron from every session \(\mathcal{T}_{\text{neuron}}\), also referred to as the neuron peristimulus time histogram (PSTH). This is defined by \(\mathcal{T}_{\text{neuron}}(\mathbf{z}_{j})=\frac{1}{K}\sum_{k}\mathbf{z}_{j,k}*f\) where \(f\) is a rolling average filter with a window of \(12\) ms, and \(K\) is the number of trials in a batch of spike trains \(\mathbf{z}\). The statistics \(\mathcal{T}_{\text{neuron}}(\mathbf{z}_{j}^{\mathcal{D}})\) are computed similarly on the \(K^{\mathcal{D}}\) trials recorded during the session corresponding to neuron \(j\). We denote the statistics \(\mathcal{T}^{\prime}_{\text{neuron}}\) after normalizing each neuron's trial-averaged activity, and we define the trial-averaged loss function as follows: \[\mathcal{L}_{\text{neuron}}=\sum_{j}\|\mathcal{T}^{\prime}_{\text{neuron}}(\mathbf{z}_{j})-\mathcal{T}^{\prime}_{\text{neuron}}(\mathbf{z}_{j}^{\mathcal{D}})\|^{2}\;. \tag{5}\] It is expected from [8] that minimizing this loss function alone generates realistic trial-averaged statistics like the average neuron firing rate.

**Trial matching loss: fitting trial-to-trial variability.** Going beyond trial-averaged statistics, we now describe the _trial matching_ loss function to capture the main modes of trial-specific activity. From the previous neuroscience study [2], it appears that population activity in well-chosen areas is characteristic of the trial-specific variability. For instance, intense jaw movements are preceded by increased activity in the tongue-jaw motor cortices, and hit trials are characterized by a secondary transient appearing in the sensory cortices a hundred milliseconds after a whisker stimulation. To define single-trial statistics which can capture these features, we denote the population-averaged firing rate of an area \(A\) as \(\mathcal{T}_{A}(\mathbf{z}_{k})=\frac{1}{|A|}\sum_{j\in A}(\mathbf{z}_{j,k}*f)\) where \(|A|\) is the number of neurons in area \(A\), the smoothing filter \(f\) has a window size of \(48\) ms, and the resulting signal is downsampled to avoid unnecessary redundancy.
We write \(\mathcal{T^{\prime}}_{A}\) when each time bin is normalized to mean \(0\) and standard deviation \(1\) using the recorded trials, and we use \(\mathcal{T^{\prime}}_{A}\) as feature vectors to characterize the trial-to-trial variability in area \(A\). To construct a single feature vector encapsulating the joint activity dynamics in all areas and the jaw movements in a session, we concatenate all these feature vectors together into \(\mathcal{T^{\prime}}_{\mathrm{trial}}=(\mathcal{T^{\prime}}_{A1},\mathcal{T^{\prime}}_{A2},\mathbf{y}_{k}*f)\), where \(A1\) and \(A2\) are the areas recorded in this session. The challenging part is now to define the distance between the recorded statistics \(\mathcal{T}_{\mathrm{trial}}(\mathbf{z}^{\mathcal{D}})\) and the generated ones \(\mathcal{T}_{\mathrm{trial}}(\mathbf{z})\). Common choices of distances like the mean square error are not appropriate to compare distributions. This is because the order of trials in a batch of generated/recorded trials has no meaning a priori: there is no reason for the random noise of the first generated trial to correspond to the first recorded trial - rather, we want to compare unordered sets of trials and penalize the model if any generated trial is very far from every recorded trial. Formalizing this mathematically, we consider a distance between distributions inspired by the optimal transport literature. Since the plain mean-squared error cannot be used, we use the mean-squared error of the optimal assignment between pairs of recorded and generated trials: we select randomly \(K^{\prime}=\min(K,K^{\mathcal{D}})\) generated and recorded trials (\(K\) and \(K^{\mathcal{D}}\) are respectively the number of generated and recorded trials in one session), and this optimal assignment is formalized by the integer permutation \(\pi:\{1,\dots K^{\prime}\}\rightarrow\{1,\dots K^{\prime}\}\). Then, using the feature vector \(\mathcal{T}_{\mathrm{trial}}\) for any trial \(k\), we define the hard _trial matching_ loss function as follows: \[\mathcal{L}_{\mathrm{trial}}=\min_{\pi}\sum_{k}||\mathcal{T^{\prime}}_{\mathrm{trial}}(\mathbf{z}_{k})-\mathcal{T^{\prime}}_{\mathrm{trial}}(\mathbf{z}^{\mathcal{D}}_{\pi(k)})||^{2}\;. \tag{6}\] We compute this loss function identically on all the recorded sessions and take the averaged gradients to update the parameters. Each evaluation of this loss function involves the computation of the optimal trial assignment \(\pi\), which can be computed with the Hungarian algorithm [35] (see linear_sum_assignment for an implementation in scipy). This is not the only way to define a distance between distributions of statistics \(\mathcal{T^{\prime}}_{\mathrm{trial}}\). In fact, this choice poses a potential problem because the optimization over \(\pi\) is a discrete optimization problem, so we have to assume that \(\pi\) is a constant with respect to the parameters when computing the loss gradients. We also tested alternative choices relying on a relaxation of the hard assignment into a smooth and differentiable bi-stochastic matrix. This results in the soft _trial matching_ loss function, which replaces the optimization over \(\pi\) by the Sinkhorn divergence [12; 13] (see the geomloss package for an implementation in pytorch [13]).
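A minimal sketch of the hard _trial matching_ loss of Eq. (6), using scipy's `linear_sum_assignment` as suggested above; the feature matrices are assumed to be precomputed per-trial statistics \(\mathcal{T}^{\prime}_{\mathrm{trial}}\):

```python
import torch
from scipy.optimize import linear_sum_assignment

def hard_trial_matching_loss(T_model, T_data):
    # T_model, T_data: (K', F) tensors of per-trial feature vectors T'_trial.
    cost = torch.cdist(T_model, T_data) ** 2  # pairwise squared distances
    # Optimal assignment pi via the Hungarian algorithm. It is treated as a
    # constant with respect to the parameters, so gradients flow only through
    # the matched squared errors.
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    rows, cols = torch.as_tensor(rows), torch.as_tensor(cols)
    return ((T_model[rows] - T_data[cols]) ** 2).sum()
```

The soft variant would replace the discrete assignment with a Sinkhorn divergence, e.g. something along the lines of `geomloss.SamplesLoss("sinkhorn")(T_model, T_data)`.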
In practice, to minimize both \(\mathcal{L}_{\mathrm{trial}}\) (either the soft or hard version) and \(\mathcal{L}_{\mathrm{neuron}}\) simultaneously, we optimize them in an additive fashion with a parameter-free multi-task method from deep learning which re-weights the two loss functions to ensure that their gradients have comparable scales (see [36] for a similar implementation).

## 4 Simulation results

**Validation using an artificial dataset.** We generated an artificial dataset with two distinct areas of \(250\) neurons each to showcase the effect of trial variability. In this dataset, A1 (representing a sensory area) always responds to a stimulus while A2 (representing a motor area) responds to the stimulus in only \(80\%\) of the trials (the firing rates of neurons in the artificial dataset are shown in Figure 2B with light shades). This is a toy representation of the variability that is observed in the real data recorded in mice, so we construct the artificial data so that a recording resembles a hit trial ("hit-like") if the transient activity in A2 is higher than \(30\) Hz (otherwise it is a "miss-like" trial). From the results of our simulations in Figure 2B-C, we can observe that the models that use trial matching (either soft _trial matching_ or hard _trial matching_) can faithfully regenerate this bimodal response distribution ("hit-like" and "miss-like") in A2. In this dataset we saw little difference between the solutions of soft and hard _trial matching_; if anything, soft _trial matching_ reached its optimal performance with fewer iterations (see Appendix). As expected, when the model is only trained to minimize the neuron loss for trial-averaged statistics, it cannot stochastically generate this bimodal distribution and instead consistently generates a noisy average response.

**Delayed whisker tactile detection dataset.** We then apply our modeling approach to the real large-scale electrophysiology recordings from [2]. After optimization, we verify quantitatively that our model generates activity that is similar to the recordings in terms of trial-averaged statistics. First, we see that the 1500 neurons in the network exhibit a realistic diversity of averaged firing rates: the distribution of neuron firing rates is log-normal and closely matches the distribution extracted from the data in Figure 3B. Second, the single-neuron PSTHs of our model are a close match to the PSTHs from the recordings. This can be quantified by the Pearson trial-averaged correlation between the generated and held-out test trials, which we did not use for parameter fitting. We obtain an averaged Pearson correlation of \(0.30\pm 0.01\), which is very close to the Pearson correlation obtained when comparing the training and testing sets \(0.31\pm 0.01\). Figure 3C shows how the trial-averaged correlation is distributed over neurons. As expected, this trial-averaged metric is not affected if we do not use _trial matching_ (\(0.30\pm 0.01\)). To quantify how the models capture the trial-to-trial variability, we then assess how consistent the distributions of neural activity and jaw movement are between data and model. We therefore define the _trial-matched Pearson correlation_ to compare distributions of trial statistics \(\mathcal{T}^{\prime}_{\text{trial}}\), which are unordered sets of trials: we compute the optimal assignment \(\pi\) between generated and recorded trial pairs, and we report the averaged Pearson correlation over all trial pairs.
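A simplified sketch of this metric, reusing the optimal assignment from the trial matching loss (the exact feature construction and normalization are omitted):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def trial_matched_pearson(T_model, T_data):
    # T_model, T_data: (K', F) arrays of per-trial feature vectors.
    rows, cols = linear_sum_assignment(cdist(T_model, T_data) ** 2)
    corrs = [np.corrcoef(T_model[i], T_data[j])[0, 1]
             for i, j in zip(rows, cols)]
    return float(np.mean(corrs))  # averaged Pearson correlation over trial pairs
```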
Between the data and the model, we measure a _trial-matched Pearson correlation_ of \(0.48\pm 0.01\), with a performance ceiling at \(0.52\pm 0.01\) obtained by comparing the training and testing set directly (see Figure 3C for details). For reference, the model without _trial matching_ has a lower _trial-matched Pearson correlation_ of \(0.28\pm 0.003\).

Figure 2: **Artificial Dataset** **A**. Session stitching: every neuron from the recordings is uniquely mapped to a neuron from our model. For example, an excitatory neuron from our model that belongs in the putative A1 is mapped to an excitatory neuron "recorded" in A1. In our network, we constrain the connectivity so that only excitatory neurons can project across different brain regions. **B**. The first area (A1) responds equally in a hit-like and a miss-like trial, while the second area (A2) responds only in hit-like trials. A model that does not use trial matching cannot capture the bimodal distribution of A2. **C**. Distribution of the max firing rate of the population average of A2 from each trial. Only the trial matching algorithms retrieve the bimodal behavior of A2.

**Successful recovery of trial type distribution.** While the neuronal activity is recorded, the behavioral response of the animal is also variable. When mice receive a stimulation, they perform correctly with a \(66\%\) hit rate, while in the absence of a stimulus, mice still falsely lick with a \(20\%\) false alarm rate. Even in correct trials, the neural activity reflects variability which is correlated to uninstructed jaw and tongue movements [2]. We evaluate the distribution of trial types (hit, miss, correct rejection, and false alarm) from our fitted network model. Indeed, the 95% confidence intervals of the estimated trial type frequencies are always overlapping between the model and the data (see Figure 4A). In this figure, we classify the trial type with a nearest-neighbor-like classifier using only the neural activity (see Appendix). In contrast, a model without _trial matching_ would fail completely because it always produces averaged trajectories instead of capturing the multi-modal variability of the data, as seen in Figure 4A. With _trial matching_ it is even possible to classify trial types using the jaw movement. To define equivalent trial types in the model, we rely on the presence or absence of the stimulation and a classifier to identify a lick action given the jaw movements. This classifier is a multi-layer perceptron trained to predict a lick action on the water dispenser given the recorded jaw movements (as in the data, it sometimes happens that the model "moves" the jaw without inducing a lick action). After optimization with _trial matching_, since the jaw movement \(y^{t}\) is contained in the fitted statistics \(\mathcal{T}_{\text{trial}}\), the distribution of jaw movements is similar in the fitted model and the trial type distribution remains consistent. In Figure 4B we show population-averaged activity traces where the jaw is used to determine the trial type.

Figure 3: **Large-scale electrophysiology recordings dataset** **A**. Session stitching: every neuron from the recordings across sessions is uniquely mapped to a neuron from our model. For example, an excitatory neuron from our model that belongs in the putative area tjM1 is mapped to an excitatory neuron recorded from tjM1. In the pink box are the areas from which we decode the jaw trace. **B**. Baseline firing rate histogram, \(200\) ms before the whisker stimulus, from each neuron of our model and the recordings.
**C**. Left: Pearson correlation of the PSTH; the violin plots represent the Pearson correlations across neurons. Right: _trial-matched Pearson correlation_ of \(\mathcal{T}^{\prime}_{trial}\); the violin plots represent the distribution over \(200\) generated and recorded trial pairs.

**Unsupervised discovery of modes of variability.** So far we have analyzed whether the variability among the main four trial types was expressed in the model, but the existence of these four trial types is not enforced explicitly in the loss function. Rather, the _trial matching_ loss function aims to match the overall statistics of the distributions, and it has discovered these four main modes of trial variability without explicit supervision. A consequence is that our model has possibly generated other modes of variability which are needed for the model to explain the full distribution of recorded data. To display the full distribution of generated trials, we represent the neural activity of \(400\) generated trials in 2D in Figure 4C. Formally, we apply UMAP to the sub-selection of \(\mathcal{T}_{\text{trial}}\) which excludes the jaw components: \((\mathcal{T}_{wS1},\ldots\mathcal{T}_{ALM})\). Importantly, the representation of the trial distribution in a 2D projection is only possible with a generative model like ours. Otherwise, it would be nontrivial to define feature vectors for unique recorded trials because of the missing data: in each session, only a couple of areas are recorded simultaneously. However, to confirm that the generated distribution is consistent with the data, we display template vectors for each trial condition \(c\) that are calculated from the recorded data. These templates are drawn with stars in Figure 4C and they are computed as follows: the coefficient \(\mathcal{T}_{\text{A,c}}^{\mathcal{D}}\) of this template vector is computed by averaging the population activity of area \(A\) in all recorded trials from all sessions (see Appendix for details); these averaged vectors are then concatenated and projected into the 2D UMAP space. The emerging distribution in this visualization is grouped in clusters. We observe that the template vectors representing the correct rejection, miss, and false alarm trials are located at the center of the corresponding cluster of generated trials. More surprisingly, the generated hit trials are split into two clusters (see the two boxed clusters in Figure 4C). This can be explained by a simple feature:

Figure 4: **Emergent trial types** **A**. Trial type distribution from the recordings and the models. The whiskers show the \(95\%\) confidence interval of the trial type frequency. In this panel, trial types are determined with the template matching method to avoid disadvantaging models without _trial matching_, which have almost no variability in the jaw movement; see Appendix. **B**. Population- and trial-averaged neuronal activity per area and per Hit and Miss trial type from the \(400\) simulated trials of the model against the averaged recordings from the testing set. **C**. Two-dimensional UMAP representation of the \(\mathcal{T}_{trial}\) of \(400\) simulated trials. The jaw movement is not used in this representation. **D**. For the model, we separated the active hit and quiet hit trials based on their location in the UMAP. For the data, we separated the active hit and quiet hit trials based on the jaw movement as in [2].

85% of the generated hit trials on the left-hand cluster of panel 4C have intense jaw movements during the
delay period (\(\max_{t}|y^{t}-y^{t-1}|>4\delta\), where \(\delta\) is the standard deviation of \(|y^{t}-y^{t-1}|\) in the \(200\) ms before whisker stimulation). In fact, a similar criterion had been used in [2] to separate the hit trials in the recorded data, so we also refer to them as the "active hit" and "quiet hit" trials and show the population activity in Figure 4D. It shows that our algorithm has captured, without supervision, the same partition of trial types that neuroscientists have used to describe this dataset. We conclude that our modeling approach can be used for a hypothesis-free identification of modes of trial-to-trial variability, even when they reflect task-irrelevant behavior.

## 5 Discussion

We introduced a generative modeling approach where a data-constrained RSNN is fitted to multi-session electrophysiology data. The two major innovations of this paper are (1) the technical progress towards multi-session RSNNs fitted with automatic differentiation, and (2) a _trial matching_ loss function to match the trial-to-trial variability in recorded and generated data.

**Interpretable mechanistic model of activity and behavior.** Our model has a radically different objective in comparison with other deep-learning models: our RSNN is intended to be biophysically interpretable. In the long term, we hope that this method will be able to capture biological mechanisms (e.g. predicting network structure, causal interaction between areas and anatomical connectivity), but in this paper, we have focused on numerical and methodological questions which bring us one step closer to this long-term objective.

**Mechanisms of latent dynamics.** A long-standing debate in neuroscience is whether the brain computes with low-dimensional latent representations and how that is implemented in a neural circuit. Deep auto-encoders of neural activity like LFADS [22] can indeed generate trial-to-trial variability from low-dimensional latent representations. By construction, the variability is sourced by the latent variable which contains all the trial-specific information. This is in stark contrast with our approach, where we see the emergence of structured manifolds in the trial-to-trial variability of the RSNN (see the UMAP representation of Figure 4C), although we did not enforce the presence of low-dimensional latent dynamics. Structure in the trial-to-trial variability emerges because the RSNN is capable of transforming the unstructured noise sources (stochastic spikes and Gaussian input current) into a low-dimensional trial-to-trial variability - a typical variational auto-encoder setting would not achieve this. Note, however, that it is also possible to add a random low-dimensional latent as a source of low-dimensional variability like in LFADS. In the Appendix, we reproduce our results on the multi-session dataset from [2] while assuming that all voltages \(v^{t}_{i,k}\) have a trial-specific excitability offset \(\xi_{i,k}\) using a 5-dimensional Gaussian noise \(\mathbf{\psi}_{k}\) and a one-hidden-layer perceptron \(F_{\theta}\) such that \(\xi_{i,k}=F_{\theta,i}(\mathbf{\psi}_{k})\). We observe that this latent noise model drastically accelerates the optimization, probably because \(\xi_{i,k}\) is an ideal noise source for minimizing \(\mathcal{L}_{\mathrm{trial}}\). However, the final solution achieves similar fitting performance metrics, so our method demonstrates that the extra assumption of a low-dimensional input is not necessary to generate realistic variability.
Arguably, providing this low-dimensional input might even be counterproductive if the end goal is to identify the mechanism by which the circuit produces the low-dimensional dynamics.

**Alternative loss functions to capture variability.** The main alternative methods to constrain the trial-to-trial variability would be likelihood-based approaches [6; 15] or spike-GANs [27; 33]. These methods are appealing as they do not depend on the choice of trial statistics \(\mathcal{T}_{\mathrm{trial}}\). Since these methods were never applied with a multi-session data-constrained RSNN, we explored how to extend them to our setting and compared the results. We tested these alternatives on the artificial dataset in the Appendix. The likelihood of the recorded spike trains [6; 15] cannot be defined with multiple sessions because we cannot clamp neurons that are not recorded (see [8] for details). The closest implementation that we could consider was to let the network simulate the data "freely", which therefore requires an optimal assignment between recorded and generated data, so it is a form of _trial-matched likelihood_. With this loss function, we could not retrieve the bimodal hit versus miss trial type distribution unless it is optimized jointly with \(\mathcal{L}_{\mathrm{trial}}\). We also tested the implementation of a spike-GAN discriminator. In GANs the min-max optimization is notoriously hard to tune, and we were unable to train our generator with a generic spike-GAN discriminator from scratch (probably because the biological constraints of our generator affect the robustness of the optimization). In our hands, it only worked when the GAN discriminator was fed directly with the trial statistics \(\mathcal{T}_{\text{trial}}\) and the network was jointly fitted to the trial-averaged loss \(\mathcal{L}_{\text{neuron}}\). This shows that a GAN objective and the _trial matching_ loss function play a similar role. We conclude that both of these clamping-free methods are promising to fit data-constrained RSNNs. What differs between them, however, is that _trial matching_ replaces the discriminator with the optimal assignment \(\pi\) and the statistics \(\mathcal{T}\), which are parameter-free, making them easy to use and numerically robust. It is conceivable that future work will obtain the best results by combining _trial matching_ with other GAN-like generative methods.

## Acknowledgments and Disclosure of Funding

This research was supported by the Swiss National Science Foundation (no. 31003A_182010, TMAG-3_209271, 200020_207426), and Sinergia Project CRSII5_198612. Many thanks to Lenaic Chizat, James Isbister, Shuqi Wang, and Vahid Esmaeili for their helpful discussions.
2301.04875
Color-NeuraCrypt: Privacy-Preserving Color-Image Classification Using Extended Random Neural Networks
In recent years, with the development of cloud computing platforms, privacy-preserving methods for deep learning have become an urgent problem. NeuraCrypt is a private random neural network for privacy-preserving that allows data owners to encrypt the medical data before the data uploading, and data owners can train and then test their models in a cloud server with the encrypted data directly. However, we point out that the performance of NeuraCrypt is heavily degraded when using color images. In this paper, we propose a Color-NeuraCrypt to solve this problem. Experiment results show that our proposed Color-NeuraCrypt can achieve a better classification accuracy than the original one and other privacy-preserving methods.
Zheng Qi, AprilPyone MaungMaung, Hitoshi Kiya
2023-01-12T08:47:33Z
http://arxiv.org/abs/2301.04875v1
Color-NeuraCrypt: Privacy-Preserving Color-Image Classification Using Extended Random Neural Networks

###### Abstract

In recent years, with the development of cloud computing platforms, privacy-preserving methods for deep learning have become an urgent problem. NeuraCrypt is a private random neural network for privacy preservation that allows data owners to encrypt medical data before uploading, so that data owners can train and then test their models in a cloud server with the encrypted data directly. However, we point out that the performance of NeuraCrypt is heavily degraded when using color images. In this paper, we propose Color-NeuraCrypt to solve this problem. Experiment results show that our proposed Color-NeuraCrypt can achieve a better classification accuracy than the original one and other privacy-preserving methods.

Zheng Qi, AprilPyone MaungMaung and Hitoshi Kiya Tokyo Metropolitan University 6-6, Asahigaoka, Hino-shi, Tokyo, 191-0065, Japan Phone/FAX: +81-042-585-8454 E-mail: {qi-zheng@ed., apmaung@, kiya@}tmu.ac.jp

## 1 Introduction

In recent years, the spread of deep neural networks (DNNs) [1] has greatly contributed to solving complex tasks for many applications, and it has become very popular for data owners to train DNNs on large amounts of data in cloud servers. However, data privacy, such as that of personal medical records, may be compromised in that process, because a third party can access the uploaded data illegally, so it is necessary to protect data privacy in cloud environments, and privacy-preserving methods for deep learning have become an urgent challenge [2]. One of the most efficient solutions is to encrypt data before uploading, so that data owners can train and then test their DNNs in a cloud server with the encrypted data directly [3, 4, 5, 6]. NeuraCrypt [7] is a private random neural network that allows us to encrypt data before uploading. Vision Transformer (ViT) [8] models have been demonstrated to maintain a high classification performance for medical images (with one channel) under the use of NeuraCrypt, but we point out that the performance of NeuraCrypt is heavily degraded when using color images. In this paper, we extend NeuraCrypt from one channel to three channels, called Color-NeuraCrypt, to avoid this performance degradation. Experiment results show that our proposed Color-NeuraCrypt achieved a better classification accuracy than the original one and outperformed other privacy-preserving methods on the CIFAR-10 dataset.

## 2 Related Work

Lightweight privacy-preserving methods, called learnable encryption, have almost the same usage scenario as the random neural network. Generally, privacy-preserving image classification methods have to satisfy two requirements: high classification accuracy and strong robustness against various attacks. Tanaka first introduced a block-wise learnable image encryption (LE) method with an adaptation layer [9], which is used prior to a classifier to reduce the influence of image encryption. Another encryption method is a pixel-wise encryption (PE) method in which negative-positive transformation and color component shuffling are applied without using any adaptation layer [10]. However, neither encryption method is robust enough against ciphertext-only attacks, as shown in [11]. To enhance the security of encryption, LE was extended by adding a block scrambling step and a pixel encryption operation with multiple keys (hereinafter denoted as ELE) [12].
However, ELE still has a lower accuracy than that of using plain images. Recently, block-wise learnable encryption methods with an isotropic network have been proposed to reduce the influence of image encryption [13, 14]. Meanwhile, NeuraCrypt was proposed with ViT and achieved a good performance on grayscale medical images, but its performance degraded heavily for color images. In addition, it cannot be directly applied to a standard pre-trained ViT. Accordingly, we propose a novel random neural network called Color-NeuraCrypt to address these issues of the conventional methods.

Figure 1: Framework of proposed method.

## 3 Proposed Method

### Overview

Figure 1 depicts the framework of the proposed scheme. A user encrypts training images by using a random neural network and sends the encrypted images to a cloud provider. Next, the cloud provider trains a ViT model with the uploaded encrypted images without perceiving any visual information. After training, the user also encrypts the testing images using the same random neural network as in training, and sends them to the cloud server. Data privacy can be protected in both the training and testing processes in this framework.

### Color-NeuraCrypt

NeuraCrypt is a randomly constructed neural network to encode input data [7], as shown in Fig. 2(a). It consists of patch embedding, several blocks of a \(1\times 1\) convolutional layer, position embedding, and linear projection. It can achieve a high classification accuracy for grayscale medical images, but its performance significantly drops for color images (see Section 4). To avoid performance degradation for color images, we propose a novel random neural network called Color-NeuraCrypt. Figure 2(b) shows the architecture of Color-NeuraCrypt. There are two major differences between the two random neural networks:

* The output of NeuraCrypt is a patch representation. In contrast, the output of Color-NeuraCrypt is an image because we add a pixel shuffling layer at the end of Color-NeuraCrypt to reshape the patch representation. Figure 3 shows an example of plain and encrypted images.
* NeuraCrypt randomly permutes patches at the output independently for each image in patch shuffling. In contrast, to align with a standard ViT, we remove the patch shuffling step but still retain the random position embedding to hide the spatial information of plain images.

Furthermore, we utilize a standard pre-trained ViT, which has trainable patch embedding and position embedding. We fine-tune ViT with encrypted images for training and testing.

## 4 Experiments

We conducted image classification experiments on the MNIST [17] and CIFAR-10 [18] datasets. The MNIST dataset consists of 70,000 grayscale images (dimension of \(1\times 28\times 28\)) of handwritten digits with ten classes, where 60,000 images are for training and 10,000 for testing. The CIFAR-10 dataset consists of 60,000 color images (dimension of \(3\times 32\times 32\)), where 50,000 images are for training and 10,000 for testing.

Figure 2: Architecture of two random neural networks (a) NeuraCrypt (b) Color-NeuraCrypt (proposed)

Figure 3: Example of plain and encrypted images.

We used a PyTorch implementation of ViT1 and fine-tuned the ViT-B_16 model, which was pre-trained with the ImageNet21k dataset. To maximize the classification performance, we followed the training settings from [8] except for the learning rate.
The parameters of the stochastic gradient descent (SGD) optimizer for encrypted images that we used were: a momentum of 0.9, a weight decay of 0.0005, and a learning rate value of 0.03-0.1. In addition, the depth of NeuraCrypt and Color-NeuraCrypt was set to 4.

Footnote 1: [https://github.com/jeonsworld/ViT-pytorch](https://github.com/jeonsworld/ViT-pytorch)

As shown in Table 1, ViT models with NeuraCrypt performed with 97.93% accuracy on the MNIST dataset, which is similar to the accuracy reported for medical images. However, NeuraCrypt achieved unsatisfactory accuracy on the CIFAR-10 dataset. These results confirmed that NeuraCrypt is very effective on grayscale images but difficult to apply to color images. Table 1 also shows the classification performance of other privacy-preserving methods. Our Color-NeuraCrypt outperformed not only NeuraCrypt but also the two block-wise encryption methods (ELE and EtC [16]) on the CIFAR-10 dataset, so the proposed method was confirmed to be more suitable for color images.

## 5 Conclusion and Future Work

In this research, we proposed a random neural network, called Color-NeuraCrypt, for privacy preservation. Color images encrypted by Color-NeuraCrypt can be directly applied to ViT models for both training and testing. Experiment results showed that our Color-NeuraCrypt achieved a better accuracy than NeuraCrypt and other privacy-preserving methods on color images. As a random neural network is considered an encryption method for privacy preservation, its security needs to be evaluated. For example, a plain and an encrypted sample can be correctly matched using the algorithm in [19]. Furthermore, a random neural network can hide the visual information of plain images, but it is hard to conceal some transparent information, such as the distribution of the dataset and the encryption scheme. An attacker may perform a ciphertext-only attack via that information to reconstruct visual information from encrypted images.

## Acknowledgment

This study was partially supported by JSPS KAKENHI (Grant Number JP21H01327).
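To make the architecture of Section 3 concrete, here is a minimal PyTorch sketch of a random, frozen encoder in the spirit of Color-NeuraCrypt (Fig. 2(b)). The embedding dimension, the nonlinearity inside the \(1\times 1\) conv blocks, and the default sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ColorNeuraCryptSketch(nn.Module):
    def __init__(self, patch=16, dim=256, depth=4, img_size=224):
        super().__init__()
        grid = img_size // patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patch embedding
        self.blocks = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(dim, dim, kernel_size=1), nn.ReLU())
            for _ in range(depth)                                        # 1x1 conv blocks
        ])
        self.pos = nn.Parameter(torch.randn(1, dim, grid, grid))         # random position embedding
        self.proj = nn.Conv2d(dim, 3 * patch * patch, kernel_size=1)     # linear projection
        self.shuffle = nn.PixelShuffle(patch)                            # pixel shuffling back to an image
        for p in self.parameters():
            p.requires_grad = False  # weights stay random and fixed, known only to the data owner

    def forward(self, x):
        h = self.blocks(self.embed(x)) + self.pos
        return self.shuffle(self.proj(h))  # encrypted image with the same size as x
```

An encrypted image produced this way has the same shape as the plain input, so it can be fed directly to a standard pre-trained ViT (e.g. ViT-B_16 at \(224\times 224\)) and fine-tuned with the SGD settings listed above.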
2305.18773
On a neural network approach for solving potential control problem of the semiclassical Schrödinger equation
Robust control design for quantum systems is a challenging and key task for practical technology. In this work, we apply neural networks to learn the control problem for the semiclassical Schr\"odinger equation, where the control variable is the potential given by an external field that may contain uncertainties. Inspired by a relevant work [29], we incorporate the sampling-based learning process into the training of networks, while combining with the fast time-splitting spectral method for the Schr\"odinger equation in the semiclassical regime. The numerical results have shown the efficiency and accuracy of our proposed deep learning approach.
Yating Wang, Liu Liu
2023-05-30T06:14:20Z
http://arxiv.org/abs/2305.18773v1
On a neural network approach for solving potential control problem of the semiclassical Schrodinger equation

###### Abstract

Robust control design for quantum systems is a challenging and key task for practical technology. In this work, we apply neural networks to learn the control problem for the semiclassical Schrodinger equation, where the control variable is the potential given by an external field that may contain uncertainties. Inspired by a relevant work [29], we incorporate the sampling-based learning process into the training of networks, while combining it with the fast time-splitting spectral method for the Schrodinger equation in the semiclassical regime. The numerical results have shown the efficiency and accuracy of our proposed deep learning approach.

## 1 Introduction

Control of quantum phenomena has been an important scientific problem in the emerging quantum technology [16]. The control of quantum electronic states in physical systems has a variety of applications such as quantum computers [4], control of photochemical processes [38] and semiconductor lasers [18]. Detailed overviews of the quantum control field can be found in survey papers and monographs [15; 43]. One issue of controllability theory [35] is to assess the ability to steer a quantum system from an arbitrary initial state to a targeted final state, under the impact of a control field such as a potential function, given possibly noisy observation data. Uncertainty Quantification (UQ) has drawn much attention over the past decade. In simulating physical systems, which are often modeled by differential equations, there are inevitably modeling errors and imprecise measurements of the initial data or background coefficients, which may bring uncertainties to the models. In this project, we study the semiclassical Schrodinger equation with an external potential that may contain uncertainties and is treated as the control variable. Let \(\Omega\) be a bounded domain in \(\mathbb{R}\); the Schrodinger equation in the semiclassical regime is described by a wave function \(\psi:\mathcal{Q}\mapsto\mathbb{C}\), \[\left\{\begin{array}{l}i\varepsilon\partial_{t}\psi^{\varepsilon}=-\frac{\varepsilon^{2}}{2}\Delta\psi^{\varepsilon}+V(x,\boldsymbol{z})\psi^{\varepsilon},\qquad(x,t)\in\mathcal{Q}\times(0,T),\\ \psi|_{t=0}=\psi_{0}(x),\qquad x\in\Omega\subset\mathbb{R},\end{array}\right. \tag{1.1}\] where \(0<\varepsilon\ll 1\) is the scaled Planck constant describing the microscopic and macroscopic scale ratio. Here the solution \(\psi=\psi(t,x,\boldsymbol{z})\) is the electron wave function with initial condition \(\psi_{0}(x)\). The potential \(V(x,\mathbf{z})\in L^{\infty}(\Omega\times I_{\mathbf{z}})\) is the control variable that models the external field and is spatially dependent. Periodic boundary conditions are assumed in our problem. The uncertainty is described by the random variable \(\mathbf{z}\), which lies in the random space \(I_{\mathbf{z}}\) with a probability measure \(\pi(\mathbf{z})d\mathbf{z}\). We introduce the notation for the expected value of \(f(\mathbf{z})\) in the random variable \(\mathbf{z}\), \[\langle f\rangle_{\pi(\mathbf{z})}=\int f(\mathbf{z})\pi(\mathbf{z})d\mathbf{z}. \tag{1.2}\] The solution to the Schrodinger equation is a complex-valued wave function, whose nonlinear transforms lead to probabilistic measures of the physical observables.
The primary physical quantities of interest include the position density, \[n^{\varepsilon}=|\psi^{\varepsilon}|^{2}, \tag{1.3}\] and the current density \[J^{\varepsilon}=\varepsilon\operatorname{Im}\left(\overline{\psi^{\varepsilon}}\nabla\psi^{\varepsilon}\right)=\frac{\varepsilon}{2i}\left(\overline{\psi^{\varepsilon}}\nabla\psi^{\varepsilon}-\psi^{\varepsilon}\nabla\overline{\psi^{\varepsilon}}\right). \tag{1.4}\] At each fixed \(\mathbf{z}\), with \(V\) being continuous and bounded, the Hamiltonian operator \(H^{\varepsilon}\) defined by \[H^{\varepsilon}\psi^{\varepsilon}=-\frac{\varepsilon^{2}}{2}\Delta\psi^{\varepsilon}+V(x,\mathbf{z})\psi^{\varepsilon}\] maps functions in \(H^{2}(\mathbb{R}^{d})\) to \(L^{2}(\mathbb{R}^{d})\) and is self-adjoint. The operator \(\frac{1}{i\varepsilon}H^{\varepsilon}\) generates a unitary, strongly continuous semi-group on \(L^{2}(\mathbb{R}^{d})\), which guarantees a unique solution of the Schrodinger equation (1.1) that lies in the space [39]: \[W(0,T):=\left\{\phi\in L^{2}((0,T);H^{1}_{0}(\Omega;\mathbb{C}))\Big{|}\frac{d\phi}{dt}\in L^{2}((0,T);H^{-1}(\Omega;\mathbb{C}))\right\}.\] As a literature review, we mention that there have been several works [2, 27] on boundary control for the Schrodinger equation (1.1), where the observation is taken from the Dirichlet or Neumann boundary data. In some references such as [7], the authors consider the quantum system with the evolution of its state \(|\psi(t)\rangle\) described by the Schrodinger equation \(\frac{d}{dt}|\psi(t)\rangle=-iH(t)|\psi(t)\rangle\) with the initial condition \(|\psi(0)\rangle=|\psi_{0}\rangle\). The Hamiltonian \(H(t)\) there corresponds to a time-dependent control variable that contains random parameters. Their goal is to drive the quantum ensemble from an initial state \(|\psi_{0}\rangle\) to the target state \(|\psi_{\text{target}}\rangle\) by employing a gradient-based learning method to optimize the control field. In [5, Section 7.3], the control problem of a charged particle in a well potential was formulated, where in their setting the potential field is time-dependent. We mention some other relevant works on stability estimates and the semiclassical limit of inverse problems for the Schrodinger equation [3, 8, 17, 26, 39]. We continue to mention several studies that are related to inverse problems for the Schrodinger equation or other models. For relevant inverse boundary value problems on this topic, there are existing iterative methods applied to the Helmholtz equation [31], where one starts with an initial guess of the boundary condition, then adjusts it iteratively by minimizing functionals such as error norms between the calculated and measured data. This could be extremely time-consuming since at each iteration step a forward problem needs to be solved. In the partial boundary data situation, there has been research on the linearized inverse problem of recovering the potential function for the time-independent Schrodinger equation [47]. Moreover, for inverse potential problems, the well-posedness of the continuous regularized formulation was analyzed in both elliptic and parabolic problems, with conditional stability estimates and error analysis for the discrete scheme studied in [10, 22]. The desired control problem can be described as follows: To what extent can the wave solution \(\psi^{\varepsilon}\) of (1.1) be perturbed by the control field, in our case the potential function \(V\), in order to reach the desired target state at the final time \(T\)?
The above question can be reformulated into an _optimal control_ problem. At the final time \(T\), given the target state \(\psi_{\text{\it target}}\), let \(V\) be approximated by a neural network parameterized by \(\boldsymbol{\theta}\), and let \(\lambda>0\) be a regularization coefficient; we aim to solve the following minimization problem: \[\left\{\begin{array}{l}\min_{\boldsymbol{\theta}}J_{\lambda}(V(\boldsymbol{\theta}))=\min_{\boldsymbol{\theta}}||\psi^{\varepsilon}(x,T;\boldsymbol{\theta})-\psi_{\text{\it target}}||^{2}_{L^{2}(\Omega)}+\lambda\,||V(x;\boldsymbol{\theta})||^{2}_{L^{2}(\Omega)},\\ \text{such that }\ \ \ \ i\varepsilon\partial_{t}\psi^{\varepsilon}(x,t;\boldsymbol{\theta})=-\frac{\varepsilon^{2}}{2}\Delta\psi^{\varepsilon}(x,t;\boldsymbol{\theta})+V(x;\boldsymbol{\theta})\psi^{\varepsilon}(x,t;\boldsymbol{\theta}),\\ \psi^{\varepsilon}(x,t=0;\boldsymbol{\theta})=\psi_{0}(x).\end{array}\right. \tag{1.5}\] if \(V\) is a deterministic potential, and \[\left\{\begin{array}{l}\min_{\boldsymbol{\theta}}J_{\lambda}(V(\boldsymbol{\theta}))=\min_{\boldsymbol{\theta}}||\psi^{\varepsilon}(x,T;\boldsymbol{\theta},\boldsymbol{z})-\psi_{\text{\it target}}(\boldsymbol{z})||^{2}_{L^{2}(\Omega\times I_{\boldsymbol{z}})}+\lambda\,||V(x;\boldsymbol{\theta},\boldsymbol{z})||^{2}_{L^{2}(\Omega\times I_{\boldsymbol{z}})},\\ \text{such that }\ \ \ \ i\varepsilon\partial_{t}\psi^{\varepsilon}(x,t;\boldsymbol{\theta},\boldsymbol{z})=-\frac{\varepsilon^{2}}{2}\Delta\psi^{\varepsilon}(x,t;\boldsymbol{\theta},\boldsymbol{z})+V(x;\boldsymbol{\theta},\boldsymbol{z})\psi^{\varepsilon}(x,t;\boldsymbol{\theta},\boldsymbol{z}),\\ \psi^{\varepsilon}(x,t=0;\boldsymbol{\theta},\boldsymbol{z})=\psi_{0}(x;\boldsymbol{z}).\end{array}\right. \tag{1.6}\] if the potential \(V\) contains uncertainty and the random variable is \(\boldsymbol{z}\). In each particular problem setting, a discretized form of the above loss function will be presented. We now highlight the main contributions of our work:

1. We take advantage of the rising trend of machine learning and use neural networks to approximate the control variable, considered as the potential field in the Schrodinger equation. Both deterministic and stochastic control functions are considered. A fully-connected neural network is used for the deterministic problem, and the DeepONet [30] is applied in the stochastic case.
2. During the training process, the Schrodinger equation in the semiclassical regime is solved using the fast time-splitting spectral method to improve the computational efficiency and accuracy of our algorithm.
3. We study and compare the cases in which the observation data is noise-free or noisy, and propose different training strategies. For data without noise, the popular stochastic gradient descent (SGD) method is used. For noisy data, we consider a Bayesian framework and adopt the stochastic gradient Markov chain Monte Carlo (MCMC) approach to obtain robust learning results.

The rest of the paper is organized as follows. In Section 2, we discuss the oscillatory behavior of the solution to the semiclassical Schrodinger equation in the random variable and mention the numerical challenges even for forward UQ problems. Our main methodology of using the learning-based technique to solve the optimization problem (1.6) will be proposed in Section 3, with the numerical scheme for the forward problem introduced in subsection 3.1 and several neural network approaches described in subsection 3.2.
We conduct extensive numerical experiments for both the deterministic and stochastic potential control problems and present the results in Section 4. Conclusion and future work will be addressed lastly.

## 2 Regularity of the solution in the random space

The semi-classical Schrodinger equation is a family of dispersive wave equations parameterized by \(\varepsilon\ll 1\); it is well known that the equation propagates \(O(\varepsilon)\)-scaled oscillations in space and time. However, for UQ problems it is not obvious whether the small parameter \(\varepsilon\) induces oscillations in the random variable \(\mathbf{z}\). We conduct a regularity analysis of \(\psi\) in the random space, which enables us to study the oscillatory behavior of the solution there. To investigate the regularity of the wave function in the \(\mathbf{z}\) variable, we check the following averaged norm \[||\psi||_{\Gamma}:=\left(\int_{I_{z}}\int_{\mathbb{R}^{d}}|\psi(t,\mathbf{x}, \mathbf{z})|^{2}\ d\mathbf{x}\,\pi(\mathbf{z})d\mathbf{z}\right)^{1/2}. \tag{2.1}\] First, observe that \(\forall\,\mathbf{z}\in I_{\mathbf{z}}\), \[\frac{\partial}{\partial t}\|\psi^{\varepsilon}\|_{L^{2}_{\mathbf{x}}}^{2}(t, \mathbf{z})=0,\] thus \[\frac{d}{dt}\|\psi^{\varepsilon}\|_{\Gamma}^{2}=0,\] which indicates that the \(\Gamma\)-norm of the wave function \(\psi^{\varepsilon}\) is conserved in time, \(\|\psi^{\varepsilon}\|_{\Gamma}(t)=\|\psi^{\varepsilon}_{\mathrm{in}}\|_{\Gamma}\). Below we show that \(\psi^{\varepsilon}\) has \(\varepsilon\)-scaled oscillations in \(\mathbf{z}\). As an example, we analyze the first-order partial derivative of \(\psi^{\varepsilon}\) in \(z_{1}\) and denote \(\psi^{1}=\psi^{\varepsilon}_{z_{1}}\) and \(V^{1}=V_{z_{1}}\). By differentiating the semi-classical Schrodinger equation (1.1) with respect to \(z_{1}\), one gets \[i\varepsilon\psi^{1}_{t}=-\frac{\varepsilon^{2}}{2}\Delta_{\mathbf{x}}\psi^{ 1}+V^{1}\psi^{\varepsilon}+V\psi^{1}.\] Direct calculation leads to \[\begin{split}\frac{d}{dt}\|\psi^{1}\|_{\Gamma}^{2}&=\int\bigl{(}\psi^{1}_{t}\bar{\psi}^{1}+\psi^{1}\bar{\psi}^{1}_{t}\bigr{)}\pi\,d\mathbf{x}d\mathbf{z}\\ &=\int\Bigl{(}\frac{1}{i\varepsilon}V^{1}\psi^{\varepsilon}\bar{\psi}^{1}-\frac{1}{i\varepsilon}V^{1}\psi^{1}\bar{\psi}^{\varepsilon}\Bigr{)}\pi\,d\mathbf{x}d\mathbf{z}\\ &\leq\frac{2}{\varepsilon}\|\psi^{1}\|_{\Gamma}\,\|V^{1}\psi^{\varepsilon}\|_{\Gamma}\,,\end{split}\] where we use the Cauchy-Schwarz inequality and Jensen's inequality in the last step, namely \[\left|\int V^{1}\psi^{\varepsilon}\bar{\psi}^{1}dx\right|\leq\left(\int|V^{1}\psi^{\varepsilon}|^{2}dx\right)^{1/2}\left(\int|\psi^{1}|^{2}dx\right)^{1/2},\] \[\int\!\!\int V^{1}\psi^{\varepsilon}\bar{\psi}^{1}dx\,\pi(z)dz\leq\left(\int\left(\int V^{1}\psi^{\varepsilon}\bar{\psi}^{1}dx\right)^{2}\pi(z)dz\right)^{1/2}\leq||V^{1}\psi^{\varepsilon}||_{\Gamma}\,||\psi^{1}||_{\Gamma}\,.\] Dividing both sides by \(2\|\psi^{1}\|_{\Gamma}\), we therefore obtain \[\frac{d}{dt}\|\psi^{1}\|_{\Gamma}\leq\frac{1}{\varepsilon}\|V^{1}\psi^{\varepsilon}\|_{\Gamma}\,.\] For \(t=O(1)\), since \(\|V^{1}\psi^{\varepsilon}\|_{\Gamma}=O(1)\) (the \(\Gamma\)-norm of \(\psi^{\varepsilon}\) is conserved and \(V^{1}\) is bounded), this pessimistic estimate implies \[\|\psi^{1}\|_{\Gamma}=O\big{(}\varepsilon^{-1}\big{)}.\] To summarize, in this part we emphasize the oscillatory behavior of the solution \(\psi\) in the random space, which brings numerical challenges for the forward UQ problem.
If one directly adopts the generalized polynomial chaos (gPC)-based Galerkin methods or stochastic collocation methods [44] for the semi-classical Schrodinger equation with random parameters, \(\varepsilon\)-dependent basis functions or quadrature points are needed to get an accurate approximation. There has been some work developed for this forward problem [11; 23], and our inverse problem shares a similar difficulty. In future work, to more efficiently sample from the random space, we will adopt numerical solvers that can resolve the \(\varepsilon\)-oscillations in the random variable. For simplicity of notation, we will omit the superscript \(\varepsilon\) in \(\psi^{\varepsilon}\) and use \(\psi\) in the rest of the paper.

## 3 Optimal control using neural networks

### The time-splitting spectral method

In the semiclassical regime where \(\varepsilon\ll 1\), the solution to the Schrodinger equation (1.1) is oscillatory both temporally and spatially, with an oscillation frequency of \(O(1/\varepsilon)\). This poses tremendous computational challenges, since one needs to numerically resolve, both spatially and temporally, the small wavelength of \(O(\varepsilon)\). The time-splitting spectral (TSSP) method, studied by Bao, Jin and Markowich in [1], is one of the most popular and highly accurate methods for such problems, where the meshing strategy \(\Delta t=O(\varepsilon)\) and \(\Delta x=O(\varepsilon)\) is required for moderate values of \(\varepsilon\). Moreover, in order to compute accurately just the physical observables (such as position density, flux, and energy), one still needs to resolve the spatial oscillations, but the time step \(\Delta t=o(1)\) is much more relaxed [1; 20; 24]. Recently, a rigorous uniform-in-\(\varepsilon\) error estimate was obtained in [19], by using errors measured by a pseudo-metric in analogy to the Wasserstein distance between a quantum density operator and a classical density in phase space, with the regularity requirement for \(V\) being \(V\in C^{1,1}\). In this section, we review the first-order time-splitting spectral method studied in [1, Section 2]. Consider a one-dimensional spatial variable and a given potential \(V(x)\). We choose the spatial mesh size \(h=(b-a)/M\) for an even integer \(M\), and the time step \(k=\Delta t\); the grid points and time levels are \[x_{j}:=a+jh,\qquad t_{n}:=nk,\qquad j=0,1,\cdots,M,\quad n=0,1,2,\cdots.\] For the time discretization, from \(t=t_{n}\) to \(t=t_{n+1}\), the Schrodinger equation (1.1) is solved in the following two steps. First, one solves \[\varepsilon\psi_{t}-i\frac{\varepsilon^{2}}{2}\psi_{xx}=0, \tag{3.1}\] and then, in the second step, \[\varepsilon\psi_{t}+iV(x)\psi=0. \tag{3.2}\] We discretize (3.1) in space by the spectral method, then integrate in time _exactly_. Note that the ODE (3.2) can be solved exactly. Let \(\Psi^{n}_{j}\) denote the numerical approximation of the analytic solution \(\psi(t_{n},x_{j})\) to the Schrodinger equation (1.1).
Then the discretized scheme is given by \[\begin{split}&\Psi^{*}_{j}=\frac{1}{M}\sum_{l=-M/2}^{M/2-1}e^{-i \varepsilon k\mu_{l}^{2}/2}\,\hat{\Psi}^{n}_{l}\,e^{i\mu_{l}(x_{j}-a)},\qquad j =0,1,2,\cdots,M-1,\\ &\Psi^{n+1}_{j}=e^{-iV(x_{j})k/\varepsilon}\Psi^{*}_{j},\end{split} \tag{3.3}\] where the Fourier coefficients of \(\Psi^{n}\) are defined as \[\hat{\Psi}^{n}_{l}=\sum_{j=0}^{M-1}\Psi^{n}_{j}\,e^{-i\mu_{l}(x_{j}-a)},\qquad \mu_{l}=\frac{2\pi l}{b-a},\quad l=-\frac{M}{2},\cdots,\frac{M}{2}-1,\] with \[\Psi^{0}_{j}=\psi(0,x_{j}),\quad j=0,1,2,\cdots,M.\] We remark that instead of directly simulating the semi-classical Schrodinger equation, there are quite a few other methods which are valid in the limit \(\varepsilon\to 0\); see [25] for a general discussion. In particular, many wave-packet based methods have been introduced in the past few years, which reduce the full quantum dynamics to Gaussian wave-packet dynamics [21]. In this work, we simply adopt the TSSP method as our deterministic solver in the learning algorithm.

### Learning method for the control problem

Thanks to their nonlinear structure, deep neural networks have shown great potential in approximating high-dimensional functions and overcoming the curse of dimensionality. In recent years, deep learning has gained great success in solving high-dimensional PDEs, in both forward and inverse problem settings [34; 45]. There have been studies that suggested learning-based methods for solving general control problems, such as [14; 41]. Recently, in [32] the authors proposed SympOCnet to solve high-dimensional optimal control problems with state constraints. The idea is to apply the Symplectic network, which can approximate arbitrary symplectic transformations, to perform a change of variables in the phase space and solve the forward Hamiltonian equation in the new coordinate system. In our work, we consider the control problem for the semiclassical Schrodinger equation and adopt neural networks to approximate the control field \(V\), which may contain uncertainties. The neural-network-parameterized potential is learned by minimizing the discrepancy between the state of the system driven by the network's potential and the observed target state. In this section, we will describe the neural network structures under two different problem settings: (i) the deterministic case, where the underlying target potential is fixed; (ii) the stochastic case, where the target potential is parameterized by some random variables. In both problems, we will validate the efficiency of our proposed method using both clean and noisy training data.

#### 3.2.1 Deterministic problem

In the deterministic problem, our goal is to learn a single target function \(V(x)\) using the neural network. In this case, the input of the neural network is the spatial variable \(\{x_{k}\}\), while the output is the value of the potential function at \(x_{k}\), i.e., \(\{V(x_{k})\}\), \(k=1,\cdots,M\). We will use \(5\) fully connected layers with \(50\) neurons per layer to build up the network. For the data points, assume the spatial domain \(\Omega\subset\mathbb{R}\) and temporal domain \([0,T]\); \(N\) equally distributed points in \(\Omega\) (where \(N\ll M\)) are taken, and the measurement data are the corresponding numerical solutions of the wave function at time \(T\). This implies that the data pairs are chosen as \((x_{i},\psi_{\text{obs}}(x_{i}))\) for \(i=1,\cdots,N\) and \(\psi_{\text{obs}}(x_{i})\sim\mathcal{N}(\psi(x_{i}),\sigma^{2})\).
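Since the training loops below (Algorithms 1 and 2) repeatedly call the forward TSSP solver of Section 3.1, we pause to give a minimal code sketch of one step of scheme (3.3). This is a sketch only: the function and variable names are ours, and a uniform periodic grid on \([a,b)\) with \(M\) points is assumed.

```python
import numpy as np

def tssp_step(psi, V, k, eps, a, b):
    """One first-order time-splitting spectral step of scheme (3.3) for
    i*eps*psi_t = -(eps^2/2)*psi_xx + V(x)*psi on a periodic grid."""
    M = psi.size
    # Fourier modes mu_l = 2*pi*l/(b - a), in FFT ordering
    mu = 2.0 * np.pi * np.fft.fftfreq(M, d=(b - a) / M)
    # Step 1 (kinetic part (3.1)): exact integration in Fourier space
    psi_star = np.fft.ifft(np.exp(-1j * eps * k * mu**2 / 2.0) * np.fft.fft(psi))
    # Step 2 (potential part (3.2)): exact pointwise phase multiplication
    return np.exp(-1j * V * k / eps) * psi_star
```

Repeating `tssp_step` \(T/k\) times advances the initial data \(\Psi^{0}\) to the final time \(T\), at the cost of one FFT and one inverse FFT of length \(M\) per step.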
In our numerical examples, we set \(N=50\) and \(M=1000\). An illustration of the network for the deterministic problem is presented in Figure 1. As noticed from Figure 1, the input-output pairs for the fully connected neural network are \((x_{i},V(x_{i}))\). The output of the neural network, i.e., the potential function, is then used to solve the forward Schrodinger equation by adopting the time-splitting spectral method. The predicted solution obtained at the final time step, \(\psi(x;T)\), is then compared with the measurement data \(\psi_{\text{obs}}(x;T)\). The mismatch between the predicted solution and the measurement data forms the loss function. A pseudocode is presented in Algorithm 1.

Figure 1: Illustration of the network for the deterministic problem.

```
1: Input: neural network inputs \(\{x_{i}\}_{i=1}^{M}\); observation data \(\{\psi_{\mathrm{obs}}(x_{j},T)\}_{j=1}^{N}\); initialization of the neural network parameters \(\mathbf{\theta}_{0}\).
2: for \(k\gets 0:\#iterations\) do
3:   Get the output of the neural network \(\{V(x_{i};\mathbf{\theta}_{k})\}_{i=1}^{M}\).
4:   Given \(V(x;\mathbf{\theta}_{k})\), solve equation (1.1) by the time-splitting spectral method and get the solution \(\psi(x,T;\mathbf{\theta}_{k})\).
5:   Compute the mismatch between \(\psi_{\mathrm{obs}}(x,T)\) and \(\psi(x,T;\mathbf{\theta}_{k})\), and get the loss.
6:   Use an SGD-type or SGLD method to update the network parameters and get \(\mathbf{\theta}_{k+1}\).
7: end for
8: Output: the solution of (1.1), \(\psi(x_{j},t_{m})\), at all spatial locations and all time steps of interest.
```
**Algorithm 1** Deterministic case

#### 3.2.2 Stochastic problem

In the stochastic problem, our goal is to learn a set of functions described by a stochastic potential function \(V(x;z)\) containing a random parameter \(z\), by training the DNN. We will utilize the DeepONet architecture developed in [30]. First, we give a brief overview of DeepONet, which is a powerful tool designed to learn continuous nonlinear operators. Let \(G\) denote an operator with input function \(u\); for any coordinate \(y\) in the domain of \(G(u)\), the output \(G(u)(y)\) is a number. DeepONet aims to approximate \(G\) with a neural network \(G_{\mathbf{\theta}}\) parameterized by \(\mathbf{\theta}\), which takes inputs \((u,y)\) and returns the output \(G(u)(y)\). The architecture of DeepONet is composed of a branch net and a trunk net. In the unstacked setting, the branch net encodes the discrete input function \(u\) into the features represented by \([b_{1},\cdots,b_{q}]\), and the trunk net takes the coordinate \(y\) as input and encodes it into the features represented by \([t_{1},\cdots,t_{q}]\). Then the dot product of \(\mathbf{b}\) and \(\mathbf{t}\) provides the final output of DeepONet, i.e., \[G_{\mathbf{\theta}}(u)(y)=\sum_{k=1}^{q}b_{k}(u(x_{1}),\cdots,u(x_{N}))t_{k}(y).\] The parameter \(\mathbf{\theta}\) consists of all weights and biases in the branch and trunk nets. In our setting, we aim to approximate the parameterized potentials \(V(x;z)\) using \(G_{\mathbf{\theta}}\), which takes the discrete data \([\psi_{\mathrm{obs}}(x_{1};z),\cdots,\psi_{\mathrm{obs}}(x_{N};z)]\) and the coordinate \(y_{k}\) as inputs. Here \(k=1,\cdots,M\). We note that for each \(z\), there are \(N\) sensors that provide the observation data \(\psi_{\mathrm{obs}}(\cdot,z)\); thus the dataset size is equal to the product of \(M\) and the number of \(z\) samples. The value of \(G_{\mathbf{\theta}}(\psi_{\mathrm{obs}}(\cdot;z))(y_{k})\) is a prediction of \(V(y_{k};z)\).
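To make the branch-trunk structure concrete, the following is a minimal sketch of an unstacked DeepONet forward pass. The layer widths and activations are illustrative assumptions rather than the exact architecture used here; in our setting the branch input stacks the real and imaginary parts of the \(N\) sensor observations, so `n_branch_in = 2N`.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Unstacked DeepONet: G_theta(u)(y) = sum_k b_k(u) * t_k(y)."""
    def __init__(self, n_branch_in, q=50):
        super().__init__()
        self.branch = nn.Sequential(  # encodes the discrete observations u
            nn.Linear(n_branch_in, 64), nn.Tanh(), nn.Linear(64, q))
        self.trunk = nn.Sequential(   # encodes the coordinate y
            nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, q))

    def forward(self, u, y):
        b = self.branch(u)            # (batch, q) features b_1..b_q
        t = self.trunk(y)             # (batch, q) features t_1..t_q
        return (b * t).sum(dim=-1)    # dot product predicts V(y; z)
```

For a batch of inputs, `u` holds the stacked sensor observations for a sample \(z\) and `y` the evaluation points \(y_{k}\); the output approximates \(V(y_{k};z)\).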
Utilizing the predictions from the DeepONet, namely \(V(y_{k};z)\) (\(k=1,\cdots,M\)), the time-splitting spectral method is then applied to compute the values of the wave function \(\psi(y_{k},z)\). We aim to minimize the mismatch between the observations \(\psi_{\mathrm{obs}}(x_{j},z)\) and the numerical solutions \(\psi(x_{j},z)\) at all sensor locations \(x_{j}\) for all \(z\). An illustration of the network for the stochastic problem is presented in Figure 2. A pseudocode is presented in Algorithm 2.

Figure 2: Illustration of the network for the stochastic problem.

```
1: Input: neural network inputs \(\{\psi_{\text{obs}}(x_{j},t_{m};z_{s})\}_{j=1}^{N}\) for stochastic samples \(z_{s}\), at a few time instances \(t_{m}\), as well as spatial points \(\{y_{k}\}_{k=1}^{M}\); observation data \(\{\psi_{\text{obs}}(x_{j},T;z_{s})\}_{j=1}^{N}\); initialization of the neural network parameters \(\mathbf{\theta}_{0}\).
2: for \(k\gets 0:\#iterations\) do
3:   Get the output of the neural network \(\{V(y_{k};z_{s};\mathbf{\theta}_{k})\}_{k=1}^{M}\).
4:   For each \(V(x;z_{s};\mathbf{\theta}_{k})\), solve equation (1.1) by the time-splitting spectral method and get the solutions \(\psi(x,t;\mathbf{\theta}_{k})\) at all spatial points and time instances.
5:   Compute the mismatch between \(\{\psi_{\text{obs}}(x_{j},T;z_{s})\}_{j=1}^{N}\) and \(\psi(x_{j},T;z_{s};\mathbf{\theta}_{k})\) (at the observational spatial and temporal points) over all samples of \(z_{s}\), and get the loss.
6:   Use the SGLD method to update the network parameters and get \(\mathbf{\theta}_{k+1}\).
7: end for
8: Output: for each \(z_{s}\), the solution of (1.1), \(\psi(x_{j},t_{m};z_{s})\), at all spatial locations and all time steps of interest.
```
**Algorithm 2** Stochastic case

#### 3.2.3 Training of the neural network

When dealing with large-scale problems, traditional Bayesian inference methods, e.g., Markov chain Monte Carlo (MCMC) [37], have shown disadvantages due to the extremely expensive computational cost of handling the whole dataset at each iteration. To tackle problems with large datasets, deep learning algorithms such as stochastic gradient descent (SGD) [36] are favorable and have been popularly used, since one only needs to employ a small subset of samples randomly selected from the whole dataset at each iteration. To bring together the advantages of these two types of methods, Welling and Teh [42] first proposed the stochastic gradient Langevin dynamics (SGLD) (also known as stochastic gradient MCMC) method. It adds a suitable amount of noise to the standard SGD and uses mini-batches to approximate the gradient of the loss function. With the help of a decreasing training step size \(\eta_{k}\), it has proven powerful and provides a transition between optimization and Bayesian posterior sampling [6]. We now briefly review the SGLD method. Let \(D=\{d_{i}\}_{i=1}^{N}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\) be a given dataset, where \(\mathbf{x}_{i}\) is the input and \(\mathbf{y}_{i}\) is the corresponding noisy output. We let \(\mathcal{NN}\) be a neural network parameterized by \(\boldsymbol{\theta}\); the goal of its training is to find suitable parameters \(\boldsymbol{\theta}\) such that \(F(\mathcal{NN}(\mathbf{x}_{i};\boldsymbol{\theta}))\approx\mathbf{y}_{i}\) (\(i=1,\cdots,N\)). Due to the noise in the measurement data, we assume the parameters are associated with uncertainties and obey a prior distribution \(p(\boldsymbol{\theta})\).
The uncertainties in the parameters \(\boldsymbol{\theta}\) can be captured through Bayesian inference to avoid overfitting. Let \(d^{j}\) be a mini-batch of data with size \(n\); the likelihood can be written as \[p(d^{j}|\boldsymbol{\theta})=\frac{1}{(2\pi\sigma^{2})^{n/2}}\exp\Big{\{}- \frac{\sum\limits_{\mathbf{x}_{i}^{j}\in d^{j}}(\mathbf{y}_{i}^{j}-F( \mathcal{NN}(\mathbf{x}_{i}^{j};\boldsymbol{\theta})))^{2}}{2\sigma^{2}}\Big{\}},\] where \(\sigma\) is the standard deviation of the Gaussian likelihood. In our case, for the dataset \(d^{j}=(\mathbf{x}_{i}^{j},\mathbf{y}_{i}^{j})\), \(\mathbf{x}_{i}^{j}\) corresponds to the input \([\psi_{\text{obs}}(x_{1};z),\cdots,\psi_{\text{obs}}(x_{N};z),y]\), \(\mathbf{y}_{i}^{j}\) corresponds to the labels \([\psi_{\text{obs}}(x_{1};z),\cdots,\psi_{\text{obs}}(x_{N};z)]\), and \(F\) maps the neural network output \(\mathcal{NN}(\mathbf{x}_{i};\boldsymbol{\theta})\), which approximates \(V(y,z)\), to the quantity of interest \(\psi(y;z;T)\), with \(T\) the final simulation time. According to Bayes' theorem, the posterior distribution of \(\boldsymbol{\theta}\), given the data \(D\), then follows \(p(\boldsymbol{\theta}|D)\propto p(\boldsymbol{\theta})\prod_{i=1}^{N}p(d_{i}| \boldsymbol{\theta})\). To sample from the posterior, one efficient proposal algorithm is to use the gradient of the target distribution. Let \(\eta_{k}\) be the learning rate at epoch \(k\) and \(\tau>0\) be the inverse temperature; the parameters are updated by SGLD based on the following rule: \[\boldsymbol{\theta}_{k+1}=\boldsymbol{\theta}_{k}+\eta_{k}\nabla_{\boldsymbol {\theta}}\tilde{L}(\boldsymbol{\theta}_{k})+\mathcal{N}(0,2\eta_{k}\tau^{-1}).\] Here, for a subset of \(n\) data points \(d^{j}=\{d_{1}^{j},\cdots,d_{n}^{j}\}\), \[\nabla_{\boldsymbol{\theta}}\tilde{L}(\boldsymbol{\theta})=\nabla_{\boldsymbol {\theta}}\log p(\boldsymbol{\theta})+\frac{N}{n}\sum_{i=1}^{n}\nabla_{ \boldsymbol{\theta}}\log p(d_{i}^{j}|\boldsymbol{\theta})\] is the stochastic gradient computed by using a mini-batch, which approximates the true gradient of the loss function \(\nabla_{\boldsymbol{\theta}}L(\boldsymbol{\theta})\). However, if the components of the network parameters \(\boldsymbol{\theta}\) have different scales, the invariant probability distribution for the Langevin equation is not isotropic. If one still uses a uniform learning rate in each direction, this may lead to slow mixing [9, 12, 13, 28, 40, 46]. To incorporate the geometric information of the target posterior, stochastic gradient Riemannian Langevin dynamics (SGRLD) [33] generalizes SGLD to a Riemannian manifold. Consider the probability model on a Riemannian manifold with some metric tensor \(P^{-1}(\mathbf{\theta})\); in SGRLD, the parameter is updated at the \(k\)-th iteration by the following rule: \[\mathbf{\theta}_{k+1}=\mathbf{\theta}_{k}+\eta_{k}\left[P(\mathbf{\theta}_{k})\nabla_{\mathbf{ \theta}}\tilde{L}(\mathbf{\theta}_{k})+\Gamma(\mathbf{\theta}_{k})\right]+\mathcal{N}( 0,2\eta_{k}\tau^{-1}P(\mathbf{\theta}_{k})) \tag{3.4}\] where \(\Gamma_{i}(\mathbf{\theta}_{k})=\sum_{j}\dfrac{\partial P_{ij}(\mathbf{\theta}_{k})}{ \partial\theta_{j}}\).
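For concreteness, a minimal sketch of one plain SGLD update following the rule above (names ours; the preconditioned variant (3.4) would additionally rescale both the gradient and the noise by \(P(\boldsymbol{\theta}_{k})\)):

```python
import torch

def sgld_step(params, grads_log_post, lr, tau=1.0):
    """One SGLD update: theta <- theta + lr * grad_log_posterior
    + N(0, 2*lr/tau) Gaussian noise, applied parameter-wise."""
    with torch.no_grad():
        for p, g in zip(params, grads_log_post):
            noise = torch.randn_like(p) * (2.0 * lr / tau) ** 0.5
            p.add_(lr * g + noise)
```

Here `grads_log_post` stands for the stochastic gradient \(\nabla_{\boldsymbol{\theta}}\tilde{L}(\boldsymbol{\theta}_{k})\), i.e., the prior term plus the \(N/n\)-rescaled mini-batch likelihood term.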
One popular and computationally efficient approach to approximate \(P(\mathbf{\theta}_{k})\) is to use a diagonal preconditioning matrix [28; 40], that is, \[P(\mathbf{\theta}_{k}) =\operatorname{diag}\!\left(1\Big{/}\left(\lambda+\sqrt{V(\mathbf{\theta}_{k})}\right)\right), \tag{3.5}\] \[V(\mathbf{\theta}_{k}) =(1-\omega_{k})V(\mathbf{\theta}_{k-1})+\omega_{k}g(\mathbf{\theta}_{k}) \circ g(\mathbf{\theta}_{k}), \tag{3.6}\] where \(\lambda\) is a regularization constant, \(g(\mathbf{\theta}_{k})=\nabla_{\mathbf{\theta}}\tilde{L}(\mathbf{\theta}_{k})\) is the stochastic gradient, the operator \(\circ\) denotes an elementwise multiplication, and \(\omega_{k}\in(0,1)\) is a weight parameter used in the moving average \(V(\mathbf{\theta}_{k})\). In our framework, we will use the preconditioned SGLD to train the network parameters.

## 4 Numerical results

In our numerical experiments, we consider two types of potential functions: the deterministic and the stochastic potential. In the deterministic case, the potential \(V\) is only spatially dependent. In the stochastic problem, the potential function \(V(\cdot,z)\) is assumed to depend on a random parameter characterized by \(z\). In particular, we consider a simple example with \(V(x,z)=(1+0.5z)x^{2}\), where \(z\) is a random variable following the uniform distribution in \([-1,1]\).

### Test I: A Deterministic Potential

In the first problem setup, we take the potential function to be \(V(x)=x^{2}\). The network architecture introduced in Section 3.2.1 is adopted, and we train the network using the standard SGD and the SGLD studied in Section 3.2.3. For the observation data, we take the electron wave function \(\psi\) obtained by solving the Schrodinger equation (1.1) with the forward TSSP solver, given the reference potential function \(V\), at several spatial locations and time instances. We first consider the case where there is no noise in the observation data, and apply both SGD and SGLD for training. The numerical results show that the wave function \(\psi\) obtained from the network under both training algorithms matches the observation data well, while it is also noticeable that SGLD gives a slightly better approximation of the potential function. We then consider the case where some noise is added to the observation data; there, one applies SGLD to train the network in order to more accurately capture the uncertainties in the target potential function.

#### 4.1.1 \(V(x)=x^{2}\), no noise in the observation and by SGD

In this case, we let the reference potential function be \(V(x)=x^{2}\); the observation data is clean, without noise interference, and the SGD method is used in our training algorithm. In the forward solver, the spatial mesh size is \(\pi/250\) and the temporal mesh size is \(6.25\times 10^{-4}\). The learning rate is \(10^{-4}\) and the total training epoch is \(20000\). In Figure 3 (a), a comparison between the reference and predicted potential function obtained from the neural network is shown. We observe that there is some mismatch in the region where \(x>0\), while the underlying reason remains to be discovered. In Figure 3 (b)-(c), a comparison of the reference with the predicted position density \(n^{\varepsilon}\) and the wave function \(\psi^{\varepsilon}\) (real and imaginary parts) is presented.
We conclude that the predicted wave and density functions at the final time \(T\), which are computed by solving the Schrodinger equation (1.1) under the neural network's predicted output potential, provide good approximations to the solution quantities obtained by using the true potential \(V(x)=x^{2}\) in the TSSP solver.

Figure 3: Test I case 1: \(V(x)=x^{2}\), without noise in the observation and by SGD. (a) True and predicted value of the potential function. (b) True and predicted value of the position density at time \(T\). (c) True and predicted value of the wave function at time \(T\).

#### 4.1.2 \(V(x)=x^{2}\), no noise in the observation and by SGLD

In the second case, the problem setup is the same as in the previous case, but we apply the SGLD algorithm to train the neural network. In the forward solver, the spatial mesh size is \(\pi/1000\) and the temporal mesh size is \(3\times 10^{-3}\). The learning rate is \(10^{-5}\) and the total training epoch is 10000. A comparison between the reference and predicted potential function is shown in Figure 4 (a). According to the nature of SGLD, we collect samples of the neural network's parameters during the training process, then compute the mean and standard deviation of the output potential functions (at each spatial point) obtained by using those parameter samples. The blue dashed line represents the mean of the predicted potential \(V\), and the confidence interval is depicted by the shaded blue area in Figure 4 (a). Based on these two tests, we observe that SGLD provides more reliable results compared to the standard SGD, and the uncertainty in the prediction is negligible since the data is clean. In Figure 4 (b)-(c), we again present a comparison between the reference and predicted wave function \(\psi^{\varepsilon}\) and position density \(n^{\varepsilon}\), computed by the TSSP solver using the predicted mean value of the potential. Similar to the previous test, it is clear that the predicted wave and density provide quite good approximations to the true data, i.e., the numerical solution at the final time \(T\) obtained by using the true potential \(V(x)=x^{2}\) in the TSSP solver.

Figure 4: Test I case 2: \(V(x)=x^{2}\), without noise in the observation and by SGLD. (a) True and predicted values of the potential function. (b) True and predicted values of the position density at time \(T\). (c) True and predicted values of the wave function at time \(T\).

#### 4.1.3 \(V(x)=x^{2}\), noisy data and by SGLD

In the third case, we consider some noise in the observation data and use SGLD to train the network. The mesh sizes in the forward solver, the learning rate and the training epochs are the same as in the previous subsection. We let the noise be a random variable that follows the normal distribution with mean \(0\) and standard deviation \(0.05\). In Figure 5 (c), the yellow circles are the noisy values of \(\psi^{\varepsilon}\) at \(50\) equally spaced locations. A comparison of the reference, i.e., \(V(x)=x^{2}\), with the predicted mean of the potential function is shown in Figure 5 (a). One can observe that the predicted mean value is consistent with the reference potential, and the blue shaded area indicates that there are some uncertainties due to the noisy data, compared to the previous tests where there is no noise in the observation.
Similarly, we can see from Figure 5 (b)-(c) that the predicted wave and density at the final time \(T\), computed using the mean of the network's predicted potential \(V\), capture well the true solution obtained by using \(V(x)=x^{2}\) in the TSSP solver. Therefore, we conclude that SGLD can deal with noisy data and provide reliable results.

Figure 5: Test I case 3: \(V(x)=x^{2}\), noisy data and by SGLD. (a) True and predictions (with confidence interval) of the potential function. (b) True and predictions of the position density at time \(T\). (c) True and predictions of the wave function at time \(T\).

### Test II: A Stochastic Potential

In Test II, we consider a stochastic potential, \(V(x,z)=(1+0.5z)x^{2}\), where \(z\) follows the uniform distribution on \([-1,1]\). To generate the dataset, we first take eight Gauss-Legendre points for \(z\in[-1,1]\). For each \(z_{k}\) (\(k=1,\cdots,K\)), i.e., each specific potential \(V(x;z_{k})\), we have the corresponding noisy measurement data \(\psi_{\rm obs}(x;z_{k})\) at the final time instance \(T=0.6\). The observation \(\psi_{\rm obs}(x;z_{k})\sim\mathcal{N}(\psi(x;z_{k}),\sigma^{2})\), where \(\sigma\) is the standard deviation. The wave functions at the final time instance, \(\psi(x;z_{k})\), are computed using the time-splitting spectral method on a \(640\times 1000\) temporal-spatial grid. Then for each \(z_{k}\) we select \(N\) sensor locations to collect the measurement data; the sensors are uniformly located in the spatial domain \(\Omega=[-\pi/2,\pi/2]\). We will take \(N=20,50\) in the numerical tests. In the forward solver, the spatial mesh size is \(\pi/1000\) and the temporal mesh size is \(6.25\times 10^{-4}\). The learning rate is \(10^{-5}\) and the total training epoch is \(10000\). The input of the network then consists of the spatial evaluation point \(x_{i}\), the real part \(\Re(\psi_{\rm obs}(x_{1};z_{k})),\cdots,\Re(\psi_{\rm obs}(x_{N};z_{k}))\), and the imaginary part \(\Im(\psi_{\rm obs}(x_{1};z_{k})),\cdots,\Im(\psi_{\rm obs}(x_{N};z_{k}))\) of the observation data. The output of the network is the value of the potential at \(x_{i}\), i.e., \(V(x_{i};z_{k})\). The number of training samples is equal to the product of \(M\) (the number of evaluation points \(x_{i}\)) and the number of \(z\) samples. We assume that the values of \(V(x,z)\) at the endpoints \(x=-\frac{\pi}{2}\) and \(x=\frac{\pi}{2}\) are known for the training samples. The loss function consists of three parts: (1) the mismatch between the observation data \(\psi_{\rm obs}(x;z_{k})\) and the \(\psi\) computed using the neural-network-predicted potential function, (2) the mismatch between the true potential and the neural-network-predicted potential at the endpoints of the spatial domain, and (3) a regularization term on the potential. After training, we will obtain the full potential profile for different \(z\) samples. In the testing stage, we will only have noisy observations of the wave function at the final time \(T\), without knowing any information about the true potential function. We will feed a set of spatial locations \(x_{i}\) as well as the observation data into the neural network, and obtain the predictions of the potential evaluated at these points \(x_{i}\).
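The data-generation step just described can be sketched as follows. This is a sketch only: `solve_tssp` is a hypothetical wrapper around the forward solver of Section 3.1, and adding independent Gaussian noise to the real and imaginary parts of the observations is our reading of \(\psi_{\rm obs}\sim\mathcal{N}(\psi,\sigma^{2})\).

```python
import numpy as np

# Gauss-Legendre nodes for z in [-1, 1]; one training potential per node
z_nodes, _ = np.polynomial.legendre.leggauss(8)
x_sensors = np.linspace(-np.pi / 2, np.pi / 2, 50)   # uniform sensor grid
sigma = 0.05                                         # observation noise level

dataset = []
for z in z_nodes:
    V = (1.0 + 0.5 * z) * x_sensors**2               # reference V(x; z)
    psi_T = solve_tssp(V, T=0.6)                     # hypothetical TSSP wrapper
    # noisy observations at the sensor locations
    psi_obs = psi_T + sigma * (np.random.randn(psi_T.size)
                               + 1j * np.random.randn(psi_T.size))
    dataset.append((z, x_sensors, psi_obs))
```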
We first show the predictions of \(V(x;z)\) for some training samples of \(z\) when there are 50 sensors and \(\psi_{\text{obs}}(x;z)\sim\mathcal{N}(\psi(x;z),0.05)\). The comparison of predictions and references of \(V(x;z)=(1+0.5z)x^{2}\) for four different \(z\) values (\(z=[0.9603,0.7967,0.5255,0.1834]\)) is presented on the left of Figure 6. The expected value of \(V\) over the random variable \(z\) is computed using 8 Legendre quadrature points in the interval \(z\in[-1,1]\), and the comparison of the predicted mean and the reference mean is shown on the right of Figure 6. With a large amount of observation data and a suitable amount of noise in the data, the neural network can provide reasonable approximations of the potential functions. The corresponding predictions of the wave function \(\psi\) (computed using the predicted potential functions) at the final time \(T=0.6\) with different values of \(z\) are shown in Figures 7 and 8. We observe good agreement between the predictions and the true values of the wave functions. A testing case for \(z=0.0976\) is shown in Figure 9. It shows that our trained neural network can generalize well to new samples of \(z\). We then show the predictions of \(V(x;z)\) when there are 20 sensors and \(\psi_{\text{obs}}(x;z)\sim\mathcal{N}(\psi(x;z),0.02)\); that is, the number of sensors is smaller and the noise in the observation data is also lower. In this case, the predictions of the potential function for \(z=[0.9603,0.7967,0.5255,0.1834]\) are shown in Figure 10. The corresponding predictions of the wave function \(\psi\) for \(z=[0.9603,-0.9603]\) are shown in Figures 11 and 12, respectively. In addition, a testing case for \(z=-0.57315\) is presented in Figure 13. We observe that the results are still quite satisfactory under this test setting. This indicates that our proposed network architecture and training algorithm can work well to learn the target stochastic potential, when the observation data is corrupted with a reasonable amount of noise.

Figure 10: Test II, 20 sensors, true and predicted value of the potential function \(V(x;z)=(1+0.5z)x^{2}\). Left: different \(z\) values; right: mean prediction with respect to \(z\).

Figure 11: Test II, 20 sensors, true and predicted value of the wave function \(\psi\) at final time \(T=1.0\), for a training sample \(z=0.7967\).

Figure 12: Test II, 20 sensors, true and predicted value of the wave function \(\psi\) at final time \(T=1.0\), for a training sample \(z=-0.9603\).

## 5 Conclusion

In this work, we study the potential control problem described by the semiclassical Schrodinger equation. We then develop a learning-based optimal control strategy by training neural networks to learn the control variate, considering observation data with or without noise. Our numerical results show that more reliable predictions can be obtained by adopting the SGLD algorithm. We address the importance of our work by the following: (i) we investigate a _new_ problem that is barely studied in the scientific computing fields; (ii) we introduce a _novel_ hybrid NN-TSSP method as a deep learning approach to study the potential control problem described by the Schrodinger equation; (iii) the TSSP method as the forward solver in the sampling process is crucial, as the small parameter in the Schrodinger equation brings numerical challenges. We mention some limitations of the current work and propose corresponding future directions. In the loss function during the training process, one can try to minimize the variance of the solution for more robust control. Besides, we shall investigate higher-dimensional problems for the Schrodinger equation, where other efficient schemes such as Gaussian wave-packet based schemes can be adapted.
Finally, more complicated potential functions that depend on the temporal variable will be studied, in order to explore more general cases with practical applications for the quantum control problem.
2304.06670
Do deep neural networks have an inbuilt Occam's razor?
The remarkable performance of overparameterized deep neural networks (DNNs) must arise from an interplay between network architecture, training algorithms, and structure in the data. To disentangle these three components, we apply a Bayesian picture, based on the functions expressed by a DNN, to supervised learning. The prior over functions is determined by the network, and is varied by exploiting a transition between ordered and chaotic regimes. For Boolean function classification, we approximate the likelihood using the error spectrum of functions on data. When combined with the prior, this accurately predicts the posterior, measured for DNNs trained with stochastic gradient descent. This analysis reveals that structured data, combined with an intrinsic Occam's razor-like inductive bias towards (Kolmogorov) simple functions that is strong enough to counteract the exponential growth of the number of functions with complexity, is a key to the success of DNNs.
Chris Mingard, Henry Rees, Guillermo Valle-Pérez, Ard A. Louis
2023-04-13T16:58:21Z
http://arxiv.org/abs/2304.06670v1
# Do deep neural networks have an inbuilt Occam's razor? ###### Abstract The remarkable performance of overparameterized deep neural networks (DNNs) must arise from an interplay between network architecture, training algorithms, and structure in the data. To disentangle these three components, we apply a Bayesian picture, based on the functions expressed by a DNN, to supervised learning. The prior over functions is determined by the network architecture, and is varied by exploiting a transition between ordered and chaotic regimes. For Boolean function classification, we approximate the likelihood using the error spectrum of functions on data. When combined with the prior, this accurately predicts the posterior, measured for DNNs trained with stochastic gradient descent. This analysis reveals that structured data, combined with an intrinsic Occam's razor-like inductive bias towards (Kolmogorov) simple functions that is strong enough to counteract the exponential growth of the number of functions with complexity, is a key to the success of DNNs. + Footnote †: These authors contributed equally. Although deep neural networks (DNNs) have revolutionised modern machine learning [1; 2], a fundamental theoretical understanding of why they perform so well remains elusive [3; 4]. One of their most surprising features is that they work best in the overparameterized regime, with many more parameters than data points. As expressed in the famous quip: "_With four parameters I can fit an elephant, and with five I can make him wiggle his trunk._" (attributed by Enrico Fermi to John von Neumann [5]), it is widely believed that having too many parameters will lead to overfitting: a model will capture noise or other inconsequential aspects of the data, and therefore predict poorly. In statistical learning theory [6] this intuition is formalised in terms of model capacity. It is not simply the number of parameters, but rather the complexity of the set of hypotheses a model can express that matters. The search for optimal performance is often expressed in terms of the bias-variance trade-off. Models that are too simple introduce errors due to bias; they can't capture the underlying processes that generate the data. Models that are too complex are over-responsive to random fluctuations in the data, leading to variance in their predictions. DNNs are famously highly expressive [7; 8; 9], i.e. they have extremely high capacity. Their ability to generalize therefore appears to break basic rules of statistical learning theory. Exactly how, without explicit regularisation, DNNs achieve this feat is a fundamental question that has remained open for decades [3; 4]. Although there has been much recent progress (see Appendix A for a literature overview), there is no consensus for why DNNs work so well in the overparameterized regime. Here we study this conundrum in the context of supervised learning for classification, where inputs \(x_{i}\) are attached to labels \(y_{i}\). Given a training set \(S=\{(x_{i},y_{i})_{i=1}^{m}\}\) of \(m\) input-output pairs, sampled i.i.d. from a data distribution \(\mathcal{D}\), the task is to train a model on \(S\) such that it performs well (has low generalization error) at predicting output labels \(\hat{y}_{i}\) for a test set \(T\) of unseen inputs, sampled from \(\mathcal{D}\).
For a DNN \(\mathcal{N}(\Theta)\), with parameters \(\Theta\subseteq\mathbb{R}^{p}\) (typically weights and biases), the accuracy on a training set can be captured by a loss-function \(L(\hat{y}_{i},y_{i})\) that measures how close, for input \(x_{i}\), the prediction \(\hat{y}_{i}\) of the DNN is to the true label \(y_{i}\). Training is typically done via some variant of stochastic gradient descent (SGD), which uses derivatives of \(L(\hat{y}_{i},y_{i})\) to adjust the parameters \(\Theta\) in order to minimise the loss on \(S\). Because DNNs are so highly expressive, and because SGD is typically a highly effective optimiser for DNNs, zero training error (all correct labels after thresholding) on \(S\) is routinely achieved [7]. **Functions and inductive bias** For classification, the question of why overparameterized DNNs don't overfit can conveniently be expressed in terms of functions. For a given training set \(S\) and test set \(T\), a function \(f\) can be defined on a restricted domain \(S+T\). The inputs of \(f\) are the \(x_{i}\in S\cup T\), and the outputs include all possible sets of labels \(\{\hat{y}_{i}\}\). Only one function gives the true labels \(\{y_{i}\}\). For a given set of parameters \(\Theta\), the DNN then represents a particular function \(f\), which can be identified by the labels it outputs on the inputs \(x_{i}\in S\cup T\), after thresholding. Under the assumption that zero training error can be achieved, functions need only be distinguished by how they behave on the test set \(T\). For \(C\) classes there are \(N_{T}=C^{|T|}\) possible functions \(f\) with zero error on the training set; this number is typically unimaginably large. The overwhelming majority of these functions will not generalize well. Since DNNs are highly expressive, they should be able to represent all (or nearly all) of these functions. The fundamental question of overparameterized DNN performance becomes a question of _inductive bias_: Why, from the unimaginably large set of functions that give zero error on \(S\), do DNNs converge on a minuscule subset of functions that generalize well? Here, we will argue that an Occam's razor-like inductive bias towards simple functions, combined with structured data, helps answer this question. **Distinguishing questions about generalization** Before proceeding, it is important to distinguish the question above from a different and equally interesting question: Given a DNN that generalizes reasonably well (e.g. it solves the overparameterization/large capacity problem), can we understand how to improve its performance further? This 2nd-order question is what practitioners of deep learning typically care about. Differences in architecture, hyperparameter tuning, data augmentation etc. can indeed lead to important improvements in DNN performance. Exactly why these tweaks and tricks generate better inductive bias is typically not well understood either, and is an important subject of investigation. Because the two questions are often conflated, leading to confusion, we want to emphasise up front that this paper will focus on the 1st-order conundrum shared by all overparameterized DNNs. Understanding this basic problem should help frame important 2nd-order questions about how to improve DNN performance further.
**Learning Boolean functions: a model system** Inspired by calls to study model systems [3; 4], we first examine how a fully connected network (FCN) learns Boolean functions \(f:\{0,1\}^{n}\rightarrow\{0,1\}\), which are key objects of study in computer science. Just as the Ising model does for magnetism, this simple but versatile model allows us to capture the essence of the overparameterization problem, while remaining highly tractable. For a system of size \(n\), there are \(2^{n}\) inputs, and \(2^{2^{n}}\) Boolean functions. Given a Boolean target function \(f_{t}\), the DNN is trained on a subset \(S\) of \(m<2^{n}\) inputs, and then provides a prediction on a test set \(T\) which consists of the rest of the inputs. A key advantage of this system is that data complexity can easily be varied by the choice of target function \(f_{t}\). Moreover, the model's tractability allows us to calculate the prior \(P(f)\), likelihood \(P(S|f)\), and posterior \(P(f|S)\) for different functions and targets, and so cast the tripartite schema of architecture, training algorithm, and structured data from [4] into a textbook Bayesian picture. **Quantifying inductive bias with Bayesian priors** The prior over functions, \(P(f)\), is the probability that a DNN \(\mathcal{N}(\Theta)\) expresses \(f\) upon random sampling of parameters over a parameter initialisation distribution \(P_{\text{par}}(\Theta)\): \[P(f)=\int\mathbb{1}\left[\mathcal{N}(\Theta)==f\right]P_{\text{par}}(\Theta)d\Theta, \tag{1}\] where \(\mathbb{1}\) is an indicator function (\(1\) if its argument is true, and \(0\) otherwise), and its argument is true if the parameters of \(\mathcal{N}(\Theta)\) are such that the network represents \(f\). It was shown in [10] that, for ReLU activation functions, \(P(f)\) for the Boolean system was insensitive to different choices of \(P_{\text{par}}(\Theta)\), and that it exhibits an exponential bias of the form \(P(f)\lesssim 2^{-a\tilde{K}(f)+b}\) towards simple functions with low descriptional complexity \(\tilde{K}(f)\), which is a proxy for the true (but uncomputable) Kolmogorov complexity. We will, as in [10], calculate \(\tilde{K}(f)\) using \(C_{LZ}\), a Lempel-Ziv (LZ) based complexity measure from [11], on the \(2^{n}\)-long bitstring that describes the function, taken on an ordered list of inputs. Other complexity measures give similar results [12; 10], so there is nothing fundamental about this particular choice. To simplify notation, we will use \(K(f)\) instead of \(\tilde{K}(f)\). The exponential drop of \(P(f)\) with \(K(f)\) in the map from parameters to functions is consistent with an algorithmic information theory (AIT) coding theorem [13] inspired _simplicity bias_ bound [11], which works for a much wider set of input-output maps. It was argued in [10] that if this inductive bias in the priors matches the simplicity of structured data, then it would help explain why DNNs generalize so well. However, the weakness of that work, and of related works arguing for such a bias towards simplicity [12; 14; 15; 16; 17; 18; 19; 20; 21], is that it is typically not possible to significantly change the inductive bias towards simplicity, making it hard to conclusively show that it is not some other property of the network that instead generates the good performance. Here we exploit a particularity of tanh activation functions that enables us to significantly vary the inductive bias of DNNs.
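To make the sampling behind Eq. (1) concrete, here is a minimal sketch of how \(P(f)\) can be estimated empirically for the \(n=7\) Boolean system. It is our own illustration: the \(\sigma_{w}/\sqrt{\text{fan-in}}\) weight scaling and the absence of biases are assumptions, not necessarily the exact initialisation used in the experiments.

```python
import numpy as np
from collections import Counter

n, width, depth = 7, 40, 10
# all 2^n Boolean inputs as rows
X = np.array([[(i >> b) & 1 for b in range(n)] for i in range(2 ** n)], float)

def sample_function(sigma_w, rng):
    """One draw from P(f): a random tanh FCN, thresholded on all inputs."""
    h, fan_in = X, n
    for _ in range(depth):
        W = rng.standard_normal((fan_in, width)) * sigma_w / np.sqrt(fan_in)
        h, fan_in = np.tanh(h @ W), width
    w_out = rng.standard_normal(fan_in) * sigma_w / np.sqrt(fan_in)
    return ''.join('1' if v > 0 else '0' for v in h @ w_out)

rng = np.random.default_rng(0)
counts = Counter(sample_function(1.0, rng) for _ in range(10 ** 5))
# empirical prior: P(f) ~ counts[f] / 1e5; repeat with sigma_w = 8 to compare
```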
In particular, for a Gaussian \(P_{\text{par}}(\Theta)\) with standard deviation \(\sigma_{w}\), it was shown [22; 23] that, as \(\sigma_{w}\) increases, there is a transition to a chaotic regime. Moreover, it was recently demonstrated that the simplicity bias in \(P(f)\) becomes weaker in the chaotic regime [24] (see also Appendix C). We will exploit this behaviour to systematically vary the inductive bias over functions in the prior. In Figures 1(a) and 1(b) we depict prior probabilities \(P(f)\) for functions \(f\) defined on all 128 inputs of an \(n=7\) Boolean system upon random sampling of parameters of an FCN with 10 layers, hidden width 40 (which is large enough to be fully expressive for this system [17]), and tanh activation functions. The simplicity bias in \(P(f)\) becomes weaker as the width \(\sigma_{w}\) of the Gaussian \(P_{\text{par}}(\sigma_{w})\) increases. By contrast, for ReLU activations, the bias in \(P(f)\) barely changes with \(\sigma_{w}\) (see Figure S3(a)). The effect of the decrease in simplicity bias on DNN generalization performance is demonstrated in Figure 1(c) for a DNN trained to zero error on a training set \(S\) of size \(m=64\) using advSGD (an SGD variant taken from [10]), and tested on the other 64 inputs \(x_{i}\in T\). The generalization error (the fraction of incorrect predictions on \(T\)) varies as a function of the complexity of the target function. Although all these DNNs exhibit simplicity bias, weaker forms of the bias correspond to significantly worse generalization on the simpler targets (see also Appendix J). For very complex targets, both networks perform poorly. Finally, we also show an unbiased learner, where functions \(f\) are chosen uniformly at random with the proviso that they exactly fit the training set \(S\). Not surprisingly, given the \(2^{64}\approx 2\times 10^{19}\) functions that can fit \(S\), the performance of this unbiased learner is no better than random chance. The scatter plots of Figure 1 (d)-(f) depict a more fine-grained picture of the behaviour of the SGD-trained networks for three different target functions. For each target, 1000 independent initialisations of the SGD optimiser, with initial parameters taken from \(P_{\text{par}}(\sigma_{w})\), are used. The generalization error and complexity of each function found when the DNN first reaches zero training error are plotted. Since there are \(2^{64}\) possible functions that give zero error on the training set \(S\), it is not surprising that the DNN converges to many different functions upon different random initialisations. For the \(\sigma_{w}=1\) network (where \(P(f)\) resembles that of ReLU networks) the most common function is typically simpler than the target. By contrast, the less biased network converges on functions that are typically more complex than the target. As the target itself becomes more complex, the relative difference between the two generalization errors decreases, because the strong inductive bias towards simple functions of the first network becomes less useful.

Figure 1: **Priors over functions and over complexity** (a) Prior \(P(f)\) that an \(N_{l}\)-layer FCN with \(\tanh\) activations generates \(n=7\) Boolean functions \(f\), ranked by probability of individual functions, generated from \(10^{8}\) random samples of parameters \(\Theta\) over a Gaussian \(P_{\text{par}}(\Theta)\) with standard deviations \(\sigma_{w}=1\dots 8\). Also compared is a ReLU-activated DNN. The dotted blue line denotes a Zipf's law prior [10], \(P(f)=1/((128\ln 2)Rank(f))\). (b) \(P(f)\) versus LZ complexity \(K\) for the networks from (a). (c) Generalization error versus \(K\) of the target function for an unbiased learner (green), and \(\sigma_{w}=1,8\) tanh networks trained to zero error with advSGD [10] on cross-entropy loss with a training set \(S\) of size \(m=64\), for 1000 random initialisations. The error is calculated on the remaining \(|T|=64\) inputs. Error bars are one standard deviation. (d), (e), (f) Scatterplots of generalization error versus learned function LZ complexity, from 1000 random initialisations for three target functions from subfigure (c). The dashed vertical line denotes the target function complexity. The black cross represents the mode function. The histograms at the top (side) of the plots show the posterior probability upon training as a function of complexity, \(P_{\text{SGD}}(K|S)\) (error, \(P_{\text{SGD}}(\epsilon_{G}|S)\)). (g) The prior probability \(P(K)\) to obtain a function of LZ complexity \(K\) for uniform random sampling of \(10^{8}\) functions, compared to a theoretical perfect compressor. 90% of the probability mass lies to the right of the vertical dotted lines, and the dash-dot line denotes an extrapolation to low \(K\). (h) \(P(K)\) is relatively uniform in \(K\) for the \(\sigma_{w}=1\) system, while it is highly biased towards complex functions for the \(\sigma_{w}=8\) networks. The large difference in these priors helps explain the significant variation in DNN performance. (i) Generalization error for the K-learning restriction for the \(\sigma_{w}=1,8\) DNNs and for an unbiased learner, all for \(|S|=100\). \(\epsilon_{S}\) is the training error and \(\epsilon_{G}\) is the generalization error on the test set. The vertical dashed line is the complexity \(K_{t}\) of the target. Also compared are the standard realisable PAC and marginal-likelihood PAC-Bayes bounds for the unbiased learner. In \(10^{4}\) samples, no solutions were found with \(K\lesssim 70\) for the \(\sigma_{w}=8\) DNN, and none with \(K\gtrsim 70\) for the \(\sigma_{w}=1\) DNN.

No free lunch theorems for supervised learning tell us that, when averaged over all target functions, the three learners above will perform equally badly [25, 26] (see also Appendix D.3). To understand why changes in the inductive bias towards simplicity that are small relative to the 38 or so orders of magnitude scale on which \(P(f)\) varies (see also Figure S3(b)) nevertheless lead to such significant differences in generalization performance, we need another important ingredient, namely how the _number_ of functions varies with complexity. Basic counting arguments imply that the number of strings of a fixed length that have complexity \(K\) scales exponentially as \(2^{K}\) [13]. Therefore, the vast majority of functions picked at random will have high complexity.
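The counting argument behind this can be made explicit with a standard AIT bound (a sketch, using that there are at most \(2^{K}\) binary programs of length \(K\), hence at most \(2^{K}\) strings of complexity exactly \(K\)): \[\left|\{f:\tilde{K}(f)\leq K_{M}\}\right|\;\leq\;\sum_{K=0}^{K_{M}}2^{K}\;=\;2^{K_{M}+1}-1,\] so the set of functions up to complexity \(K_{M}\) is dominated by those with complexity close to \(K_{M}\), and almost all of the \(2^{2^{n}}\) Boolean functions have near-maximal complexity.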
This exponential growth of the number of functions with complexity can be captured in a more coarse-grained prior, the probability \(P(K)\) that the DNN expresses a function of complexity \(K\) upon random sampling of parameters over a parameter initialisation distribution \(P_{\text{par}}(\Theta)\), which can also be written in terms of functions as \(P(K^{\prime})=\sum_{f\in\mathcal{H}_{K^{\prime}}}P(f)\), the weighted sum over the set \(\mathcal{H}_{K^{\prime}}\) of all functions with complexity \(\tilde{K}(f)=K^{\prime}\). In Figure 1 (g), \(P(K)\) is shown for uniform random sampling of functions for \(10^{8}\) samples using the LZ measure, and also for the theoretical ideal compressor with \(P(K)=2^{K-K_{max}-1}\) over all \(2^{128}\approx 3\times 10^{38}\) functions (see also Appendix I). In (h) we display the dramatic difference between the fully simplicity-biased \(P(K)\) and the less-biased \(P(K)\). For the network with \(\sigma_{w}=1\), \(P(K)\) is nearly flat. This behaviour follows from the fact that the AIT coding-theorem-like scaling [10, 11] of the prior over functions, \(P(f)\sim 2^{-\tilde{K}(f)}\), counters the \(2^{K}\) growth in the number of functions. By contrast, for the more artefactual \(\sigma_{w}=8\) system, the simplicity bias is less strong. Even though there remains significant simplicity bias (we estimate that for the simplest functions, \(P(f)\) is about \(10^{25}\) times higher than the mean probability \(\left\langle P(f)\right\rangle=2^{-128}\approx 3\times 10^{-39}\)), this DNN is orders of magnitude more likely to throw up complex functions, an effect that SGD is unable to overcome. The fact that the number of complex functions grows exponentially with complexity \(K\) lies at the heart of the classical explanation of why a complex learning agent suffers from variance: it can too easily find many different functions that all fit the data. The marked differences in the generalization performance between the two networks observed in Figure 1 (c)-(f) can therefore be traced to differences in the inductive bias of the networks, as measured by the differences in their priors. The \(\sigma_{w}=8\) network is not simplicity-biased enough to overcome the growth in the number of functions, and so it suffers from the classic problem of overfitting and large variance. By contrast, the \(\sigma_{w}=1\) system, and the nearly identical ReLU system, solve this problem because their inductive bias cancels the exponential growth in complex functions. **Artificially restricting model capacity** To further illustrate the effect of inductive bias, we create a K-learner that only allows functions with complexity \(\leq K_{M}\) to be learned and discards all others. As can be seen in Figure 1 (i), the learners typically cannot reach zero training error on the training set if \(K_{M}\) is less than the target function complexity \(K_{t}\). For \(K_{M}\geq K_{t}\), zero training error can be reached and, not surprisingly, the lowest generalization error occurs when \(K_{M}=K_{t}\). As the upper limit \(K_{M}\) is increased, all three learning agents are more likely to make errors in predictions due to variance. The random learner has an error that grows linearly with \(K_{M}\).
This behaviour can be understood with a classic PAC bound [6], where the generalization error (with confidence \(0\leq 1-\delta\leq 1\)) scales as \(\epsilon_{G}\leq(\ln|\mathcal{H}_{\leq K_{M}}|-\ln\delta)/m\), where \(|\mathcal{H}_{\leq K_{M}}|\) is the size of the hypothesis class of all functions with \(K\leq K_{M}\); the bound scales linearly in \(K_{M}\), as the error does (see Appendix G for further discussion, including the more sophisticated PAC-Bayes bound [27, 28]). The generalization error for the \(\sigma_{w}=1\) DNN does not change much with \(K_{M}\) for \(K_{M}>K_{t}\), because the strong inductive bias towards simple solutions means access to higher complexity solutions doesn't significantly change what the DNN converges on. **Calculating the Bayesian posterior and likelihood** To better understand the generalization behaviour observed in Figure 1, we apply Bayes' rule, \(P(f|S)=P(S|f)P(f)/P(S)\), to calculate the Bayesian posterior \(P(f|S)\) from the prior \(P(f)\), the likelihood \(P(S|f)\), and the marginal likelihood \(P(S)\). Since we condition on zero training error, the likelihood takes on a simple form: \(P(S|f)=1\) if \(\forall x_{i}\in S,f(x_{i})=y_{i}\), while \(P(S|f)=0\) otherwise. For a fixed training set, all the variation in \(P(f|S)\) for \(f\in U(S)\), the set of all functions compatible with \(S\), comes from the prior \(P(f)\), since \(P(S)\) is constant. Therefore, in this Bayesian picture, the bias in the prior is translated over to the posterior. The marginal likelihood also takes a relatively simple form for discrete functions, since \(P(S)=\sum_{f}P(S|f)P(f)=\sum_{f\in U(S)}P(f)\). It is equivalent to the probability that the DNN obtains zero error on the training set \(S\) upon random sampling of parameters, and so can be interpreted as a measure of the inductive bias towards the data. The marginal-likelihood PAC-Bayes bound [28] makes a direct link \(P(S)\lesssim e^{-m\epsilon_{G}}\) to the generalization error \(\epsilon_{G}\), which captures the intuition that, for a given \(m\), a better inductive bias towards the data (larger \(P(S)\)) implies better performance (lower \(\epsilon_{G}\)). One can also define the posterior probability \(P_{\text{SGD}}(f|S)\) that a network trained with SGD (or another optimiser) on training set \(S\), when initialised with \(P_{\text{par}}(\Theta)\), converges on function \(f\). For simplicity, we take this probability at the epoch where the system first reaches zero training error. Note that in Figure 1 (d)-(f) it is this SGD-based posterior that we plot in the histograms at the top and sides of the plots, with functions grouped either by complexity, which we will call \(P_{\mathrm{SGD}}(K|S)\), or by generalization error \(\epsilon_{G}\), which we will call \(P_{\mathrm{SGD}}(\epsilon_{G}|S)\). DNNs are typically trained by some form of SGD, and not by randomly sampling over parameters, which is much less efficient. However, a recent study [29], which carefully compared the two posteriors, has shown that to first order the Bayesian posterior and the SGD-based posterior agree, \(P_{\mathrm{B}}(f|S)\approx P_{\mathrm{SGD}}(f|S)\), for many different data sets and DNN architectures. We demonstrate this close similarity in Figure S12 explicitly for our \(n=7\) Boolean system.
This approximate equivalence suggests that Bayesian posteriors calculated by random sampling of parameters, which are much simpler to analyze, can be used to understand the dominant behaviour of an SGD-trained DNN, even if, for example, hyperparameter tuning can lead to 2nd-order deviations between the two methods (see also Appendix A). To test the predictive power of our Bayesian picture, we first define the function error \(\epsilon(f)\) as the fraction of incorrect labels \(f\) produces on the full set of inputs. Next, we average Bayes' rule over all training sets \(S\) of size \(m\): \[\langle P(f|S)\rangle_{m}=P(f)\langle\frac{P(S|f)}{P(S)}\rangle_{m}\approx \frac{P(f)\left(1-\epsilon(f)\right)^{m}}{\langle P(S)\rangle_{m}} \tag{2}\] where the mean likelihood \(\langle P(S|f)\rangle_{m}=(1-\epsilon(f))^{m}\) is the probability of a function \(f\) obtaining zero error on a training set of size \(m\). In the second step, we approximate the average of the ratio with the ratio of the averages, which should be accurate if \(P(S)\) is highly concentrated, as is expected if the training set is not too small. Figure 2: **How training data affects the posteriors:** (a), (b) and (c) depict the mean likelihood \(\langle(1-\epsilon_{G}(K))^{m}\rangle_{5}\) from Equation (3), averaged over training sets, and over the 5 lowest error functions at each \(K\). This term depends on data and is independent of the DNN architecture. With increasing \(m\) it peaks more sharply around the complexity of the target. In (d)–(f) we compare the posteriors over complexity, \(\langle P_{\mathrm{SGD}}(K|S)\rangle_{m}\), for SGD (darker blue and red) averaged over training sets of size \(m\), to the prediction of \(\langle P(K|S)\rangle_{m}\) from Equation (3) (lighter blue and orange), calculated by multiplying the Bayesian likelihood curves in Figs (a)–(c) by the prior \(P(K)\) shown in Figure 1 (h). The light (Bayes) and dark (DNN) blue histograms are from the \(\sigma_{w}=1\) system, and the orange (Bayes) and red (DNN) histograms are from the \(\sigma_{w}=8\) system, which has less bias towards simple functions. The Bayesian decoupling approximation (Eq. (3)) captures the dominant trends in the behaviour of the SGD-trained networks as a function of data complexity and training set size. Equation (2) is hard to calculate, so we coarse-grain it by grouping together functions by their complexity: \[\langle P(K|S)\rangle_{m}\approx\frac{P(K)\left\langle\left(1-\epsilon(K)\right)^{m}\right\rangle}{\langle P(S)\rangle_{m}} \tag{3}\] 
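Given an estimated prior \(P(K)\) and a typical lowest error \(\epsilon(K)\), Equation (3) can be evaluated directly. The sketch below uses illustrative placeholder curves for \(P(K)\), \(\epsilon(K)\) and the target complexity, not the measured values from the paper.

```python
import numpy as np

K = np.arange(7, 161, 7)                      # complexity grid
P_K = 2.0 ** (-K / 8.0); P_K /= P_K.sum()     # a simplicity-biased prior (assumed shape)
K_t = 98                                      # target complexity (assumed)
eps_K = np.clip(np.abs(K - K_t) / 250 + 0.02, 0.02, 0.5)  # lowest error at each K

for m in (16, 64, 256):
    post = P_K * (1.0 - eps_K) ** m           # Eq. (3), up to the normalising <P(S)>
    post /= post.sum()
    print(m, K[np.argmax(post)])              # the peak moves towards K_t as m grows
```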
There may be links to more formal arguments in AIT relating to the optimality of Solomonoff induction, which is based on a universal prior that both takes into account all hypotheses consistent with the data (Epicurus's principle) and prioritises the (Kolmogorov) simpler ones [13, 33]. Similarly, the (nearly) flat \(P(K)\) means the DNN has access to hypotheses with a large range of complexities that can then interact with data. Needless to say, much more work is needed to work out whether these connections are superficial or profound (see also Appendix D.4). We were not able to significantly increase the bias towards simplicity in DNNs. Too strong a bias towards simplicity can mean a loss of versatility, because complex functions become hard to find [34, 12]. The methods used in this paper work well for classification of discrete functions. An important further piece of research, where kernel methods may play a more central role, is to study the inductive biases for regression problems with continuous functions. While we have explored some key ingredients, the full question of why DNNs generalize so well remains open. Understanding what the extra 2nd-order effects of hyperparameter tuning etc. are, or how feature learning (which is not present in Bayesian GPs) improves generalization, are important problems that our methods are unlikely to solve. Our results do not preclude the fact that SGD optimization itself can also introduce useful inductive biases (see also Appendix A). Simplicity bias is one of many inductive biases that DNNs likely exhibit. Much of the exciting recent progress in foundation models such as GPT-4 likely arises from inductive biases well beyond just simplicity, although simplicity bias plays an important role for such models [12]. Systematically studying and understanding how these biases interact with data remains one of the key challenges in modern machine learning. Finally, our observations about inductive bias can be inverted to reveal properties of the data on which DNNs are successful at learning. For example, data cannot be too complex. Furthermore, the remarkable success of DNNs on a broad range of scientific problems [35, 36, 37, 38] suggests that their inductive biases must recapitulate something deep about the structure of the natural world [15]. Understanding why DNNs choose the solutions they do may, in turn, generate profound insights into nature itself. Figure 3: **MNIST and CIFAR-10 data.** (a) MNIST generalization error for FCNs on a 1000-image training set versus \(\sigma_{w}\) for three depths. (b) CIFAR-10 generalization error for FCNs trained on a 5000-image training set versus \(\sigma_{w}\) for three depths. The FCNs, made of multiple hidden layers of width 200, were trained with SGD with batch size 32 and lr \(=10^{-3}\) until 100% accuracy was first achieved on the training set. (c) Complexity prior \(P(K)\), for CSR complexity, for 1000 MNIST images for randomly initialised networks of 10 layers and \(\sigma_{w}=1,2\). Probabilities are estimated from a sample of \(2\times 10^{4}\) parameters. Figs (d), (e) and (f) are scatterplots of generalization error versus the CSR for 1000 networks trained to 100% accuracy on a training set of 1000 MNIST images and tested on 1000 different images. 
In (d) the training labels are uncorrupted; in (e) and (f), 25% and 50% of the training labels are corrupted, respectively. Note the qualitative similarity to the scatter plots in Figure 1 (d)-(f). ## Acknowledgments We thank Satwik Bhattamishra, Kamal Dingle, Nayara Fonseca, and Yoonsoo Nam for helpful discussions.
2304.14274
When Do Graph Neural Networks Help with Node Classification? Investigating the Impact of Homophily Principle on Node Distinguishability
Homophily principle, i.e., nodes with the same labels are more likely to be connected, has been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over Neural Networks on node classification tasks. Recent research suggests that, even in the absence of homophily, the advantage of GNNs still exists as long as nodes from the same class share similar neighborhood patterns. However, this argument only considers intra-class Node Distinguishability (ND) but neglects inter-class ND, which provides an incomplete understanding of the effect of homophily on GNNs. In this paper, we first demonstrate such deficiency with examples and argue that an ideal situation for ND is to have smaller intra-class ND than inter-class ND. To formulate this idea and study ND deeply, we propose the Contextual Stochastic Block Model for Homophily (CSBM-H) and define two metrics, Probabilistic Bayes Error (PBE) and negative generalized Jeffreys divergence, to quantify ND. With the metrics, we visualize and analyze how graph filters, node degree distributions and class variances influence ND, and investigate the combined effect of intra- and inter-class ND. Besides, we discovered the mid-homophily pitfall, which occurs widely in graph datasets. Furthermore, we verified that, in real-world tasks, the superiority of GNNs is indeed closely related to both intra- and inter-class ND regardless of homophily levels. Grounded in this observation, we propose a new hypothesis-testing based performance metric beyond homophily, which is non-linear, feature-based and can provide a statistical threshold value for GNNs' superiority. Experiments indicate that it is significantly more effective than the existing homophily metrics at revealing the advantages and disadvantages of graph-aware models on both synthetic and benchmark real-world datasets.
Sitao Luan, Chenqing Hua, Minkai Xu, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang, Jie Fu, Jure Leskovec, Doina Precup
2023-04-25T09:40:47Z
http://arxiv.org/abs/2304.14274v4
When Do Graph Neural Networks Help with Node Classification: Investigating the Homophily Principle on Node Distinguishability ###### Abstract Homophily principle, _i.e.,_ nodes with the same labels are more likely to be connected, has been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over node-based Neural Networks on Node Classification tasks. Recent research suggests that, even in the absence of homophily, the advantage of GNNs still exists as long as nodes from the same class share similar neighborhood patterns [34]. However, this argument only considers intra-class Node Distinguishability (ND) and neglects inter-class ND, which provides an incomplete understanding of homophily. In this paper, we first demonstrate the aforementioned insufficiency with examples and argue that an ideal situation for ND is to have smaller intra-class ND than inter-class ND. To formulate this idea, we propose the Contextual Stochastic Block Model for Homophily (CSBM-H) and define two metrics, Probabilistic Bayes Error (PBE) and negative generalized Jeffreys divergence, to quantify ND, through which we can find how intra- and inter-class ND influence ND together. We visualize the results and give a detailed analysis. Through experiments, we verified that the superiority of GNNs is indeed closely related to both intra- and inter-class ND regardless of homophily levels, based on which we propose a new performance metric beyond homophily, which is non-linear and feature-based. Experiments indicate that it is significantly more effective than the existing homophily metrics at revealing the advantages and disadvantages of GNNs on both synthetic and benchmark real-world datasets. ## 1 Introduction Graph Neural Networks (GNNs) have gained popularity in recent years as a powerful tool for graph-based machine learning tasks. By combining graph signal processing and convolutional neural networks, various GNN architectures have been proposed [24; 17; 42; 32; 21] and have been shown to outperform traditional neural networks in tasks such as node classification (**NC**), graph classification, link prediction and graph generation. The success of GNNs is believed to be rooted in the homophily assumption [38], which states that connected nodes tend to have similar attributes [16], providing extra useful information to the aggregated features over the original node features. This relational inductive bias is thought to be a major contributor to the superior performance of GNNs over traditional neural networks in various tasks [4]. On the other hand, the lack of homophily, _i.e.,_ heterophily, is considered the main cause of the inferiority of GNNs on heterophilic graphs, because nodes from different classes are connected and mixed, which can lead to indistinguishable node embeddings, making the classification task more difficult for GNNs [48; 47; 33]. Numerous models have been proposed to address the heterophily challenge lately [40; 48; 47; 33; 5; 28; 7; 46; 19; 30; 27; 43; 31]. Recently, both empirical and theoretical studies have indicated that the relationship between homophily and GNN performance is more complicated than "homophily wins, heterophily loses" [34; 31]. For example, the authors in [34] stated that, as long as nodes within the same class share similar neighborhood patterns, their embeddings will be similar after aggregation. They provided experimental evidence and theoretical analysis, and concluded that homophily may not be necessary for GNNs to distinguish nodes. 
The paper [31] studied homophily/heterophily from a post-aggregation node similarity perspective and found that heterophily is not always harmful, which is consistent with [34]. Besides, the authors have proposed to use a high-pass filter to address some heterophily cases, which is adopted in [7; 5] as well. They have also proposed aggregation homophily, which is a linear, feature-independent performance metric and is verified to be better at revealing the performance advantages and disadvantages of GNNs than the existing homophily metrics [40; 48; 28]. Moreover, [6] has investigated heterophily from a neighbor-identifiability perspective and stated that heterophily can be helpful for NC when the neighbor distributions of intra-class nodes are identifiable. Although the current literature on the homophily principle provides profound insights, it is still deficient: 1. [34; 6] only consider intra-class node distinguishability (**ND**), but ignore inter-class ND; 2. [31] does not show when and how a high-pass filter can help with the heterophily problem; 3. There is a lack of a non-linear, feature-based performance metric which can leverage richer information to provide an **accurate threshold value** to indicate whether GNNs are really needed on a certain task or not. To address those issues, in this paper: 1. We show that, to comprehensively study the impact of homophily on ND, one needs to consider intra- and inter-class ND together, and an ideal case is to have smaller intra-class ND than inter-class ND; 2. To formulate this idea, we propose the Contextual Stochastic Block Model for Homophily (CSBM-H) as the graph generative model. It incorporates an explicit parameter to manage homophily, alongside class variance parameters to control intra-class ND, and node degree parameters, which are important [34; 46]; 3. To quantify the ND of CSBM-H, we propose Probabilistic Bayes Error (**PBE**) and Negative Generalized Jeffreys Divergence (\(D_{\text{NGJ}}\)), through which we can analytically study how intra- and inter-class ND impact ND together. We visualize the PBE and \(D_{\text{NGJ}}\) of the original features, low-pass (**LP**) filtered features and high-pass (**HP**) filtered features at different homophily levels, and discuss how class variances and node degrees influence ND in detail; 4. In practice, we verify that the performance superiority of GNNs is indeed related to whether intra-class ND is smaller than inter-class ND, regardless of homophily levels. Based on this, we propose the Classifier-based Performance Metric (CPM), a new non-linear, feature-based metric that can provide a statistical threshold. Experiments show that CPM is significantly more effective than the existing homophily metrics at predicting the performance of GNNs versus NNs. ## 2 Preliminaries We use **bold** font for vectors (_e.g.,_ \(\mathbf{v}\)) and define an undirected connected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of nodes with a total of \(N\) elements and \(\mathcal{E}\) is the set of edges without self-loops. \(A\) is the symmetric adjacency matrix with \(A_{i,j}=1\) if there is an edge between nodes \(i\) and \(j\), otherwise \(A_{i,j}=0\). We also define \(D\) as the diagonal degree matrix of the graph, with \(D_{i,i}=d_{i}=\sum_{j}A_{i,j}\). The neighborhood set of a node \(i\), denoted as \(\mathcal{N}_{i}\), is defined as \(\mathcal{N}_{i}=\{j:e_{ij}\in\mathcal{E}\}\). A graph signal is a vector in \(\mathbb{R}^{N}\), whose \(i\)-th entry is a feature of node \(i\). 
Additionally, we use \(X\in\mathbb{R}^{N\times F}\) to denote the feature matrix, whose columns are graph signals and whose \(i\)-th row \(X_{i,:}=\mathbf{x}_{i}^{T}\) is the feature vector of node \(i\). The label encoding matrix \(Z\in\mathbb{R}^{N\times C}\), where \(C\) is the number of classes, has its \(i\)-th row \(Z_{i,:}\) as the one-hot encoding of the label of node \(i\). We denote \(z_{i}=\operatorname*{arg\,max}_{j}Z_{i,j}\in\{1,2,\ldots C\}\). The indicator function \(\mathbf{1}_{B}\) equals 1 when event \(B\) happens and 0 otherwise. For nodes \(i,j\in\mathcal{V}\), if \(z_{i}=z_{j}\), then they are considered _intra-class nodes_; if \(z_{i}\neq z_{j}\), then they are considered _inter-class nodes_. Similarly, an edge \(e_{i,j}\in\mathcal{E}\) is considered an _intra-class edge_ if \(z_{i}=z_{j}\), and an _inter-class edge_ if \(z_{i}\neq z_{j}\). ### Graph-aware Models and Graph-agnostic Models A network that includes the feature aggregation step according to the graph structure is called a graph-aware (**G-aware**) model, _e.g.,_ GCN [24], SGC-1 [45]; a network that does not use the graph structure is called a graph-agnostic (**G-agnostic**) model, such as the Multi-Layer Perceptron with 2 layers (MLP-2) and MLP-1. A G-aware model is often coupled with a G-agnostic model, because when we remove the aggregation step in a G-aware model, it becomes exactly the same as its coupled G-agnostic model, _e.g.,_ GCN is coupled with MLP-2 and SGC-1 is coupled with MLP-1, as shown below, \[\text{GCN: }Y=\text{softmax}(\hat{A}_{\text{sym}}\text{ ReLU}(\hat{A}_{\text{sym}}XW_{0})\ W_{1}),\ \ \text{MLP-2: }Y=\text{softmax}(\text{ReLU}(XW_{0})\ W_{1}), \tag{1}\] \[\text{SGC-1: }Y=\text{softmax}(\hat{A}_{\text{sym}}XW_{0}),\ \ \text{MLP-1: }Y=\text{softmax}(XW_{0}),\] where \(\hat{A}_{\text{sym}}=\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}\), \(\tilde{A}\equiv A+I\) and \(\tilde{D}\equiv D+I\); \(W_{0}\in\mathbb{R}^{F_{0}\times F_{1}}\) and \(W_{1}\in\mathbb{R}^{F_{1}\times O}\) are learnable parameter matrices. For simplicity, we denote \(y_{i}=\operatorname*{arg\,max}_{j}Y_{i,j}\in\{1,2,\ldots C\}\). The random walk renormalized matrix \(\hat{A}_{\text{rw}}=\tilde{D}^{-1}\tilde{A}\) can also be applied to GCN, which is essentially a mean aggregator commonly used in some spatial-based GNNs [17]. To bridge spectral and spatial methods, we use \(\hat{A}_{\text{rw}}\) in the theoretical analysis, but **self-loops are not added to the adjacency matrix** to maintain consistency with previous literature [34; 31]. To address the heterophily challenge, a high-pass (HP) filter [13], such as \(I-\hat{A}_{\text{rw}}\), is often used to replace the low-pass (LP) filter [35] \(\hat{A}_{\text{rw}}\) in GCN [5; 7; 31]. In this paper, we use \(\hat{A}_{\text{rw}}\) and \(I-\hat{A}_{\text{rw}}\) as the LP and HP operators, respectively. The LP and HP filtered feature matrices are represented as \(H=\hat{A}_{\text{rw}}X\) and \(H^{\text{HP}}=(I-\hat{A}_{\text{rw}})X\). For simplicity, we denote \(\mathbf{h}_{i}=(H_{i,:})^{T},\mathbf{h}_{i}^{\text{HP}}=(H_{i,:}^{\text{HP}})^{T}\). 
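As a quick illustration, below is a sketch of the LP and HP filtered features just defined, using the stated convention that self-loops are not added (so the mean aggregator reduces to \(D^{-1}A\)):

```python
import numpy as np

def lp_hp_features(A: np.ndarray, X: np.ndarray):
    """LP features H = A_rw X (mean of neighbor features) and HP features
    H_hp = (I - A_rw) X; A is the adjacency matrix without self-loops."""
    A_rw = A / A.sum(axis=1, keepdims=True)   # row-normalized adjacency
    H = A_rw @ X
    return H, X - H

# Toy graph: a triangle plus a pendant node, with 1-D features.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.array([[1.0], [2.0], [3.0], [10.0]])
H, H_hp = lp_hp_features(A, X)
```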
**To measure whether G-aware models can outperform their coupled G-agnostic models without training**, a lot of homophily metrics have been proposed, and we will introduce the most commonly used ones in the following subsection. ### Homophily Metrics The homophily metric is a way to describe the relation between node labels and graph structure. We introduce five commonly used homophily metrics: edge homophily [1; 48], node homophily [40], class homophily [28], generalized edge homophily [23] and aggregation homophily [31], as follows: \[\text{H}_{\text{edge}}(\mathcal{G})=\frac{\big{|}\{e_{uv}\,|\,e_{uv}\in\mathcal{E},\,Z_{u,:}=Z_{v,:}\}\big{|}}{|\mathcal{E}|},\ \ \text{H}_{\text{node}}(\mathcal{G})=\frac{1}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}\text{H}_{\text{node}}^{v}=\frac{1}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}\frac{\big{|}\{u\,|\,u\in\mathcal{N}_{v},\,Z_{u,:}=Z_{v,:}\}\big{|}}{d_{v}},\] \[\text{H}_{\text{class}}(\mathcal{G})=\frac{1}{C-1}\sum_{k=1}^{C}\left[h_{k}-\frac{\big{|}\{v\,|\,Z_{v,k}=1\}\big{|}}{N}\right]_{+},\ \text{where}\ h_{k}=\frac{\sum_{v\in\mathcal{V},\,Z_{v,k}=1}\big{|}\{u\,|\,u\in\mathcal{N}_{v},\,Z_{u,:}=Z_{v,:}\}\big{|}}{\sum_{v\in\{v|Z_{v,k}=1\}}d_{v}},\] \[\text{H}_{\text{GE}}(\mathcal{G})=\frac{\sum_{(i,j)\in\mathcal{E}}\cos(\mathbf{x}_{i},\mathbf{x}_{j})}{|\mathcal{E}|},\ \ \text{H}_{\text{agg}}(\mathcal{G})=\frac{1}{|\mathcal{V}|}\times\Big{|}\big{\{}v\,\big{|}\,\operatorname{Mean}_{u}\big{(}\{S(\hat{A},Z)_{v,u}\,|\,Z_{u,:}=Z_{v,:}\}\big{)}\geq\operatorname{Mean}_{u}\big{(}\{S(\hat{A},Z)_{v,u}\,|\,Z_{u,:}\neq Z_{v,:}\}\big{)}\big{\}}\Big{|} \tag{2}\] where \(\text{H}_{\text{node}}^{v}\) is the local homophily value for node \(v\); \([a]_{+}=\max(0,a)\); \(h_{k}\) is the class-wise homophily metric [28]; \(\operatorname{Mean}_{u}\big{(}\{\cdot\}\big{)}\) takes the average over \(u\) of a given multiset of values or variables, and \(S(\hat{A},Z)=\hat{A}Z(\hat{A}Z)^{T}\) is the post-aggregation node similarity matrix. These metrics all fall within the range of \([0,1]\); a value closer to \(1\) indicates strong homophily and implies that G-aware models are more likely to outperform their coupled G-agnostic models, and vice versa. However, the current homophily metrics are all linear, feature-independent metrics, which fail to give an accurate indication of the superiority of G-aware models and cannot provide a threshold value [31] for that superiority. 
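A sketch of the first two metrics in Equation (2), edge homophily and node homophily, for a graph given as a dense adjacency matrix with integer labels:

```python
import numpy as np

def edge_homophily(A: np.ndarray, z: np.ndarray) -> float:
    """Fraction of edges joining nodes with the same label (H_edge)."""
    src, dst = np.nonzero(np.triu(A, k=1))    # each undirected edge counted once
    return float(np.mean(z[src] == z[dst]))

def node_homophily(A: np.ndarray, z: np.ndarray) -> float:
    """Average over nodes of the fraction of same-label neighbors (H_node)."""
    vals = [np.mean(z[np.nonzero(A[v])[0]] == z[v])
            for v in range(A.shape[0]) if A[v].sum() > 0]
    return float(np.mean(vals))
```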
## 3 Analysis of Homophily on Node Distinguishability (ND) ### Motivation The Problem in Current Literature. Recent research has shown that heterophily does not always negatively impact the embeddings of intra-class nodes, as long as their neighborhood patterns "corrupt in the same way" [34; 6]. For example, in Figure 1, nodes {1,2} are from class blue and both have the same heterophilic neighborhood patterns. As a result, their aggregated features will still be similar and they can be classified into the same class. However, this is only partially true for ND if we neglect inter-class ND: _e.g.,_ node 3 in Figure 1 is from class green and also has the same neighborhood pattern as nodes {1,2}, which means the inter-class ND will be lost after aggregation. This highlights the necessity of carefully considering both intra- and inter-class ND when evaluating the impact of homophily on the performance of GNNs; an ideal case for NC would be nodes {1,2,4}, where we have smaller intra-class "distance" than inter-class "distance". We will formulate the above idea in this section and verify whether it really relates to the performance of GNNs in Section 4. Figure 1: Example of intra- and inter-class node distinguishability. ### CSBM-H and Optimal Bayes Classifier In order to have more control over the assumptions made about the node embeddings, we consider the Contextual Stochastic Block Model (CSBM) [11]. It is a generative model that is commonly used to create graphs and node features, and it has been widely adopted to study the behavior of GNNs [41, 3, 44]. To investigate the impact of homophily on ND, the authors in [34] simplify CSBM to the two-normal setting, where the node features \(X\) are assumed to be sampled from two normal distributions and intra- and inter-class edges are generated according to two separate parameters. This simplification does not lose much information about CSBM, but 1. it does not include an explicit homophily parameter to study homophily directly and intuitively; 2. it does not include class variance parameters to study intra-class ND; 3. the authors do not rigorously quantify ND. In this section, we introduce the Contextual Stochastic Block Model for Homophily/Heterophily (CSBM-H), which is a variation of CSBM that incorporates an explicit homophily parameter \(h\) for the two-normal setting and also has class variance parameters \(\sigma_{0}^{2},\sigma_{1}^{2}\) to describe the intra-class ND. We then derive the optimal Bayes classifier (\(\text{CL}_{\text{Bayes}}\)) and the negative generalized Jeffreys divergence for CSBM-H, based on which we can quantify the ND of CSBM-H. **CSBM-H(\(\mathbf{\mu}_{0},\mathbf{\mu}_{1},\sigma_{0}^{2}I,\sigma_{1}^{2}I,d_{0},d_{1},h\))** The generated graph consists of two disjoint sets of nodes, \(i\in\mathcal{C}_{0}\) and \(j\in\mathcal{C}_{1}\), corresponding to the two classes. The features of each node are generated independently, with \(\mathbf{x}_{i}\) generated from \(N(\mathbf{\mu}_{0},\sigma_{0}^{2}I)\) and \(\mathbf{x}_{j}\) generated from \(N(\mathbf{\mu}_{1},\sigma_{1}^{2}I)\), where \(\mathbf{\mu}_{0},\mathbf{\mu}_{1}\in\mathbb{R}^{F_{h}}\) and \(F_{h}\) is the dimension of the embeddings. The degrees of nodes in \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\) are \(d_{0},d_{1}\in\mathbb{N}\), respectively. For \(i\in\mathcal{C}_{0}\), its neighbors are generated by independently sampling from \(h\cdot d_{0}\) intra-class nodes and \((1-h)\cdot d_{0}\) inter-class nodes. The neighbors of \(j\in\mathcal{C}_{1}\) are generated in the same way. 
As a result, the FP, LP and HP filtered features are generated as follows, \[\begin{split}& i\in\mathcal{C}_{0}:\mathbf{x}_{i}\sim N(\mathbf{\mu}_{0},\sigma_{0}^{2}I);\ \mathbf{h}_{i}\sim N(\tilde{\mathbf{\mu}}_{0},\tilde{\sigma}_{0}^{2}I),\ \mathbf{h}_{i}^{\text{HP}}\sim N\left(\tilde{\mathbf{\mu}}_{0}^{\text{HP}},(\tilde{\sigma}_{0}^{\text{HP}})^{2}I\right),\\ & j\in\mathcal{C}_{1}:\mathbf{x}_{j}\sim N(\mathbf{\mu}_{1},\sigma_{1}^{2}I);\ \mathbf{h}_{j}\sim N(\tilde{\mathbf{\mu}}_{1},\tilde{\sigma}_{1}^{2}I),\ \mathbf{h}_{j}^{\text{HP}}\sim N\left(\tilde{\mathbf{\mu}}_{1}^{\text{HP}},(\tilde{\sigma}_{1}^{\text{HP}})^{2}I\right),\end{split} \tag{3}\] where \(\tilde{\mathbf{\mu}}_{0}=h(\mathbf{\mu}_{0}-\mathbf{\mu}_{1})+\mathbf{\mu}_{1}\), \(\tilde{\mathbf{\mu}}_{1}=h(\mathbf{\mu}_{1}-\mathbf{\mu}_{0})+\mathbf{\mu}_{0}\), \(\tilde{\mathbf{\mu}}_{0}^{\text{HP}}=(1-h)(\mathbf{\mu}_{0}-\mathbf{\mu}_{1})\), \(\tilde{\mathbf{\mu}}_{1}^{\text{HP}}=(1-h)(\mathbf{\mu}_{1}-\mathbf{\mu}_{0})\), \(\tilde{\sigma}_{0}^{2}=\frac{h(\sigma_{0}^{2}-\sigma_{1}^{2})+\sigma_{1}^{2}}{d_{0}}\), \(\tilde{\sigma}_{1}^{2}=\frac{h(\sigma_{1}^{2}-\sigma_{0}^{2})+\sigma_{0}^{2}}{d_{1}}\), \((\tilde{\sigma}_{0}^{\text{HP}})^{2}=\sigma_{0}^{2}+\frac{h(\sigma_{0}^{2}-\sigma_{1}^{2})+\sigma_{1}^{2}}{d_{0}}\), \((\tilde{\sigma}_{1}^{\text{HP}})^{2}=\sigma_{1}^{2}+\frac{h(\sigma_{1}^{2}-\sigma_{0}^{2})+\sigma_{0}^{2}}{d_{1}}\). If \(\sigma_{0}^{2}<\sigma_{1}^{2}\), we refer to \(\mathcal{C}_{0}\) as the low-variation class and \(\mathcal{C}_{1}\) as the high-variation class. The variance of each class can reflect the intra-class ND. We abuse the notation \(\mathbf{x}_{i}\in\mathcal{C}_{0}\) for \(i\in\mathcal{C}_{0}\) and \(\mathbf{x}_{j}\in\mathcal{C}_{1}\) for \(j\in\mathcal{C}_{1}\). 
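The aggregated statistics in Equation (3) can be checked by direct simulation; below is a sketch for a one-dimensional feature of class \(\mathcal{C}_0\), assuming \(h\cdot d_0\) is an integer:

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_stats_class0(mu0, mu1, s0sq, s1sq, d0, h, n=100_000):
    """Monte Carlo estimate of the mean/variance of the LP-aggregated feature
    for class C0: each node averages h*d0 intra- and (1-h)*d0 inter-class
    neighbor features."""
    k_in = int(round(h * d0))
    intra = rng.normal(mu0, np.sqrt(s0sq), (n, k_in))
    inter = rng.normal(mu1, np.sqrt(s1sq), (n, d0 - k_in))
    agg = np.concatenate([intra, inter], axis=1).mean(axis=1)
    return agg.mean(), agg.var()

m, v = lp_stats_class0(mu0=-1.0, mu1=0.0, s0sq=1.0, s1sq=2.0, d0=5, h=0.8)
# Theory: mean = h*(mu0 - mu1) + mu1 = -0.8, var = (h*(s0sq - s1sq) + s1sq)/d0 = 0.24
print(m, v)
```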
To quantify the ND of CSBM-H, we first compute the optimal Bayes classifier in the following theorem. The theorem is stated for \(\mathbf{x}\), but the results are applicable to \(\mathbf{h}\) and \(\mathbf{h}^{\text{HP}}\) when the parameters are replaced according to Equation (3). **Theorem 1**.: Suppose \(\sigma_{0}^{2}\neq\sigma_{1}^{2}\) and \(\sigma_{0}^{2},\sigma_{1}^{2}>0\), and the prior distribution for \(\mathbf{x}_{i}\) is \(\mathbb{P}(\mathbf{x}_{i}\in\mathcal{C}_{0})=\mathbb{P}(\mathbf{x}_{i}\in\mathcal{C}_{1})=1/2\). Then the optimal Bayes classifier (\(\text{CL}_{\text{Bayes}}\)) for CSBM-H (\(\mathbf{\mu}_{0},\mathbf{\mu}_{1},\sigma_{0}^{2}I,\sigma_{1}^{2}I,d_{0},d_{1},h\)) is1 Footnote 1: The Bayes classifier for multiple categories (\(>2\)) can be computed by stacking multiple expectation terms using similar methods as in [12, 14]. We do not discuss these more complicated settings in this paper. \[\text{CL}_{\text{Bayes}}(\mathbf{x}_{i})=\begin{cases}1,\ \eta(\mathbf{x}_{i})\geq 0.5\\ 0,\ \eta(\mathbf{x}_{i})<0.5\end{cases},\ \ \text{and}\ \ \eta(\mathbf{x}_{i})=\mathbb{P}(z_{i}=1|\mathbf{x}_{i})=\frac{1}{1+\exp\left(Q(\mathbf{x}_{i})\right)},\] where \(Q(\mathbf{x}_{i})=a\mathbf{x}_{i}^{T}\mathbf{x}_{i}+\mathbf{b}^{T}\mathbf{x}_{i}+c\), \(a=\frac{1}{2}\left(\frac{1}{\sigma_{1}^{2}}-\frac{1}{\sigma_{0}^{2}}\right),\ \mathbf{b}=\frac{\mathbf{\mu}_{0}}{\sigma_{0}^{2}}-\frac{\mathbf{\mu}_{1}}{\sigma_{1}^{2}},\ c=\frac{\mathbf{\mu}_{1}^{T}\mathbf{\mu}_{1}}{2\sigma_{1}^{2}}-\frac{\mathbf{\mu}_{0}^{T}\mathbf{\mu}_{0}}{2\sigma_{0}^{2}}+\ln\left(\frac{\sigma_{1}^{F_{h}}}{\sigma_{0}^{F_{h}}}\right)\). Proof.: See Appendix A. **Advantages of \(\text{CL}_{\text{Bayes}}\) Over the Fixed Linear Classifier in [34]** The classifier proposed in [34] is fixed and depends only on the two centers \(\mathbf{\mu}_{0},\mathbf{\mu}_{1}\). The data centers shift as \(h\) changes. However, the fixed classifier cannot capture such distribution movement and thus is not qualified to measure ND for different \(h\). Besides, we cannot investigate how the variances \(\sigma_{0}^{2}\) and \(\sigma_{1}^{2}\) and the node degrees \(d_{0}\) and \(d_{1}\) affect ND with the fixed classifier in [34]. In the following subsection, we will define two methods to quantify the ND of CSBM-H: one is based on \(\text{CL}_{\text{Bayes}}\), which is precise but hard to interpret; the other is based on KL-divergence, which gives a more intuitive understanding of how intra- and inter-class ND impact ND at different homophily levels. These two measurements can also be used together to analyze ND. ### Measure Node Distinguishability of CSBM-H The Bayes error rate (BE) is the probability of a node being misclassified when the true class probabilities given the predictors are known [18]. It can be used to measure the distinguishability of node embeddings, and the BE for \(\text{CL}_{\text{Bayes}}\) is defined as follows. **Definition 1** (Bayes Error Rate).: _The Bayes error rate [18] for \(\text{CL}_{\text{Bayes}}\) is defined as_ \[\text{BE}=\mathbb{E}_{\mathbf{x}}[\mathbb{P}(\text{CL}_{\text{Bayes}}(\mathbf{x})\neq z\,|\,\mathbf{x})]=\mathbb{E}_{\mathbf{x}}[1-\mathbb{P}(z=\text{CL}_{\text{Bayes}}(\mathbf{x})\,|\,\mathbf{x})].\] Specifically, the BE for CSBM-H can be written as \[\text{BE}=\mathbb{P}\left(\mathbf{x}\in\mathcal{C}_{0}\right)(1-\mathbb{P}(\text{CL}_{\text{Bayes}}(\mathbf{x})=0|\mathbf{x}\in\mathcal{C}_{0}))+\mathbb{P}(\mathbf{x}\in\mathcal{C}_{1})\left(1-\mathbb{P}(\text{CL}_{\text{Bayes}}(\mathbf{x})=1|\mathbf{x}\in\mathcal{C}_{1})\right). \tag{4}\] In order to estimate the above value, we define the Probabilistic Bayes Error (PBE). **Probabilistic Bayes Error (PBE)** The random variable in each dimension of \(\mathbf{x}_{i}\) is independently normally distributed. As a result, \(Q(\mathbf{x}_{i})\) defined in Theorem 1 follows a generalized \(\chi^{2}\) distribution [9, 10] (see the calculation in Appendix D). Specifically, \[\text{For }\mathbf{x}_{i}\in\mathcal{C}_{0},\ Q(\mathbf{x}_{i})\sim\tilde{\chi}^{2}(w_{0},F_{h},\lambda_{0})+\xi;\ \mathbf{x}_{j}\in\mathcal{C}_{1},\ Q(\mathbf{x}_{j})\sim\tilde{\chi}^{2}(w_{1},F_{h},\lambda_{1})+\xi,\] where \(w_{0}=a\sigma_{0}^{2},w_{1}=a\sigma_{1}^{2}\), the degree of freedom is \(F_{h}\), \(\lambda_{0}=(\frac{\mathbf{\mu}_{0}}{\sigma_{0}}+\frac{\mathbf{b}}{2a\sigma_{0}})^{T}(\frac{\mathbf{\mu}_{0}}{\sigma_{0}}+\frac{\mathbf{b}}{2a\sigma_{0}}),\ \lambda_{1}=(\frac{\mathbf{\mu}_{1}}{\sigma_{1}}+\frac{\mathbf{b}}{2a\sigma_{1}})^{T}(\frac{\mathbf{\mu}_{1}}{\sigma_{1}}+\frac{\mathbf{b}}{2a\sigma_{1}})\) and \(\xi=c-\frac{\mathbf{b}^{T}\mathbf{b}}{4a}\). Then, by using the Cumulative Distribution Function (CDF) of \(\tilde{\chi}^{2}\), we can calculate the predicted probabilities directly as \[\mathbb{P}(\text{CL}_{\text{Bayes}}(\mathbf{x})=0|\mathbf{x}\in\mathcal{C}_{0})=1-\text{CDF}_{\tilde{\chi}^{2}(w_{0},F_{h},\lambda_{0})}(-\xi),\ \mathbb{P}(\text{CL}_{\text{Bayes}}(\mathbf{x})=1|\mathbf{x}\in\mathcal{C}_{1})=\text{CDF}_{\tilde{\chi}^{2}(w_{1},F_{h},\lambda_{1})}(-\xi).\] Suppose we have a balanced prior distribution \(\mathbb{P}(\mathbf{x}\in\mathcal{C}_{0})=\mathbb{P}(\mathbf{x}\in\mathcal{C}_{1})=1/2\). Then the PBE is computed as \[\frac{\text{CDF}_{\tilde{\chi}^{2}(w_{0},F_{h},\lambda_{0})}(-\xi)+\left(1-\text{CDF}_{\tilde{\chi}^{2}(w_{1},F_{h},\lambda_{1})}(-\xi)\right)}{2}.\] 
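The PBE can equivalently be estimated by direct Monte Carlo simulation of \(\text{CL}_{\text{Bayes}}\), which avoids evaluating the generalized \(\chi^2\) CDF; a sketch with the standard setting \(\mathbf{\mu}_0=[-1,0]\), \(\mathbf{\mu}_1=[0,1]\), \(\sigma_0^2=1\), \(\sigma_1^2=2\):

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, mu1 = np.array([-1.0, 0.0]), np.array([0.0, 1.0])
s0sq, s1sq, Fh = 1.0, 2.0, 2

# Coefficients of Q(x) from Theorem 1.
a = 0.5 * (1 / s1sq - 1 / s0sq)
b = mu0 / s0sq - mu1 / s1sq
c = (mu1 @ mu1) / (2 * s1sq) - (mu0 @ mu0) / (2 * s0sq) + 0.5 * Fh * np.log(s1sq / s0sq)

def cl_bayes(x):
    """Predict class 1 iff eta(x) >= 0.5, i.e. iff Q(x) <= 0."""
    Q = a * (x * x).sum(axis=1) + x @ b + c
    return (Q <= 0).astype(int)

n = 200_000
x0 = rng.normal(mu0, np.sqrt(s0sq), size=(n, Fh))
x1 = rng.normal(mu1, np.sqrt(s1sq), size=(n, Fh))
pbe = 0.5 * (cl_bayes(x0) == 1).mean() + 0.5 * (cl_bayes(x1) == 0).mean()
print(pbe)   # Monte Carlo estimate of the Bayes error
```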
To investigate the impact of homophily on the ND of the LP filtered and HP filtered embeddings, we just need to replace \(\left(\mathbf{\mu}_{0},\sigma_{0}^{2},\mathbf{\mu}_{1},\sigma_{1}^{2}\right)\) with \(\left(\tilde{\mathbf{\mu}}_{0},\tilde{\sigma}_{0}^{2},\tilde{\mathbf{\mu}}_{1},\tilde{\sigma}_{1}^{2}\right)\) and \(\left(\tilde{\mathbf{\mu}}_{0}^{\text{HP}},(\tilde{\sigma}_{0}^{\text{HP}})^{2},\tilde{\mathbf{\mu}}_{1}^{\text{HP}},(\tilde{\sigma}_{1}^{\text{HP}})^{2}\right)\) according to Equation (3). The PBE can be numerically calculated and visualized to show the relation between \(h\) and ND precisely. However, we do not have an analytic expression for the PBE, which makes it less explainable and intuitive. To address this issue, we define another metric for ND in the following paragraphs. **Generalized Jeffreys Divergence** The KL-divergence is a statistical measure of how a probability distribution \(P\) is different from another distribution \(Q\) [8]. It offers us a tool to define an explainable ND measure, the generalized Jeffreys divergence, as follows. **Definition 2** (Generalized Jeffreys Divergence).: _For a random variable \(\mathbf{x}\) which has either the distribution \(P(\mathbf{x})\) or the distribution \(Q(\mathbf{x})\), the generalized Jeffreys divergence 2 is defined as_ Footnote 2: Jeffreys divergence [22] is defined as \(D_{\text{KL}}(P||Q)+D_{\text{KL}}(Q||P)\) \[D_{\text{GJ}}(P,Q)=\mathbb{P}(\mathbf{x}\sim P)\mathbb{E}_{\mathbf{x}\sim P}\left[\ln\frac{P(\mathbf{x})}{Q(\mathbf{x})}\right]+\mathbb{P}(\mathbf{x}\sim Q)\mathbb{E}_{\mathbf{x}\sim Q}\left[\ln\frac{Q(\mathbf{x})}{P(\mathbf{x})}\right]\] With \(\mathbb{P}(\mathbf{x}\sim P)=\mathbb{P}(\mathbf{x}\sim Q)=1/2\), the negative generalized Jeffreys divergence for the two-normal setting in CSBM-H can be computed by (see Appendix C for the calculation) \[D_{\text{NGJ}}(\text{CSBM-H})=\underbrace{-d_{X}^{2}\left(\frac{1}{4\sigma_{1}^{2}}+\frac{1}{4\sigma_{0}^{2}}\right)}_{\text{Expected Negative Normalized Distance}}\ \underbrace{-\frac{F_{h}}{4}\left(\rho^{2}+\frac{1}{\rho^{2}}-2\right)}_{\text{Negative Variance Ratio}} \tag{5}\] where \(d_{X}^{2}=(\mathbf{\mu}_{0}-\mathbf{\mu}_{1})^{T}(\mathbf{\mu}_{0}-\mathbf{\mu}_{1})\) is the squared Euclidean distance between the centers and \(\rho=\frac{\sigma_{0}}{\sigma_{1}}\); since we assume \(\sigma_{0}^{2}<\sigma_{1}^{2}\), we have \(0<\rho<1\). For \(\mathbf{h}\) and \(\mathbf{h}^{\text{HP}}\), we have \(d_{H}^{2}=(2h-1)^{2}d_{X}^{2},\ d_{\text{HP}}^{2}=4(1-h)^{2}d_{X}^{2}\). The smaller the \(D_{\text{NGJ}}\) of a CSBM-H, the more distinguishable the node embeddings are. \(D_{\text{NGJ}}\) relies on two terms, the Expected Negative Normalized Distance (ENND) and the Negative Variance Ratio (NVR): (i) ENND depends on how large the inter-class ND \(d_{X}^{2}\) is compared with the normalization term \(\frac{1}{4\sigma_{1}^{2}}+\frac{1}{4\sigma_{0}^{2}}\), which is determined by intra-class ND (the variances \(\sigma_{0},\sigma_{1}\)); (ii) NVR depends on how different the two intra-class NDs are, _i.e.,_ when the intra-class ND of the high-variation class is significantly larger than that of the low-variation class (\(\rho\) close to 0), NVR is small, which means the nodes are more distinguishable, and vice versa. 
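Equation (5) is straightforward to evaluate; the sketch below traces the \(D_{\text{NGJ}}\) of the LP filtered features over \(h\) in the standard setting used next, reproducing the bell shape discussed below:

```python
import numpy as np

def d_ngj(mu0, mu1, s0sq, s1sq, Fh):
    """Negative generalized Jeffreys divergence, Eq. (5): ENND + NVR.
    Smaller values mean more distinguishable node embeddings."""
    dX2 = float((mu0 - mu1) @ (mu0 - mu1))
    rho2 = s0sq / s1sq
    ennd = -dX2 * (1 / (4 * s1sq) + 1 / (4 * s0sq))
    nvr = -(Fh / 4) * (rho2 + 1 / rho2 - 2)
    return ennd + nvr

mu0, mu1, s0sq, s1sq, d0, d1 = np.array([-1., 0.]), np.array([0., 1.]), 1.0, 2.0, 5, 5
for h in (0.0, 0.25, 0.5, 0.75, 1.0):
    m0, m1 = h * (mu0 - mu1) + mu1, h * (mu1 - mu0) + mu0   # Eq. (3) centers
    v0 = (h * (s0sq - s1sq) + s1sq) / d0                    # Eq. (3) variances
    v1 = (h * (s1sq - s0sq) + s0sq) / d1
    print(h, d_ngj(m0, m1, v0, v1, Fh=2))   # largest (worst) near h = 0.5
```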
Now, we can investigate the impact of homophily on ND through the lens of PBE and \(D_{\text{NGJ}}\). Specifically, in the standard CSBM-H setting shown in Figure 2, with \(\mathbf{\mu}_{0}=[-1,0],\mathbf{\mu}_{1}=[0,1],\sigma_{0}^{2}=1,\sigma_{1}^{2}=2,d_{0}=5,d_{1}=5\), the PBE and \(D_{\text{NGJ}}\) curves for the LP filtered feature \(\mathbf{h}\) are bell-shaped 3, indicating that when the homophily value is extremely low or high, the aggregated node embeddings become more distinguishable than at medium levels of homophily. The PBE and \(D_{\text{NGJ}}\) curves for \(\mathbf{h}^{\text{HP}}\) are monotonically increasing, which means that the high-pass filter works better in heterophily areas than in homophily areas. Moreover, it is observed that \(\mathbf{x}\), \(\mathbf{h}\), and \(\mathbf{h}^{\text{HP}}\) attain the lowest PBE and \(D_{\text{NGJ}}\) in different homophily intervals, which we refer to as the "FP zone _(black)_", "LP zone _(green)_", and "HP zone _(red)_". This indicates that the LP filter works better at very low and very high homophily intervals (the two ends), the HP filter works better in the low to medium homophily interval 4, and the original (_i.e.,_ full-pass or FP filtered) features work better in the medium to high homophily area. Footnote 3: This is consistent with the empirical results found in [31] that the relation between GNN performance and homophily value is a U-shaped curve. Footnote 4: This verifies the conjecture made in [31] that a high-pass filter cannot address all kinds of heterophily and only works well for certain heterophily cases. Researchers have always been interested in exploring how node degrees relate to the effect of homophily [34, 46]. In the upcoming subsection, besides node degree, we will also take a deeper look at the impact of class variances via the homophily-ND curves and the FP, LP and HP zones. ### Ablation Study on CSBM-H **Increase the Variance of the High-variation Class (\(\sigma_{0}^{2}=1,\sigma_{1}^{2}=5\))** From Figure 3, it is observed that as the variance in \(\mathcal{C}_{1}\) increases and the variances of \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\) become more imbalanced, the PBE and \(D_{\text{NGJ}}\) of the three curves all go up, which means the node embeddings become less distinguishable under the HP, LP and FP filters. The significant shrinkage of the HP zones and the expansion of the FP zone indicate that the original features are more robust to imbalanced variances, especially in the low heterophily area, which can be seen from the NVR in Figure 3 (d). **Increase the Variance of the Low-variation Class (\(\sigma_{0}^{2}=1.9,\sigma_{1}^{2}=2\))** As shown in Figure 9 in Appendix F, when the variance in \(\mathcal{C}_{0}\) increases and the variances of \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\) become more balanced, the PBE and \(D_{\text{NGJ}}\) curves go up, which means the node embeddings become less distinguishable. 
The LP, HP and FP zones stay almost the same, because the magnitude of the NVR becomes so small that it has almost no effect on ND, as shown in Figure 9 (d). Interestingly, we found that the change of variances causes little difference in the ENND of the 3 zones; the movement of the 3 zones mainly comes from the NVR 5, and the HP filter is less sensitive to changes in \(\rho\) in the low homophily area. This insensitivity has a significant impact on the 3 zones when \(\rho\) is close to \(0\) and a trivial effect when \(\rho\) is close to \(1\), because the magnitude of the NVR is then too small. Figure 3: Comparison of CSBM-H with \(\sigma_{0}^{2}=1,\sigma_{1}^{2}=5\). **Increase the Node Degree of the High-variation Class (\(d_{0}=5,d_{1}=25\))** From Figure 4, it can be observed that as the node degree of the high-variation class increases, the PBE and \(D_{\text{NGJ}}\) curves of the FP and HP filters stay almost the same, while the curves of the LP filter go down by a large margin. This leads to a substantial expansion of the LP zone and a shrinkage of the FP and HP zones. This is mainly due to the decrease of the ENND of the LP filter; the decrease of its NVR in the low homophily area also plays an important role. **Increase the Node Degree of the Low-variation Class (\(d_{0}=25,d_{1}=5\))** From Figure 5, we have a similar observation as when we increase the node degree of the high-variation class. The difference is that the expansion of the LP zone and the shrinkage of the FP and HP zones are not as significant as before. From \(\tilde{\sigma}_{0}^{2},\ \tilde{\sigma}_{1}^{2}\) we can see that increasing the node degree can help the LP filter reduce the variances of the features so that the ENND decreases, especially for the high-variation class, while the HP filter is less sensitive to the change of variances and node degree. ### More General Theoretical Analysis In this subsection, we aim to gain a deeper understanding of how LP and HP filters affect ND in a broader context beyond the two-normal setting. To be consistent with previous literature, we follow the assumptions outlined in [34], which are: 1. The features of node \(i\) are sampled from distribution \(\mathcal{F}_{z_{i}}\), _i.e._, \(\mathbf{x}_{i}\sim\mathcal{F}_{z_{i}}\), with mean \(\mathbf{\mu}_{z_{i}}\in\mathbb{R}^{F_{h}}\); 2. Dimensions of \(\mathbf{x}_{i}\) are independent of each other; 3. Each dimension of the feature \(\mathbf{x}_{i}\) is bounded, _i.e._, \(a\leq\mathbf{x}_{i,k}\leq b\); 4. For node \(i\), the labels of its neighbors are independently sampled from the neighborhood distribution \(\mathcal{D}_{z_{i}}\), repeated \(d_{i}\) times. We refer to a graph that follows the above assumptions as \(\mathcal{G}=\left\{\mathcal{V},\mathcal{E},\left\{\mathcal{F}_{c},c\in\mathcal{C}\right\},\left\{\mathcal{D}_{c},c\in\mathcal{C}\right\}\right\},\mathcal{C}=\left\{1,\ldots,C\right\}\), and \((b-a)^{2}\) reflects how much the features vary. The authors in [34] analyze the distance between the aggregated node embedding and its expectation, _i.e._, \(\left\|\mathbf{h}_{i}-\mathbb{E}(\mathbf{h}_{i})\right\|_{2}\), which only considers the intra-class ND and has been shown to be inadequate for a comprehensive understanding of ND. Instead, we investigate **how significantly the intra-class embedding distance is smaller than the inter-class embedding distance** in the following theorem, which is a better way to understand ND. 
**Theorem 2**.: Suppose a graph \(\mathcal{G}=\left\{\mathcal{V},\mathcal{E},\left\{\mathcal{F}_{c},c\in\mathcal{C}\right\},\left\{\mathcal{D}_{c},c\in\mathcal{C}\right\}\right\}\) meets all the above assumptions (1-4). For nodes \(i,j,v\in\mathcal{V}\), suppose \(z_{i}\neq z_{j}\) and \(z_{i}=z_{v}\). Then for constants \(t_{x},t_{h},t_{\text{HP}}\) that satisfy \(t_{x}\geq\sqrt{F_{h}}D_{x}(i,j),\ t_{h}\geq\sqrt{F_{h}}D_{h}(i,j),\ t_{\text{HP}}\geq\sqrt{F_{h}}D_{\text{HP}}(i,j)\), we have \[\mathbb{P}\left(\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|_{2}\geq\left\|\mathbf{x}_{i}-\mathbf{x}_{v}\right\|_{2}+t_{x}\right)\leq 2F_{h}\exp\left(-\frac{(D_{x}(v,j)-\frac{t_{x}}{\sqrt{F_{h}}})^{2}}{V_{x}(v,j)}\right),\] \[\mathbb{P}(\left\|\mathbf{h}_{i}-\mathbf{h}_{j}\right\|_{2}\geq\left\|\mathbf{h}_{i}-\mathbf{h}_{v}\right\|_{2}+t_{h})\leq 2F_{h}\exp\left(-\frac{(D_{h}(v,j)-\frac{t_{h}}{\sqrt{F_{h}}})^{2}}{V_{h}(v,j)}\right), \tag{6}\] Figure 4: Comparison of CSBM-H with the \(d_{0}=5,d_{1}=25\) setup. Figure 5: Comparison of CSBM-H with the \(d_{0}=25,d_{1}=5\) setup. where \(D_{x}(v,j)=\left\|\mathbf{\mu}_{z_{v}}-\mathbf{\mu}_{z_{j}}\right\|_{2},\ V_{x}(v,j)=(b-a)^{2},\ D_{h}(v,j)=\left\|\hat{\mathbf{\mu}}_{z_{v}}-\hat{\mathbf{\mu}}_{z_{j}}\right\|_{2},\ V_{h}(v,j)=\left(\frac{1}{2d_{v}}+\frac{1}{2d_{j}}\right)(b-a)^{2},\) \[D_{\text{HP}}(v,j)=\left\|\mathbf{\mu}_{z_{v}}-\hat{\mathbf{\mu}}_{z_{v}}-\left(\mathbf{\mu}_{z_{j}}-\hat{\mathbf{\mu}}_{z_{j}}\right)\right\|_{2},\ V_{\text{HP}}(v,j)=\left(1+\frac{1}{2d_{v}}+\frac{1}{2d_{j}}\right)(b-a)^{2},\ \hat{\mathbf{\mu}}_{z_{v}}=\sum_{u\in\mathcal{N}_{v}}\mathbb{E}_{z_{u}\sim\mathcal{D}_{z_{v}}}\left[\frac{1}{d_{v}}\mathbf{\mu}_{z_{u}}\right].\] Proof.: See Appendix B. We can see that the probability upper bound mainly depends on a distance term (inter-class ND) and a normalized variance term (intra-class ND). The normalized variance term of the HP filter is less sensitive to changes of node degree than that of the LP filter, because there is an additional 1 in the constant term. Moreover, we show that the distance term of the HP filter actually depends on the **relative center distance**, which is a novel discovery. As shown in Figure 6, when homophily decreases, the aggregated centers move away from the original centers, and the relative center distance (purple) gets larger, which means the embedding distance of nodes from different classes has a larger probability of being big. This explains how the HP filter works for some heterophily cases. Overall, in a more general setting with weaker assumptions, we can see that ND is also described by the intra- and inter-class ND terms rather than the intra-class ND only, which is consistent with CSBM-H. ## 4 Empirical Study of Node Distinguishability Besides the theoretical analysis, in this section we conduct experiments to verify whether the effect of homophily on the performance of GNNs really relates to its effect on ND. If a strong relation can be verified, then it indicates that we can design new ND-based performance metrics, beyond homophily metrics, to evaluate the superiority and inferiority of G-aware models against their coupled G-agnostic models without training, which saves time and computational costs. ### Tests on Real-world Datasets To test whether "intra-class embedding distance is smaller than the inter-class embedding distance" strongly relates to the superiority of G-aware models over their coupled G-agnostic models in practice, we conduct the following hypothesis testing 6. 
Footnote 6: [29] also conduct hypothesis testing to find out when to use GNNs for node classification, but they test the differences between connected nodes and unconnected nodes instead of intra- and inter-class nodes. **Experimental Setup** We first train two G-aware models, GCN and SGC-1, and their coupled G-agnostic models, MLP-2 and MLP-1, with the fine-tuned hyperparameters provided by [31]. For each trained model, we calculate the pairwise Euclidean distance of the node embeddings in the output layer. Next, we compute the proportion of nodes whose intra-class node distance is significantly smaller than their inter-class node distance 7, _e.g.,_ we obtain Prop(GCN) for GCN. 
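A sketch of one way to compute such a per-node proportion from output-layer embeddings (a per-node one-sided Welch t-test; the exact significance test used in the experiments is an assumption here):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import ttest_ind

def prop(emb: np.ndarray, z: np.ndarray, alpha: float = 0.05) -> float:
    """Fraction of nodes whose distances to intra-class nodes are
    significantly smaller than their distances to inter-class nodes."""
    D, n = cdist(emb, emb), len(z)
    hits = 0
    for v in range(n):
        intra = D[v, (z == z[v]) & (np.arange(n) != v)]
        inter = D[v, z != z[v]]
        if intra.size == 0 or inter.size == 0:
            continue
        t = ttest_ind(intra, inter, equal_var=False, alternative="less")
        hits += t.pvalue < alpha
    return hits / n
```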
Then, for the given classifier, we compute the p-value of the following hypothesis testing, \[\text{H}_{0}:\text{Acc}(\text{Classifier}(H))=\text{Acc}(\text{Classifier}(X ));\ \text{H}_{1}:\text{Acc}(\text{Classifier}(H))<\text{Acc}(\text{Classifier}(X )).\] The p-values can provide a statistical threshold value, such as 0.05, to indicate whether the \(H\) is significantly better than \(X\) for node classification. As seen in Table 1, KR and GNB based metrics significantly outperform the existing homophily metrics, reducing the errors from at least \(5\) down to just \(1\) out of 18 cases. Besides, we only need a small set of the labels to calculate the p-value, which makes it better for sparse label scenario. Table 2 summarizes its advantages over the existing metrics. (See Appendix G for more details on classifier-based performance metrics, experiments on synthetic datasets, more detailed comparisons on small-scale and large-scale datasets, results for symmetric renormalized affinity matrix and running time.) ## 5 Conclusions In this paper, we provide a complete understanding of homophily by studying intra- and inter-class ND together. To theoretically investigate ND, we study the PBE and \(D_{\text{NGJ}}\) of the proposed CSBM-H and analyze how class variances and node degree will influence the PBE and \(D_{\text{NGJ}}\) curves and 3 zones of the original, LP and HP filtered features. Empirically, through hypothesis testing, we corroborate that the performance of GNNs versus NNs is closely related to whether intra-class node embedding "distance" is smaller than inter-class node embedding "distance". We find that the p-value is a much more effective performance metric beyond homophily metrics on revealing the advantage and disadvantage of GNNs. Based on this observation, we propose classifier-based performance metric, which is a non-linear feature-based metric and can provide statistical threshold value. \begin{table} \begin{tabular}{l|c c c c} \hline \hline \begin{tabular}{c} Performance \\ Metrics \\ \end{tabular} & \begin{tabular}{c} Linear or \\ Non-linear \\ \end{tabular} & \begin{tabular}{c} Feature \\ Dependency \\ \end{tabular} & \begin{tabular}{c} Sparse \\ Labels \\ \end{tabular} & \begin{tabular}{c} Statistical \\ Threshold \\ \end{tabular} \\ \hline \(\text{H}_{\text{data}}\) & linear & \(\mathcal{K}\) & \(\mathcal{K}\) & \(\mathcal{K}\) \\ \(\text{H}_{\text{data}}\) & linear & \(\mathcal{K}\) & \(\mathcal{K}\) & \(\mathcal{K}\) \\ \(\text{H}_{\text{data}}\) & linear & \(\mathcal{K}\) & \(\mathcal{K}\) & \(\mathcal{K}\) \\ \(\text{H}_{\text{data}}\) & linear & \(\mathcal{K}\) & \(\mathcal{V}\) & \(\mathcal{K}\) \\ \(\text{H}_{\text{data}}\) & linear & \(\mathcal{V}\) & \(\mathcal{V}\) & \(\mathcal{K}\) \\ Classifier & both & \(\mathcal{V}\) & \(\mathcal{V}\) & \(\mathcal{V}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Property comparisons of performance metrics
2308.13816
Homological Convolutional Neural Networks
Deep learning methods have demonstrated outstanding performances on classification and regression tasks on homogeneous data types (e.g., image, audio, and text data). However, tabular data still pose a challenge, with classic machine learning approaches often being computationally cheaper than, and equally effective as, increasingly complex deep learning architectures. The challenge arises from the fact that, in tabular data, the correlation among features is weaker than that arising from spatial or semantic relationships in images or natural language, and the dependency structures need to be modeled without any prior information. In this work, we propose a novel deep learning architecture that exploits the data's structural organization through topologically constrained network representations to gain relational information from sparse tabular inputs. The resulting model leverages the power of convolution and is centered on a limited number of concepts from network topology to guarantee: (i) a data-centric and deterministic building pipeline; (ii) a high level of interpretability over the inference process; and (iii) adequate room for scalability. We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models, demonstrating that our approach reaches state-of-the-art performances on these challenging datasets. The code to reproduce all our experiments is provided at https://github.com/FinancialComputingUCL/HomologicalCNN.
Antonio Briola, Yuanrong Wang, Silvia Bartolucci, Tomaso Aste
2023-08-26T08:48:51Z
http://arxiv.org/abs/2308.13816v2
# Homological Convolutional Neural Networks ###### Abstract Deep learning methods have demonstrated outstanding performances on classification and regression tasks on homogeneous data types (e.g., image, audio, and text data). However, tabular data still pose a challenge, with classic machine learning approaches often being computationally cheaper than, and equally effective as, increasingly complex deep learning architectures. The challenge arises from the fact that, in tabular data, the correlation among features is weaker than that arising from spatial or semantic relationships in images or natural language, and the dependency structures need to be modeled without any prior information. In this work, we propose a novel deep-learning architecture that exploits the data's structural organization through topologically constrained network representations to gain spatial information from sparse tabular data. The resulting model leverages the power of convolutions and is centered on a limited number of concepts from network topology to guarantee (i) a data-centric, deterministic building pipeline; (ii) a high level of interpretability over the inference process; and (iii) adequate room for scalability. We test our model on \(18\) benchmark datasets against \(5\) classic machine learning and \(3\) deep learning models, demonstrating that our approach reaches state-of-the-art performances on these challenging datasets. The code to reproduce all our experiments is provided at [https://github.com/FinancialComputingUCL/HomologicalCNN](https://github.com/FinancialComputingUCL/HomologicalCNN). ## 1 Introduction We are experiencing tremendous and inexorable progress in the field of deep learning. Such progress has been catalyzed by the availability of increasing computational resources and ever larger datasets. The areas of success of deep learning are heterogeneous; however, the three application domains where superior performances have been detected are the ones involving the usage of image [1; 2], audio [3; 4] and text [5; 6; 7] data. Despite their inherent diversity, these data types share a fundamental characteristic: they exhibit homogeneity, with notable inter-feature correlations and evident spatial or semantic relationships. On the contrary, tabular data represent the "unconquered castle" of deep neural network models [8]. Tabular data are heterogeneous and present a mixture of continuous, categorical, and ordinal values, which can be either independent or correlated. They are characterized by the absence of any inherent positional information, and tabular models have to handle features from multiple discrete and continuous distributions. Tabular data are the most common data format and are ubiquitous in many crucial applications, such as medicine [9, 10, 11, 12], finance [13, 14, 15, 16], recommendation systems [17, 18, 19, 20], cybersecurity [21, 22], anomaly detection [23, 24, 25, 26] and so forth. During the last decade, traditional machine learning methods dominated tabular data modeling, and nowadays tree ensemble algorithms (e.g. XGBoost, LightGBM, CatBoost) are considered the recommended option to solve real-life problems of this kind [27, 28, 29]. In this paper, we introduce a novel deep learning architecture for tabular numerical data classification, which we name the "Homological Convolutional Neural Network" (HCNN). 
The building process is entirely centered on the structural organization of input data, obtained through network representations that allow gaining spatial information from tabular data. A network (or graph) represents components of a system as nodes (or vertices) and interactions among them as links (or edges). The number of nodes defines the size of the network, and the number of links determines the network's sparsity (or, conversely, density). Reversible interactions between components are represented through undirected links, while non-reversible interactions are represented as directed links [30]. In this research work, we exploit a class of information filtering networks [31], namely the Triangulated Maximally Filtered Graph [32], to model the inner sparsity of tabular data and obtain a geometrical organization of input features. The choice of the network representation is not binding, even if limited to the family of so-called simplicial complexes [33]. Simplicial complexes are generalized network structures that allow capturing many-body interactions between the constituents of complex systems [34]. They are formed by sets of simplices such as nodes, links, triangles, tetrahedra, and so on, glued to each other along their faces, forming higher-order graphs [33]. These graphs connect not only vertices (\(0\)-dimensional simplices) with edges (\(1\)-dimensional simplices) but also higher-order simplices (e.g. triangles, \(2\)-dimensional simplices, and tetrahedra, \(3\)-dimensional simplices). The study of networks in terms of the relationship between structures at different dimensionality is a form of "homology", and HCNNs take higher-order interactions in the data dependency structure into account as homological priors. During the neural network's building process, given a proper network representation of input data, we isolate all the simplicial structures with dimension \(\geq 1\) and we process them at two granularity levels: (i) within each single representative of a simplicial structure (i.e. a convolution over each edge, triangle, or tetrahedron); and (ii) across all representatives of each simplicial structure (i.e. a convolution over all the transformed edges, all the transformed triangles, and all the transformed tetrahedra). In doing so, we capture both the simplicial and the homological structure of input data, also searching for non-trivial structural data relationships. This methodology allows us to find localities in tabular data and leverages the power of Convolutional Neural Networks (CNNs) to effectively model their sparsity. Compared to its state-of-the-art (SOTA) machine learning alternatives, our method (i) maintains an equivalent level of explainability; (ii) has a comparatively low level of computational complexity; and (iii) can be scaled to a higher number of learning tasks (e.g. time series forecasting) without structural changes. Compared to its SOTA deep-learning alternatives, our method (i) is data-centric (i.e. the architecture depends on the data describing the system under analysis); (ii) presents an algorithmic, data-driven building pipeline; and (iii) has a lower complexity, replacing complex architectural modules (e.g. attention-based mechanisms) with elementary computational units (e.g. convolutional layers). We provide a comparison between HCNNs, simple-to-advanced machine learning algorithms, and SOTA deep tabular architectures using a heterogeneous battery of small-sized numerical benchmark datasets.
We observe that HCNN always ties SOTA performances on the proposed tasks, providing, at the same time, structural and computational advantages. The rest of the paper is organized as follows. In Section 2 we review the previous research on information filtering networks, sparsity handling in deep learning, and automated learning for tabular data. In Section 3.1 we discuss the data acquisition and transformation pipeline. In Section 3.2 we introduce the basic concepts about network science and information filtering networks. In Section 3.3 we provide the background for Homological Neural Networks. In Section 3.4 we present the working mechanism of Homological Convolutional Neural Networks. In Section 3.5, we provide the mathematical justification for the proposed methodology. In Section 4, we explore the effectiveness of Homological Convolutional Neural Networks compared to SOTA machine learning and deep learning models. Finally, in Section 5, we interpret our results and discuss future research lines in this area. ## 2 Related Work **Information Filtering Networks**. The search for increasingly sophisticated sparse network representations of heterogeneous data types is an active area of research. During the past three decades, Information Filtering Networks (IFNs) [35; 36; 31; 32; 37] emerged as an effective tool in this research field. Their effectiveness has been demonstrated in many application domains, including but not limited to finance [38; 39; 40; 41; 42], psychology [43; 44], medicine [45; 46] and biology [47; 48]. However, in many cases, the power of IFNs has been limited to descriptive tasks. More recently, considerable efforts have been spent to make them active modeling tools. In this sense, the work by [49] suggests using IFNs to perform topological regularization in multivariate probabilistic modeling with both linear and non-linear multivariate probability distributions; the work by [50] proposes a new unsupervised feature selection algorithm entirely based on the study of the relative position of nodes inside the above-mentioned constrained network representations; while the work by [39] suggests a first integration of IFNs into articulated pipelines also involving complex deep learning architectures. The latest milestone is represented by the introduction of Homological Neural Networks (HNN) [51], where the authors propose a pioneering methodology to extract a versatile computational unit directly from the IFNs' network representation. **Sparsity in Deep Learning**. Recent advances in many deep-learning-related fields [52; 53; 54; 55] came with an increasing demand for computational resources. The growing energy costs have driven the community to search for new models of reduced size, which heavily rely on selective pruning of redundant connections. Indeed, sparse neural networks have been found to generalize just as well as (sometimes even better than) the original dense networks, while reducing the memory footprint and shortening training time [56]. Even if large, the landscape of approaches to sparsify deep neural network models can be schematically organized into six main categories: (i) down-sizing models [57; 58; 59]; (ii) operator factorization [60; 61; 62]; (iii) value quantization [63; 64; 65]; (iv) value compression [66; 67]; (v) parameter sharing [68]; and (vi) sparsification [69; 70; 71; 72]. All these approaches intend sparsity as a concept referring to the proportion of neural network weights that are zero-valued.
Higher sparsity corresponds to fewer weights and smaller computational and storage requirements. Based on this, the weight-pruning phase can occur (i) at initialization [73]; (ii) after training [74]; or (iii) while training [75]. The current research work introduces a unique approach to neural network sparsification, which emphasizes the pruning of weak relationships during the data modeling stage. This approach involves constructing a lightweight neural network architecture that adapts its structure to a sparse representation of the input data. In this sense, the sparsification process occurs before the initialization stage. The solutions most similar to ours are Simplicial NNs [76] and Simplicial CNNs [77]. Indeed, these architectures constitute the very first attempt to exploit the topological properties of sparse graph representations to capture higher-order data relationships. Despite their novelty, the design of these neural network architectures limits them to pre-designed network data, without the possibility of easily scaling to more general data types (e.g., tabular data). **Tabular Learning**. Traditionally, the field of tabular data learning has been widely dominated by classic machine learning methods. Among them, ensembles of decision trees (DTs), such as GBDT (Gradient Boosting Decision Tree) [27; 78], represent the top choice for both practitioners and academics. The prominent strength of DTs is the efficient picking of global features with a high rate of statistical information gain [79], while their ensembling guarantees a generalized performance improvement by reducing variance [80]. GBDT is an algorithm in which new weak learners (i.e. decision trees) are created from previous models' residuals and then combined to make the final prediction. Several GBDT variations exist, including XGBoost [78], LightGBM [81] and CatBoost [28]. Extended studies demonstrated how, despite their differences, the performance of these algorithms on many tasks is statistically equivalent [28]. In the last decade, several studies proposed novel deep learning architectures explicitly designed to solve tabular problems [82; 83; 84; 85; 86; 87; 80; 88]. These models can be roughly categorized into five groups: (i) differentiable trees; (ii) attention-based models; (iii) explicit modeling of multiplicative interactions; (iv) regularization methods; and (v) convolution-based approaches. Differentiable tree models leverage the power of classical decision trees, proposing smoother decision functions that make them differentiable [83; 86; 87]. Attention-based models exploit the power of attention mechanisms [88; 89] by integrating them into tabular deep learning architectures [80; 84; 90; 91]. Methods that explicitly model multiplicative interactions try to incorporate feature products into Multilayer Perceptron models [92; 93; 94]. Regularization methods leverage large-scale hyper-parameter tuning schemes to learn a "regularization strength" for every neural weight [95; 8; 96]. Finally, convolution-based approaches leverage the power of CNNs in tabular learning problems.
The two most significant attempts in this sense are the work by [97], where tabular data are reshaped directly into a multi-channel image format, letting the model learn the correct feature sorting through back-propagation, and the work by [98], where tabular data are transformed into images by minimizing the difference between the ranking of distances between features and the ranking of distances between their assigned pixels in the image. Despite these attempts, there is still an active debate over whether or not deep neural networks generally outperform gradient-boosted decision trees on tabular data, with multiple works arguing either for [80; 95; 87; 99] or against [11; 100; 101; 29] neural networks [102]. ## 3 Data and Methods ### Data To provide a fair comparison between HCNN and SOTA models, we use a collection of \(18\) tabular numerical datasets (see Appendix A) from the open-source "OpenML-CC18" benchmark suite [103]. Following the selection criteria in [104], all the datasets contain up to \(2000\) samples, \(100\) features, and \(10\) classes. A detailed overview of the properties of this first set of data is provided in Appendix A. A training/validation/test split is not provided. For all the datasets, \(50\%\) of the raw dataset is used as training set, \(25\%\) as validation set, and the remaining \(25\%\) as test set. To prove the statistical significance of the results presented in the current research work, all the analyses are repeated on \(10\) different combinations of training/validation/test splits. The reproducibility of results is guaranteed by a rigorous usage of seeds (i.e. \([12,190,903,7687,8279,9433,12555,22443,67822,9822127]\)). Following [101], we focus on small datasets for two main reasons: (i) small datasets are often encountered in real-world applications [105] and (ii) existing deep learning methods are limited in this domain. It is worth noting that, differently from other deep learning architectures (e.g. [104; 80]), the applicability of HCNNs is not limited to small tabular data problems and can easily scale to medium-to-large problems. To provide evidence of this, we use a collection of \(9\) numerical tabular datasets (see Appendix A) from the "OpenML tabular benchmark numerical classification" suite [101]. All these datasets violate at least one of the selection criteria in [104] (i.e. they are characterized by a number of samples \(>2000\) or by a number of features \(>100\)). A more detailed overview of the properties of this second set of data is provided in Appendix A. ### Information Filtering Networks The HCNN's building process is entirely centered on the structural organization of data emerging from the underlying network representation. The choice of the network representation is not binding, even if limited to the family of simplicial complexes [33]. In this paper, we exploit the power of a class of information filtering networks (IFNs) [35; 36; 31; 32; 37], namely the Triangulated Maximally Filtered Graph (TMFG) [32], to model the inner sparsity of tabular data and obtain a structural organization of input features. IFNs are an effective tool to represent and model dependency structures among variables characterizing complex systems, while imposing topological constraints (e.g. being a tree or a planar graph) and optimizing specific global properties (e.g. the likelihood) [49].
Starting from a system characterized by \(n\) features and \(T\) samples, arranged in a matrix \(\mathbf{X}\), this methodology builds an \(n\times n\) similarity matrix \(\hat{\mathbf{C}}\), which is filtered to obtain a sparse adjacency matrix \(\mathbf{A}\) retaining only the most structurally significant relationships among variables. The introduction of the TMFG is a milestone in the IFNs' research area. The building process of the TMFG (see Appendix B) is based on a simple topological move that preserves planarity (i.e. a graph is planar if it can be embedded on the surface of a sphere without edges crossing): it adds one node to the center of a three-node clique by using a score function that maximizes the sum of the weights of the three edges connecting the existing vertices. This addition transforms three-node cliques (i.e. triangles) into four-node cliques (i.e. tetrahedra) characterized by a chord (i.e. an edge that is not part of the enclosing cycle but connects two of its vertices), forming two triangles and generating a chordal network (a graph is said to be chordal if all cycles made of four or more vertices have a chord, reducing the cycle to a set of triangles [106]) [43]. As with all chordal graphs, the TMFG fulfills the independence assumptions of Markov and Bayesian networks [107; 43]. It has \(n\) nodes, where \(n\) is the cardinality of the set of input features, and \(3n-6\) edges. A nested hierarchy emerges from its cliques [108]: compared to the fully connected graph represented by \(\hat{\mathbf{C}}\), the density of \(\mathbf{A}\) is reduced in a deterministic manner while the global hierarchical structure of the original network is retained. The TMFG presents three main advantages: (i) it can be used to generate sparse probabilistic models as a form of topological regularization [49]; (ii) it is computationally efficient; and (iii) it allows finding maximal cliques in polynomial time, although the problem is NP-complete for general graphs. On the other hand, the two main limitations of chordal networks are that (i) they may add unnecessary edges to satisfy the property of chordality; and (ii) their building cost can vary based on the chosen optimization function. Working with numerical-only tabular data, in the current paper, \(\hat{\mathbf{C}}\) corresponds to a matrix of squared correlation coefficients. It is worth noting that, when characterizing cross-correlations, one could face statistical uncertainty for many reasons, including, but not limited to, the noise in the data and the intrinsic complexity of interactions among the variables of the system. Attempts to overcome these problems may require filtering statistically reliable information out of the correlation matrix. Spectral analysis [109; 110; 111], clustering [112] and graph theory [113] demonstrated to be fruitful approaches to efficiently handle this problem [114; 35; 115]. In line with the work by [116], in the current paper we use the bootstrapping approach [117; 118]. This technique requires building a number \(r\) of replicas \(X_{i}^{*}\), \(i\in 1,\ldots,r\) of the data matrix \(\mathbf{X}\). Each replica \(X_{i}^{*}\) is constructed by randomly selecting \(T\) rows from the matrix \(\mathbf{X}\), allowing for repetitions. For each replica \(X_{i}^{*}\), the correlation matrix \(\hat{\mathbf{C}}_{i}^{*}\) is then computed.
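As a concrete illustration, here is a minimal NumPy sketch of the replica construction just described; the number of replicas \(r\) and the seed are illustrative choices, not the paper's settings:

```python
import numpy as np

def bootstrap_squared_correlations(X: np.ndarray, r: int = 100, seed: int = 0):
    """Replica-wise squared-correlation matrices C*_i from r bootstrap
    replicas of the (T x n) data matrix X."""
    rng = np.random.default_rng(seed)
    T = X.shape[0]
    replicas = []
    for _ in range(r):
        rows = rng.integers(0, T, size=T)       # select T rows, with repetition
        C = np.corrcoef(X[rows], rowvar=False)  # n x n Pearson correlations
        replicas.append(C ** 2)                 # squared coefficients
    return np.stack(replicas)

X = np.random.default_rng(1).normal(size=(200, 12))     # toy (T=200, n=12) data
C_hat = bootstrap_squared_correlations(X).mean(axis=0)  # entry-wise mean
```

The entry-wise mean `C_hat` is the input of the MeanSimMatrix configuration described next; the alternative configuration instead builds one TMFG per replica and aggregates them by link frequency.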
We highlight that (i) the bootstrap approach does not require knowledge of the data distribution and (ii) it is particularly useful when dealing with high-dimensional systems, where it is difficult to infer the joint probability distribution from data. Once the replica-dependent correlation matrices are obtained, we treat them in two different ways:
* We compute \(\hat{\mathbf{C}}\) as the entry-wise mean of the correlation matrices \(\hat{\mathbf{C}}_{i\in 1,\ldots,r}^{*}\).
* Based on each replica-dependent correlation matrix \(\hat{\mathbf{C}}_{i}^{*}\), we compute a TMFG\({}_{i}^{*}\), and we obtain the final TMFG by taking only the links that appear across all the TMFGs with a frequency higher than a specified threshold.
In the rest of the paper, we refer to the first configuration as MeanSimMatrix and to the second one as BootstrapNet. These two approaches lead to widely different results. In the former case, the final TMFG will be a sparse, connected graph that necessarily maintains all the topological characterization of the family of IFNs it belongs to (i.e. planarity and chordality). In the latter case, instead, there is no guarantee on the connectedness of the graph. Indeed, the chosen threshold could lead to disconnected components and to the removal of edges assuring the graph's chordality. ### Homological Neural Networks The main idea behind IFNs is to explicitly model higher-order sub-structures, which are crucial for the representation of the underlying system's interactions. In the case of the TMFG, a simple higher-order representation can be obtained by adding triplets (triangles) and quadruplets (tetrahedra) to the set of nodes in the network. However, the associated higher-order graph is hard to handle both visually and computationally. As a solution to this problem, in the work by [51], the authors start from a layered representation (i.e. the Hasse diagram), which explicitly takes into account higher-order sub-structures and their interconnections, and show how to easily convert this representation into a stand-alone computational unit named Homological Neural Network (HNN). Specifically, to represent the complexity of a higher-order network (i.e. a TMFG), the authors propose to adopt a layered structure. As shown in Figure 1, nodes in layer \(d\) represent \(d\)-dimensional simplices (i.e. \(0\)-dimensional simplices are nodes, \(1\)-dimensional simplices are edges, \(2\)-dimensional simplices are triangles, \(3\)-dimensional simplices are tetrahedra). The structure starts with the vertices in layer \(0\); couples of vertices connect to edges, which are represented in layer \(1\); edges connect to triangles, which are represented in layer \(2\); triangles connect to tetrahedra, which are represented in layer \(3\), and so on. The resulting deep neural network is a sparse Multilayer Perceptron (MLP) with a one-to-one correspondence with the original network representation, explicitly retaining the simplices and their interconnections in the structure. All information about the network at all dimensions is explicitly encoded in this representation, including elements such as maximal cliques, separators, and their multiplicity. ### Homological Convolutional Neural Networks Despite the undeniable advantages deriving from the sparse structure provided by HNNs, the results in [51] suggest that the choice of the Multilayer Perceptron as the deep learning architecture to process the information encoded in the underlying network representation is sub-optimal (especially for tabular data problems).
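A minimal sketch of how the simplicial families that feed an HNN/HCNN can be enumerated from a network representation (e.g. a TMFG, possibly thresholded); `networkx` is assumed, and this is our own helper, not the authors' code:

```python
import networkx as nx
import numpy as np

def simplicial_families(adjacency: np.ndarray):
    """Group the maximal cliques of a graph by size, yielding the index lists
    behind the E (edges), R (triangles), and H (tetrahedra) inputs."""
    g = nx.from_numpy_array(adjacency)
    cliques = [sorted(c) for c in nx.find_cliques(g)]  # maximal cliques
    E = [c for c in cliques if len(c) == 2]            # 1-dimensional simplices
    R = [c for c in cliques if len(c) == 3]            # 2-dimensional simplices
    H = [c for c in cliques if len(c) == 4]            # 3-dimensional simplices
    return E, R, H
```

On chordal graphs this enumeration runs in polynomial time, which is one of the computational advantages of the TMFG discussed above.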
In addition to this sub-optimality, HNNs impose the chordality of the underlying network, and the building process of the deep neural network architecture implies the usage of non-native components, inducing a substantial computational overhead. In this research work, we propose an alternative computational architecture that aims to solve these issues, and we name it "Homological Convolutional Neural Network" (HCNN). Given the adjacency matrix \(\mathbf{A}\) constructed using IFNs (see Section 3.2), to model the complexity embedded in the network representation, we isolate \(3\) different simplicial families: (i) maximal cliques of size \(4\) (i.e. the \(3\)-dimensional simplices or tetrahedra); (ii) maximal cliques of size \(3\) (i.e. the \(2\)-dimensional simplices or triangles); and (iii) maximal cliques of size \(2\) (i.e. the \(1\)-dimensional simplices or edges). When using the TMFG as network representation, these \(3\) structures are sufficient to capture all the higher-order dependency structures characterizing the underlying system. Each input of the novel deep learning architecture is hence represented by \(3\) different \(1\)-\(d\) vectors that we call \(H\) (i.e. realizations of the input features belonging to at least one tetrahedron), \(R\) (i.e. realizations of the input features belonging to at least one triangle), and \(E\) (i.e. realizations of the input features belonging to at least one edge). As a first step, in HCNN, we perform a \(1\)-\(d\) convolution across each set of features defining a realization of a simplicial family. We use a kernel size and a stride equal to \(d+1\) (i.e. the dimension of the simplicial structure itself), and a number of filters \(\zeta\in\{4,8,12,16\}\). This means that, given the three input vectors \(H\), \(R\) and \(E\) representing the three simplicial families characterizing a TMFG, we compute a \(1\)-\(d\) convolution with a kernel size and a stride of \(2\), \(3\), and \(4\) for edges, triangles, and tetrahedra, respectively. The usage of the stride is necessary to prevent "parameter sharing". While generally considered an attractive property, as fewer parameters are estimated and overfitting is avoided, in our case parameter sharing leads to inconsistencies. Indeed, geometrical structures belonging to the same simplicial family (i.e. edges, triangles, and tetrahedra) but independent in the hierarchical dependency structure of the system would share parameters, which is obviously wrong.

Figure 1: Pictorial representation of an HNN and its building pipeline. From left to right, (i) we start from a chordal graph representing the dependency structures of features in the underlying system, (ii) we re-arrange the network's representation to highlight the underlying simplicial complex structures (i.e. edges, triangles, tetrahedra), and (iii) we finally report a layered representation, which explicitly takes into account higher-order sub-structures and their interconnections, and can be easily converted into a computational unit (i.e. a sparse MLP).

After the \(1^{st}\)-level convolutions, which extract element-wise information from geometrical structures belonging to the same simplicial family, we apply \(2^{nd}\)-level convolutions extracting homological insights. Indeed, the convolution is applied to the output of the first layer, extracting information related to entities belonging to the same simplicial family, which are not necessarily related in the original network representation.
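A minimal PyTorch sketch of the \(1^{st}\)-level convolutions just described (sizes illustrative). Note that a plain `Conv1d` reuses the same \(\zeta\) filters at every window; the stride guarantees that each window covers exactly one simplex, while fully independent per-simplex weights would require grouped or locally connected variants — a detail the description above leaves open:

```python
import torch
import torch.nn as nn

# Each simplicial family enters as a flat 1-d signal obtained by concatenating
# the feature realizations of its simplices; kernel size == stride == d + 1.
zeta = 8                                              # filters, from {4, 8, 12, 16}
conv_e = nn.Conv1d(1, zeta, kernel_size=2, stride=2)  # edges (d = 1)
conv_r = nn.Conv1d(1, zeta, kernel_size=3, stride=3)  # triangles (d = 2)
conv_h = nn.Conv1d(1, zeta, kernel_size=4, stride=4)  # tetrahedra (d = 3)

E = torch.randn(32, 1, 2 * 10)  # batch of 32 samples, 10 edges -> 20 values
out_e = conv_e(E)               # (32, zeta, 10): one output column per edge
```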
For these \(2^{nd}\)-level convolutions, we use a kernel with a size equal to the cardinality of the simplicial family (i.e. \(|E|\), \(|R|\), \(|H|\), respectively) and a number of filters \(\xi\in\{32,36,\ldots,64\}\) (i.e. from \(32\) to \(64\) in steps of \(4\)). The final layer of the HCNN architecture is linear and maps the outputs of the \(2^{nd}\)-level convolutions to the output. It is worth noting that each level of convolution is followed by a regularization layer with a dropout rate equal to \(0.25\), and the non-linear activation function is the classic Rectified Linear Unit (ReLU). Even if HNN and HCNN are built on the concept of homology and exploit it in the construction of a data-centric neural network unit, it is worth noting that their designs aim to capture different data relationships. In the case of HNN, different aggregation layers aim to capture relationships between increasingly complex geometrical structures, which are linked together through at least one edge in the network representation of the system under analysis. If two geometrical structures are not linked, then any potential relationship is missed. This architectural philosophy is maintained in HCNN and is fully captured in the \(1^{st}\) level of convolution, where we model interactions embedded in unitary geometrical structures. In doing so, we capture the information contained in all the representatives of each simplicial family, since the convolution is iterated for each size of higher-order structure. This step is highly eased if the input network is chordal. Indeed, this allows having increasingly complex structures containing all the possible substructures. The chordality property also has additional advantages: this first layer of the data-centric unit can be built in polynomial time. This property is based on the fact that the maximal cliques of a chordal graph, at every size, can be found in polynomial time, although the problem is NP-complete for general graphs [119]. In the second and third layers of aggregation, HCNNs aim to capture homological relationships characterizing the underlying system. Specifically, they allow us to overcome the limits imposed by any network structure by capturing potential hidden data dependency structures.

Figure 2: Pictorial representation of an HCNN and its building pipeline. From left to right, (i) we start from a chordal graph representing the dependency structures of features in the underlying system (the choice of the network representation is not binding), (ii) we isolate the maximal cliques corresponding to \(1\)-, \(2\)- and \(3\)-dimensional simplices (i.e. edges, triangles, tetrahedra) and we group them into \(1\)-\(d\) vectors containing features' realizations, (iii) we compute a \(1\)-\(d\) convolution which extracts simplicial-wise non-linear relationships, (iv) we compute a \(2^{nd}\)-level convolution, which operates on the output of the previous level of convolution across all the representatives of each simplicial family, extracting a first class of non-trivial homological insights, and (v) we finally apply a linear map from the \(2^{nd}\)-level convolutions to the output, extracting a second class of cross-network homological insights.
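The \(2^{nd}\)-level convolution and read-out described above can be sketched as follows (our own minimal rendering; in the full model the outputs of the three family heads are concatenated before the final linear layer):

```python
import torch
import torch.nn as nn

class HCNNFamilyHead(nn.Module):
    """2nd-level convolution for one simplicial family: consumes the
    (batch, zeta, m) output of the 1st-level convolution, where m is the
    family cardinality (e.g. m = |E|), and spans the whole family with a
    single kernel to extract cross-simplex (homological) features."""
    def __init__(self, zeta: int, m: int, xi: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(zeta, xi, kernel_size=m),  # kernel spans all m simplices
            nn.ReLU(),
            nn.Dropout(0.25),
            nn.Flatten(),                        # -> (batch, xi)
        )

    def forward(self, x):
        return self.net(x)

head = HCNNFamilyHead(zeta=8, m=10)
feats = head(torch.randn(32, 8, 10))  # (32, 32), ready for the linear output map
```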
### On the learning process of the network's representation In the problem setting described in Section 3.4, we are dealing with a computational system \(\mathcal{M}_{\mathcal{G}}\), the HCNN, which depends on a network representation \(\mathcal{G}\). To discover the best network representation, in principle, one needs to explore the ensemble of all possible networks and identify the one that makes the model perform best. This problem is known to be NP-hard [119]. However, one can restrict the search space and identify a priori the kind of optimal network by analyzing the dependency structure of the features of the system under analysis. From an information-theoretic perspective, the general problem consists in finding the multivariate probability density function with representation structure \(\mathcal{G}\), \(\hat{f}(\mathbf{X}|\mathcal{G})\), that best describes the "true" underlying distribution \(f(\mathbf{X})\) (which is unknown). To quantify the distance between a model, \(\hat{f}(\mathbf{X}|\mathcal{G})\), and the true distribution, \(f(\mathbf{X})\), one can use the Kullback-Leibler divergence [120] \[D_{KL}(f\parallel\hat{f})=\mathbb{E}(\log f(\mathbf{X}))-\mathbb{E}(\log\hat{f}(\mathbf{X}|\mathcal{G})), \tag{1}\] which must be minimized. The first term of Equation 1 is independent of the model, and therefore its value is irrelevant to the purpose of discovering the representation network. The second term, \(-\mathbb{E}(\log\hat{f}(\mathbf{X}|\mathcal{G}))\) (note the minus), instead depends on \(\mathcal{G}\) and must be minimized. This term is the estimate of the entropy of the multivariate system of variables \(\mathbf{X}\) obtained by using the model \(\hat{f}(\mathbf{X}|\mathcal{G})\): \[\hat{H}(\mathbf{X}|\mathcal{G})=-\mathbb{E}(\log\hat{f}(\mathbf{X}|\mathcal{G})) \tag{2}\] and corresponds to the so-called cross-entropy. Given that the true underlying distribution is unknown, the expectation cannot be computed exactly; however, it can be estimated with arbitrary precision using the sample mean. Such a sample mean approximates the expected value of the negative log-likelihood of the model \(\hat{f}(\mathbf{X}|\mathcal{G})\). Therefore, the construction of the representation network must aim to maximize the likelihood of the model, which is indeed a typical quantity that is maximized when training a model. The network associated with the largest model's likelihood can be constructed step-by-step by joining disconnected parts that share the largest mutual information. Indeed, in a graph, the gain achieved by joining two variables \(X_{a}\) and \(X_{b}\) is approximately given by the mutual information shared by the two variables, \(\simeq I(X_{a};X_{b})\). In turn, at the second-order approximation, the mutual information is approximated by the square of the correlation coefficient between the two variables (for Gaussian variables, \(I(X_{a};X_{b})=-\frac{1}{2}\log(1-\rho_{a,b}^{2})\simeq\frac{1}{2}\rho_{a,b}^{2}\) for small correlations). Therefore, the gain in the model's likelihood is \(I(X_{a};X_{b})\simeq\rho_{a,b}^{2}\) [121], and the TMFG construction with \(\rho^{2}\) weights implies a graph that aims to maximize the model's likelihood itself. ## 4 Experiments In this section, we compare the performance of the HCNN classifier in its MeanSimMatrix and BootstrapNet configurations (see Section 3.2) against \(8\) machine learning and deep learning SOTA classifiers under homogeneous evaluation conditions. We consider LogisticRegression, RandomForest, XGBoost, LightGBM and CatBoost as representatives of machine learning classifiers, and MLP, TabNet and TabPFN as representatives of deep learning classifiers. For each of them, the inference process is structured into two different phases: (i) the hyper-parameter search stage and (ii) the training/test stage with optimal hyper-parameters.
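The two-phase inference process can be sketched with any Tree Parzen Estimator implementation; below, Optuna is used as one such option, and `train_hcnn`, `X_train`, `y_train`, `X_val`, `y_val` are hypothetical placeholders (the weighted F1 averaging is also our assumption):

```python
import optuna
from sklearn.metrics import f1_score

def objective(trial):
    # Illustrative search space; the per-model spaces are listed in Appendix C.
    zeta = trial.suggest_categorical("zeta", [4, 8, 12, 16])
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    model = train_hcnn(X_train, y_train, zeta=zeta, lr=lr)  # hypothetical helper
    return f1_score(y_val, model.predict(X_val), average="weighted")

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=12))
study.optimize(objective, n_trials=500)  # up to 500 iterations per run
best_params = study.best_params          # used for the final training/test stage
```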
Both stages are repeated \(10\) times with fixed seeds that guarantee full reproducibility of results. For each run, we allow for a maximum of \(500\) hyper-parameter search iterations, allocating \(8\) CPUs each with \(2\)GB of memory and a time budget of \(48\) hours. Experiments are entirely run on the University College London HPC CS Cluster [122]. The hyper-parameter search phase consists of a Sequential Model-Based Optimization with the Tree Parzen Estimator [123], where we maximize the F1_score on each validation set. In Appendix C, we describe the hyper-parameters' search space for each classifier. We use three metrics to evaluate the classifiers on out-of-sample datasets: the F1_score, the Accuracy, and the Matthews Correlation Coefficient (MCC) [124; 125]. The results obtained are statistically validated using the Wilcoxon significance test, a standard metric for comparing classifiers across multiple datasets [126]. As a second stage of the analysis, we investigate the scalability of each model in tackling extensive numerical tabular classification tasks. In so doing, we use an ad-hoc suite of datasets (see Section 3.1), while maintaining the inference process described earlier in this section. A model converges (i.e. it is able to scale to larger classification problems) once it completes the learning task using the given computational resources, within the allocated time budget, for all the \(10\) seeds. ### Small tabular classification problems Table 1 reports a cross-dataset, out-of-sample comparison of the classifiers previously listed in this section. For each model, we provide (i) the average and (ii) the best/worst ranking position considering three different evaluation metrics, (iii) the average value for each evaluation metric, and (iv) the time required for the hyper-parameter tuning and for the training/test run with optimal hyper-parameters. On average, the TabPFN model occupies a ranking position higher than that of HCNN, both in its MeanSimMatrix and in its BootstrapNet configuration. However, it is worth noting that, when we evaluate models' performance through the F1_Score and the MCC (i.e. the two performance metrics that are less prone to bias induced by unbalanced datasets), the worst-case ranking position of HCNN in the MeanSimMatrix configuration is better than that of its immediate competitor (i.e. \(7\) and \(6\) for HCNN MeanSimMatrix vs \(10\) and \(8\) for TabPFN). The same happens in the case of HCNN BootstrapNet with the F1_score. These findings highlight an evident robustness of the HCNN model, which is superior not only to the TabPFN model but also to all the other deep learning and machine learning alternatives. More generally, both TabPFN and HCNN show superior performance compared to the other two deep learning models (i.e. MLP and TabNet), which occupy an average ranking position equal to \(\sim 7\) and \(\sim 9\), respectively, on all three evaluation metrics. Among machine learning models, CatBoost achieves the highest performance, with an average ranking position equal to \(\sim 4\) considering the F1_Score and the MCC, and equal to \(\sim 5\) considering the Accuracy (in this case, position number \(4\) is occupied by LogisticRegression). All these findings can be visualized in Figure 3. Specifically, the higher robustness of the HCNN model in the MeanSimMatrix configuration compared to the TabPFN model can be observed in Figure 3(a) and Figure 3(c).
They represent the ranking position of each model on each dataset using the F1_Score and the MCC as performance metrics, respectively. In the first case, we notice that the worst ranking position obtained by HCNN is \(7\), when dealing with the dataset "climate-model-simulation-crashes" (OpenML ID \(40994\)), while the one occupied by TabPFN is \(10\), with the dataset "pc_1" (OpenML ID \(1068\)). In the second case, we notice that the worst performance by HCNN has ranking \(6\), when dealing with the datasets "mfeat-karhunen" (OpenML ID \(16\)), "steel-plates-fault" (OpenML ID \(40982\)) and "climate-model-simulation-crashes" (OpenML ID \(40994\)), while the one occupied by TabPFN is \(8\), with the dataset "pc_1" (OpenML ID \(1068\)). Except for the "mfeat-karhunen" dataset (OpenML ID \(16\)), all the datasets listed before are strongly unbalanced. The models' numerical performances for each evaluation metric reinforce all the findings discussed above. It is, however, clear that the differences in performance are very small. This evidence suggests a potential statistical equivalence of the models; this hypothesis is verified through a specific statistical test discussed later in this section. The final comparison to be performed is the one related to the models' running time. In this sense, machine learning models still represent the SOTA, with CatBoost being an exception.

[Table 1: cross-dataset, out-of-sample comparison of LogisticRegression, RandomForest, XGBoost, LightGBM, CatBoost, MLP, TabNet, TabPFN, and the two HCNN configurations in terms of average and best/worst ranking, average F1_Score, Accuracy, MCC, and running times. The numerical entries of the table are not recoverable from the source.]

Among deep learning models, however, it is worth noting that HCNN has a running time that is comparable with that of TabPFN and much lower than that of the other attention-based model, TabNet. This result is relevant since it legitimizes the architecture proposed in this paper as a strong competitor of TabPFN. Indeed, the proposed architecture reaches comparable results without pre-training, with a higher level of explainability in the architectural building process, and with a much lower number of parameters. Also among deep learning models, there is an exception, represented by the MLP: its SOTA running time heavily depends on the number of layers and on the number of neurons per layer emerging from the hyper-parameter search. Figure 4 reports the relationship between the number of features and the total number of parameters in the HCNN MeanSimMatrix configuration, the relationship between the number of features and the total number of parameters in the HCNN BootstrapNet configuration, and the relationship between the difference in the number of features and the difference in the total number of parameters in the two configurations.
Looking at Figure 4(a), it is possible to conclude that a strong linear relationship exists between the number of features and the total number of parameters of the HCNN model in the MeanSimMatrix configuration. This finding was expected, since the proposed model's architecture totally depends on the complete homological structure of the underlying system. This means that each time a new feature is introduced, we could potentially observe an increase in the number of edges, triangles, and tetrahedra, which in turn determines a proportional increase in the number of parameters of the HCNN itself.

Figure 3: Out-of-sample model- and dataset-dependent average ranking considering the (a) F1_Score, (b) Accuracy, and (c) MCC evaluation metrics. This representation allows one to clearly assess the higher robustness of the HCNN model to datasets' unbalance over all its deep learning and machine learning competitors.

On this point, we need to underline that the magnitude of the slope of the regression line heavily depends on the optimal hyper-parameters describing the number of filters in the two convolutional layers. Looking at Figure 4(b), we observe again a relatively strong linear relationship between the number of features and the total number of parameters of the HCNN model in the BootstrapNet configuration. The difference in r_value between the two configurations is equal to \(0.19\) and depends on the fact that, in the second case, the optimal threshold value, which maximizes the model's performance, is different across datasets, does not depend on the number of input features, and determines an ablation of features that has no dependence on any other factor. More generally, in the BootstrapNet configuration, we observe a number of parameters that is, on average, one order of magnitude below that of the HCNN MeanSimMatrix configuration. To better study this finding, in Figure 4(c) we report, on the \(x\)-axis, the difference in the number of features \(\Delta_{f}\) and, on the \(y\)-axis, the difference in the number of parameters \(\Delta_{p}\). As one can see, the linear relationship is strong only when the two deltas are low. For higher deltas, specifically for the three datasets "mfeat-fourier" (OpenML ID \(14\)), "mfeat-karhunen" (OpenML ID \(16\)), and "analcatdata_authorship" (OpenML ID \(458\)), even if the decrement is significant for both quantities, the relationship is not linear. To assess the statistical significance of the difference in models' performance, we use the Critical Difference (CD) diagram of the ranks based on the Wilcoxon significance test (with \(p\)-values below \(0.1\)), a standard metric for comparing classifiers across multiple datasets [126].

Figure 4: Study of the relationship between the number of total features (\(x\)-axis) and the number of total parameters (\(y\)-axis) for the (a) HCNN MeanSimMatrix and (b) HCNN BootstrapNet configurations. Panel (c) reports the relationship between the difference in the number of features (\(x\)-axis) and the difference in the number of total parameters (\(y\)-axis) when using the two above-mentioned configurations.

The overall empirical comparison of the methods is given in Figure 5. We notice that the performance of HCNN and TabPFN is not statistically different. This finding is coherent across the three different evaluation metrics. This result is particularly relevant because it makes these deep learning architectures the only two that are really comparable with the SOTA machine learning ones.
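A minimal sketch of the pairwise test underlying the CD diagram (the scores below are illustrative, not the paper's results):

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired Wilcoxon signed-rank test on per-dataset scores of two classifiers.
f1_model_a = np.array([0.91, 0.84, 0.77, 0.88, 0.93, 0.81])
f1_model_b = np.array([0.90, 0.86, 0.75, 0.89, 0.92, 0.83])
stat, p = wilcoxon(f1_model_a, f1_model_b)
print(f"p-value = {p:.3f}; different at the 0.1 level: {p < 0.1}")
```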
Indeed, MLP and TabNet are statistically different from the other models in the majority of cases. These findings legitimize the methodology proposed in the current research work as a SOTA one, both in terms of performance and in terms of computational complexity (i.e. number of parameters). We cannot assert the same for TabPFN, which is among the SOTA models in terms of performance but the worst model in terms of computational (and architectural) complexity. ### Models' scalability to larger tabular numerical classification problems All the models considered in the current research work are primarily designed to handle small tabular classification problems. As described in [104], a dataset is defined as "small" if it contains up to \(2000\) samples and \(100\) features. In this section, we explore the ability of the models to scale to larger problems. In so doing, we use benchmark datasets characterized, in turn, by a number of samples greater than \(2000\) or a number of features greater than \(100\). In Table 2, we mark the success in solving the corresponding tabular classification task with a ✓ symbol, while a failure to solve the problem is denoted by a ✗ symbol. As one can notice, the proposed datasets are sufficient to underline the criticalities of two models: the TabPFN model and the HCNN model in its MeanSimMatrix configuration. In the first case, the model is unable to scale to problems with a larger number of samples and features. This limitation was already pointed out in the original work by [104] and directly depends on the model's architecture, which strongly leverages the power of attention-based mechanisms. Indeed, the runtime and memory usage of the TabPFN architecture scale quadratically with the number of inputs (i.e. training samples passed), and the fitted model cannot work with datasets with a number of features \(>100\). The authors propose a potential solution to these problems by recommending the incorporation of attention mechanisms that exhibit linear scalability with the number of inputs [127; 128], while simultaneously maintaining satisfactory performance outcomes.

Figure 5: Critical Difference plots on out-of-sample average ranks with a Wilcoxon significance analysis. In (a) the test is run considering the F1_Score, in (b) the test is run considering the Accuracy, and in (c) the test is run considering the MCC.

However, no evidence is presented to support this suggestion. In the case of HCNN MeanSimMatrix, instead, the proposed architecture demonstrates a limit in handling problems characterized by a large number of features (but not samples). Also in this case, the reason for the failure lies in the model's architectural design choices. Indeed, as underlined in Figure 4(a), there is a strong linear relationship between the number of features and the number of parameters, meaning that, when the former is large, convolving across all representatives of each simplicial complex family becomes computationally demanding. A solution to this problem can be found in employing the BootstrapNet configuration, which disrupts the linear relationship discussed earlier, resulting in a significant reduction in the number of parameters when dealing with a large number of features. While this approach demonstrates considerable efficacy, it remains reliant on a threshold parameter (see Section 3.2), suggesting the need for more advanced and parameter-free alternatives.
For the sake of completeness, in Appendix E we partially repeat the analyses presented in Section 4.1 on the newly introduced datasets. Because of the fragmentation caused by the increased size, we report only the dataset-dependent analyses, excluding the cross-dataset ones. ## 5 Conclusion In this paper, we introduce the Homological Convolutional Neural Network (HCNN), a novel deep learning architecture that revisits the simpler Homological Neural Network (HNN) to gain abstraction, representation power, robustness, and scalability. The proposed architecture is data-centric and arises from a graph-based higher-order representation of dependency structures among multivariate input features. Compared to HNN, our model demonstrates a higher level of abstraction, since we have higher flexibility in choosing the initial network representation: we can choose from the universe of simplicial complexes and are not restricted to specific sub-families. Looking at geometrical structures at different granularity levels, we propose a clear-cut way to leverage the power of convolution on sparse data representations. This allows us to fully absorb the representation power of HNN in the very first level of HCNN, leaving room for additional data transformations at deeper levels of the architecture. Specifically, in the current research work we build the HCNN using a class of information filtering networks (i.e. the TMFG) that uses squared correlation coefficients to maximize the likelihood of the underlying system. We propose two alternative architectural solutions: (i) the MeanSimMatrix configuration and (ii) the BootstrapNet configuration. Both of them leverage the power of bootstrapping to gain robustness toward data noise and the intrinsic complexity of interactions among the underlying system's variables. We test these two modeling solutions on a set of tabular numerical classification problems (i.e. one of the most challenging tasks for deep learning models and the one where HNN demonstrates the poorest performances). We compare HCNN with different machine- and deep-learning architectures, always tying SOTA performances and demonstrating superior robustness to data unbalance. Specifically, we demonstrate that HCNN is able to compete with the latest transformer architectures (e.g. TabPFN) while using a considerably lower and more easily controllable number of parameters (especially in the BootstrapNet configuration), guaranteeing a higher level of explainability in the neural network's building process, and achieving a comparable running time without the need for pre-training. We finally propose a study on models' scalability to datasets of increasing size.
We underline the fragility of transformer models, and we also demonstrate that HCNN in its MeanSimMatrix configuration is unable to manage datasets characterized by a large number of input features.

[Table 2 — the per-model ✓/✗ grid is not reliably recoverable from the source.] Table 2: Study on models' ability to scale to larger problems. The considered datasets belong to the OpenML benchmark suite "Tabular benchmark numerical classification" [101] (OpenML IDs including 361055, 361052, 361053, 361046, 361275, 361277, and 361278, spanning \(3664\)–\(16714\) samples and \(8\)–\(209\) features). For each of them, we report the OpenML ID, the number of samples, and the number of features. We indicate the success in solving the corresponding tabular classification task with a ✓ symbol, while a failure to solve the problem is denoted by a ✗ symbol.

On the other hand, we show that the design choice adopted for the BootstrapNet configuration offers a parametric solution to the problem. Despite the significant advances introduced by HCNNs, this class of neural networks remains in an embryonic phase. Further studies on the underlying network representations should propose alternative metrics that replace squared correlation coefficients for mixed data types (i.e. categorical and numerical, or categorical-only data types), and further work is finally required to better understand the low-level interactions captured by the proposed neural network model. This final point would certainly lead to a class of non-parametric, parsimonious HCNNs.
2309.00570
Mechanism of feature learning in convolutional neural networks
Understanding the mechanism of how convolutional neural networks learn features from image data is a fundamental problem in machine learning and computer vision. In this work, we identify such a mechanism. We posit the Convolutional Neural Feature Ansatz, which states that covariances of filters in any convolutional layer are proportional to the average gradient outer product (AGOP) taken with respect to patches of the input to that layer. We present extensive empirical evidence for our ansatz, including identifying high correlation between covariances of filters and patch-based AGOPs for convolutional layers in standard neural architectures, such as AlexNet, VGG, and ResNets pre-trained on ImageNet. We also provide supporting theoretical evidence. We then demonstrate the generality of our result by using the patch-based AGOP to enable deep feature learning in convolutional kernel machines. We refer to the resulting algorithm as (Deep) ConvRFM and show that our algorithm recovers similar features to deep convolutional networks including the notable emergence of edge detectors. Moreover, we find that Deep ConvRFM overcomes previously identified limitations of convolutional kernels, such as their inability to adapt to local signals in images and, as a result, leads to sizable performance improvement over fixed convolutional kernels.
Daniel Beaglehole, Adityanarayanan Radhakrishnan, Parthe Pandit, Mikhail Belkin
2023-09-01T16:30:02Z
http://arxiv.org/abs/2309.00570v1
# Mechanism of feature learning in convolutional neural networks ###### Abstract Understanding the mechanism of how convolutional neural networks learn features from image data is a fundamental problem in machine learning and computer vision. In this work, we identify such a mechanism. We posit the Convolutional Neural Feature Ansatz, which states that covariances of filters in any convolutional layer are proportional to the average gradient outer product (AGOP) taken with respect to patches of the input to that layer. We present extensive empirical evidence for our ansatz, including identifying high correlation between covariances of filters and patch-based AGOPs for convolutional layers in standard neural architectures, such as AlexNet, VGG, and ResNets pre-trained on ImageNet. We also provide supporting theoretical evidence. We then demonstrate the generality of our result by using the patch-based AGOP to enable deep feature learning in convolutional kernel machines. We refer to the resulting algorithm as (Deep) ConvRFM and show that our algorithm recovers similar features to deep convolutional networks including the notable emergence of edge detectors. Moreover, we find that Deep ConvRFM overcomes previously identified limitations of convolutional kernels, such as their inability to adapt to local signals in images and, as a result, leads to sizable performance improvement over fixed convolutional kernels. ## 1 Introduction Neural networks have achieved impressive empirical results across various tasks in natural language processing [8], computer vision [39], and biology [50]. Yet, our understanding of the mechanisms driving the successes of these models is still emerging. One such mechanism of central importance is that of _neural feature learning_, which is the ability of networks to automatically learn relevant input transformations from data [37, 43, 55, 56]. An important line of work [5, 14, 23, 32, 35, 43, 52, 55] has demonstrated how feature learning in fully connected neural networks provides an advantage over classical, non-feature-learning models such as kernel machines. Recently, the work [37] identified a connection between a mathematical operator, known as average gradient outer product (AGOP) [17, 21, 47, 48], and feature learning in fully connected networks. This work subsequently demonstrated that the AGOP could be used to enable similar feature learning in kernel machines operating on tabular data. In contrast to the case for fully connected networks, there are few prior works [3, 24] analyzing feature learning in convolutional networks, which have been transformative in computer vision [19, 39]. The work [24] demonstrates an advantage of feature learning in convolutional networks by showing that these models are able to threshold noise and identify signal in image data unlike convolutional kernel methods including Convolutional Neural Tangent Kernels [4]. The work [3] analyzes how deep convolutional networks can correct features in early layers by simultaneous training of all layers. While these prior works identify advantages of feature learning in convolutional networks, they do not identify a general operator that captures such feature learning. The connection between AGOP and feature learning in fully connected neural networks [37] suggests that a similar connection should exist for feature learning in convolutional networks. Moreover, such a mechanism could be used to learn analogous features with any machine learning model such as convolutional kernel machines. 
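Since the AGOP is central to what follows, a minimal sketch of the operator for a fully connected model may help; `f` is any differentiable model returning a scalar per input (a hypothetical stand-in), and `xs` a batch of inputs:

```python
import torch

def agop(f, xs):
    """Average gradient outer product: (1/n) * sum_p grad f(x_p) grad f(x_p)^T.
    The patch-based variant for convolutional layers is sketched later."""
    M = 0.0
    for x in xs:                                 # xs: (n, d) batch of inputs
        x = x.detach().requires_grad_(True)
        (grad,) = torch.autograd.grad(f(x), x)   # gradient w.r.t. the input
        M = M + torch.outer(grad, grad)
    return M / len(xs)
```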
In this work, we establish a connection between convolutional neural feature learning and the AGOP, which we posit as the Convolutional Neural Feature Ansatz (CNFA). Unlike the fully connected case from [37] where feature learning is characterized by AGOP with respect to network inputs, we demonstrate that convolutional feature learning is characterized by AGOP with respect to patches of network inputs. We present empirical evidence for the CNFA by demonstrating high average Pearson correlation (in most cases \(>.9\)) between AGOP on patches and the covariance of filters across all layers of pre-trained convolutional networks on ImageNet [40] and across all layers of SimpleNet [18] trained on several standard image classification datasets. We additionally prove that the CNFA holds for one step of gradient descent for deep convolutional networks. To demonstrate the generality of our identified convolutional feature learning mechanism, we leverage the AGOP on patches to enable feature learning in convolutional kernel machines. We refer to the resulting algorithm as ConvRFM. We demonstrate that ConvRFM captures features similar to those learned by the first layer of convolutional networks. In particular, on various image classification benchmark datasets such as SVHN [33] and CIFAR10 [26], we observe that ConvRFM recovers features corresponding to edge detectors. We further enable deep feature learning with convolutional kernels by developing a layerwise training scheme with ConvRFM, which we refer to as Deep ConvRFM. We demonstrate that Deep ConvRFM learns features similar to those learned by deep convolutional neural networks. Furthermore, we show that Deep ConvRFM overcomes limitations of convolutional kernels identified in [24] and exhibits _local feature adaptivity_. Lastly, we demonstrate that Deep ConvRFM provides improvement over CNTK and ConvRFM on several standard image classification datasets, indicating a benefit to deep feature learning. Our results advance understanding of how convolutional networks automatically learn features from data and provide a path toward integrating convolutional feature learning into general machine learning models. ## 2 Convolutional Neural Feature Ansatz (CNFA) Let \(f:\mathbb{R}^{c\times P\times Q}\rightarrow\mathbb{R}\) denote a convolutional neural network (CNN) operating on \(P\times Q\) resolution images with \(c\) color channels. The \(\ell^{th}\) convolutional layer of a CNN involves applying a function \(h_{\ell}:\mathbb{R}^{c_{\ell-1}\times P_{\ell-1}\times Q_{\ell-1}}\rightarrow \mathbb{R}^{c_{\ell}\times P_{\ell}\times Q_{\ell}}\) defined recursively as \(h_{\ell}(x)=\phi(\widetilde{W}_{\ell}*h_{\ell-1}(x))\) with \(h_{1}=x\), \(\widetilde{W}_{\ell}\in\mathbb{R}^{c_{\ell}\times c_{\ell-1}\times q\times q}\) denoting \(c_{\ell}\) filters of size \(c_{\ell-1}\times q\times q\), \(*\) denoting the convolution operation, and \(\phi\) denoting an elementwise activation function. To understand how features emerge in convolutional networks, we abstract a convolutional network to a function of the form \[f(x)=g(W_{1}x[1,1],\ldots,W_{1}x[i,j],\ldots,W_{1}x[P,Q]),\quad i \in[P],j\in[Q]\ ; \tag{1}\] where \(W_{1}\in\mathbb{R}^{c_{1}\times cq^{2}}\) is a matrix of \(c_{1}\) stacked filters of size \(cq^{2}\) and \(x[i,j]\in\mathbb{R}^{cq^{2}}\) denotes the patch of \(x\) centered at coordinate \((i,j)\). 
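The abstraction in Eq. (1) can be made concrete by unfolding an image into its patches, so that the first convolutional layer becomes one dense map applied to every patch (a minimal sketch; sizes are illustrative):

```python
import torch
import torch.nn.functional as F

c, P, Q, q, c1 = 3, 32, 32, 3, 64
x = torch.randn(1, c, P, Q)                           # one c-channel image
patches = F.unfold(x, kernel_size=q, padding=q // 2)  # (1, c*q*q, P*Q)
W1 = torch.randn(c1, c * q * q)                       # c1 stacked filters
responses = W1 @ patches[0]                           # (c1, P*Q) per-patch outputs
CNFM = W1.T @ W1                                      # uncentered filter covariance
```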
This abstraction is helpful since it allows us to consider feature learning in convolutional networks with arbitrary architecture (e.g., pooling layers, batch normalization, etc.) after any given convolutional layer. Up to rotation and reflection by the left singular vectors, the feature extraction properties of \(W_{1}\) are determined by the singular values and right singular vectors of \(W_{1}\). These singular values and vectors can be recovered from the matrix \(W_{1}^{T}W_{1}\), which is the empirical (uncentered) covariance of filters in the first layer. This argument extends to analyze features selected at layer \(\ell\) of a CNN by considering a function of the form \(f(x)=g_{\ell}(W_{\ell}h_{\ell-1}(x)[1,1],\ldots,W_{\ell}h_{\ell-1}(x)[P_{\ell-1},Q_{\ell-1}])\). We refer to the matrix \(W_{\ell}^{T}W_{\ell}\) as a _Convolutional Neural Feature Matrix_ (CNFM) and note that this matrix is proportional to the (uncentered) empirical covariance matrix of filters in layer \(\ell\). We use the form of convolutional networks presented in Eq. (1) to state our Convolutional Neural Feature Ansatz (CNFA). Let \(G_{\ell}(x):=g_{\ell}(W_{\ell}h_{\ell-1}(x)[1,1],\ldots,W_{\ell}h_{\ell-1}(x)[P_{\ell-1},Q_{\ell-1}])\). Then, after training \(f\) for at least one epoch of (stochastic) gradient descent on standard loss functions: \[W_{\ell}^{\top}W_{\ell}\propto\sum_{p=1}^{n}\sum_{(i,j)\in S}\nabla_{h_{\ell-1}(x_{p})[i,j]}G_{\ell}(x_{p})\left(\nabla_{h_{\ell-1}(x_{p})[i,j]}G_{\ell}(x_{p})\right)^{\top}; \tag{2}\] where \(S=\{(i,j)\}_{i\in[P_{\ell-1}],j\in[Q_{\ell-1}]}\) denotes the set of indices of patches utilized in the convolution operation in layer \(\ell\), and \(x_{1},\ldots,x_{n}\) denote the training samples. The CNFA (Eq. 2) mathematically implies that the convolutional neural feature matrices are proportional to the average gradient outer product (AGOP) with respect to the patches of the input to layer \(\ell\). The CNFA implies that the structure of covariance matrices of filters in convolutional networks, an object studied in prior work [49], corresponds to AGOP over patches. Intuitively, the CNFA implies that convolutional features are constructed by identifying and amplifying those pixels in any patch that most change the output of the network. We now present extensive empirical evidence corroborating our ansatz. We subsequently present supporting theoretical evidence. ### Empirical evidence for CNFA We now provide empirical evidence for the ansatz by computing the correlation between CNFMs and the AGOP for each convolutional layer in various CNNs. We provide three lines of evidence by computing correlations for the following models: (1) AlexNet [27], all VGGs [46], and all ResNet [19] models pre-trained on ImageNet [40]; (2) SimpleNet models [18] trained on SVHN [33], GTSRB [20], CIFAR10 [26], CIFAR100, and ImageNet32 [10]; and (3) shallow CNNs across 10 standard computer vision datasets from PyTorch upon varying pooling and patch size of convolution operations. The first set of experiments provides evidence for the ansatz in large-scale state-of-the-art models on ImageNet. The second set provides evidence for the ansatz across standard computer vision datasets. The last set provides evidence for the ansatz holding across architecture choices. CNFA verification for pre-trained state-of-the-art models on ImageNet. We begin by providing evidence for the ansatz on pre-trained state-of-the-art models on ImageNet. In Fig.
1, we present these correlations for AlexNet, all VGG models and all ResNet models pre-trained on ImageNet, which are available for download from the PyTorch library [36].4 As a control, we verify that weights at the end of training are far from initialization (see the red bars in Fig. 1A). Note that despite the complexity involved in training these models (e.g., batch normalization, skip connections, custom optimization procedures, data augmentation), the Pearson correlations between the AGOP and CNFMs are remarkably high (\(>.9\) for each layer of AlexNet and VGG13). In Fig. 1B, we additionally visualize the AGOP and CNFM for the first convolutional layer in AlexNet, VGG11, and ResNet18 to demonstrate the qualitative similarity between these matrices. In addition, in Appendix Fig. 7, we verify that these correlations are lower at initialization than at the end of training, indicating that the ansatz is, in fact, a consequence of training. Footnote 4: We evaluate all correlations between AGOP and CNFMs for all convolutional layers of AlexNet and all VGGs. To simplify computation on ResNets, we evaluate correlations between AGOP and CNFMs for the first layer in each BasicBlock and each Bottleneck, as defined in PyTorch. We note that for ResNet152, this computation involves computing correlation between matrices in 50 Bottleneck blocks. CNFA verification for SimpleNet on CIFAR10, CIFAR100, ImageNet32, SVHN, GTSRB. To verify the ansatz on other datasets, we also trained the SimpleNet model on five datasets including CIFAR10/100, ImageNet32, SVHN, and GTSRB. Figure 1: **A.** Correlation between initial CNFM and trained CNFM (red) and trained CNFM with AGOP (green) for convolutional layers in VGG, AlexNet, and ResNet on ImageNet (\(224\times 224\) resolution color images). **B.** Initial CNFM, trained CNFM, and AGOP matrices for the first convolutional layer of ResNet18, VGG11, and AlexNet on ImageNet. We note SimpleNet had achieved state-of-the-art results on several of these tasks at the time of its release (e.g., \(>95\%\) test accuracy on CIFAR10). We train SimpleNet models using the same optimization procedure provided from [18] (i.e., Adadelta [57] with weight decay and manual learning rate scheduling). We use a small initialization scheme of normally distributed weights with a standard deviation of \(10^{-4}\) for convolutional layers. We note that we were able to recover high test accuracies across all datasets consistent with the results from [18] (see test accuracies for these trained SimpleNet models in Appendix Fig. 8). As shown in Appendix Fig. 8, we observe consistently high correlation between AGOPs and CNFMs across layers of SimpleNet. CNFA is robust to hyperparameter choices. We lastly study the effect of patch size and architecture choices on the CNFA for networks trained using the Adam optimizer [25]. We generally observe that larger patch sizes slightly reduce the correlation between AGOP and CNFMs, and that max pooling layers (in contrast to no pooling or average pooling) lead to higher correlation (Appendix Fig. 9). Interestingly, these results indicate that the choices used in state-of-the-art CNNs (max pooling layers and patch size of 3) are consistent with those that lead to highest correlation between AGOP and CNFMs. ### Visualizing features captured by CNFM and AGOP We now visualize how the CNFM operates on patches of images to select features and demonstrate that AGOP over patches captures similar features. Both the CNFM and AGOP yield an operator on patches of images.
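As a concrete reference for how these two patch operators are computed and compared (as in the correlation experiments of the previous subsection), the following is a minimal sketch using an untrained toy model; since the model is randomly initialized rather than trained, no high correlation should be expected here, unlike for the trained networks above. The snippet only illustrates the computation, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
c, q, c1 = 3, 3, 8
W1 = torch.randn(c1, c * q * q, requires_grad=True)   # first-layer filters
head = torch.nn.Linear(c1, 1)

def net_on_patches(patches):                # patches: (n, L, c*q*q)
    h = torch.relu(patches @ W1.T)          # first conv layer as a map on patches
    return head(h.mean(dim=1)).squeeze(-1)  # scalar output per image

x = torch.randn(16, c, 32, 32)
patches = F.unfold(x, kernel_size=q, padding=q // 2).transpose(1, 2).requires_grad_(True)
grads = torch.autograd.grad(net_on_patches(patches).sum(), patches)[0]
G = grads.reshape(-1, c * q * q)            # one gradient per (image, patch) pair
agop = G.T @ G / G.shape[0]                 # patch-based AGOP
cnfm = (W1.T @ W1).detach()                 # covariance of filters (CNFM)

# Pearson correlation between the two flattened matrices
corr = torch.corrcoef(torch.stack([agop.flatten(), cnfm.flatten()]))[0, 1]
print(f"correlation(CNFM, patch-AGOP) = {corr.item():.3f}")
```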
Thus, to visualize how these matrices select features, we expand input images into individual patches, then apply either the CNFM or the AGOP to each patch. We then reduce the expanded image back to its original size by taking the norm over the spatial dimensions of each expanded patch. Formally, the value for each coordinate \((i,j)\in P_{\ell-1}\times Q_{\ell-1}\) is replaced with \(\|M_{\ell}^{\frac{1}{2}}h_{\ell-1}(X)[i,j]\|\) where \(M_{\ell}:=W_{\ell}^{T}W_{\ell}\). Figure 2: Comparison of features extracted by CNFMs and AGOPs across layers of VGG11 and AlexNet for two input images. These visualizations provide further supporting evidence that the CNFMs and AGOPs of early layers are performing an operation akin to edge detection. Our visualization reflects the magnitude of each patch in the image of the patch transformation. For example, if \(M_{\ell}\) is an edge detector, then \(\|M_{\ell}^{\frac{1}{2}}h_{\ell-1}(X)[i,j]\|\) will be large if and only if the patch centered at coordinate \((i,j)\) contains an edge. This visualization technique emerges naturally from the convolution operation in CNNs, where a post-activation hidden unit is generated by applying a filter to each patch independently of the others. Further, this visualization characterizes how a trained CNN extracts features across patches of any image. This is in contrast to visualization techniques based on saliency maps [41, 44, 45, 59], which consider gradients with respect to an entire input image and for a single sample. In addition to the high correlation between AGOP and CNFMs in the previous section, in Fig. 2, we observe that the AGOP and CNFMs transform input images similarly at any given layer of the CNN. For \(224\times 224\) images from ImageNet, CNFMs and AGOPs extracted from a pre-trained VGG11 model both emphasize objects and their edges in the image. We note these visualizations corroborate hypotheses from prior work that the first layer weights of deep CNNs learn an operator corresponding to edge detection [58]. Moreover, our results imply that the mathematical origin of edge detectors in convolutional neural networks is the average gradient outer product. In the following section, we will corroborate this claim by demonstrating that such edge detectors can be recovered without the use of any neural network through estimating the average gradient outer product of convolutional kernel machines. ### Supporting Theoretical Evidence for CNFA The following theorem (proof in Appendix A) proves the ansatz for general convolutional networks after 1 step of full-batch gradient descent. **Theorem 1**.: _Let \(f\) denote a function that operates on \(m\) patches of size \(q\), i.e., let \(f(v_{1},v_{2},\ldots,v_{m}):\mathbb{R}^{q}\times\ldots\times\mathbb{R}^{q}\rightarrow\mathbb{R}\) with \(f(v_{1},v_{2},\ldots,v_{m})=g(Wv_{1},Wv_{2},\ldots,Wv_{m})\) where \(W\in\mathbb{R}^{k\times q}\) and \(g(z_{1},\ldots,z_{m}):\mathbb{R}^{k}\times\ldots\times\mathbb{R}^{k}\rightarrow\mathbb{R}\). Assume \(g(\mathbf{0})=0\) and \(\frac{\partial g(\mathbf{0})}{\partial z_{\ell}}=\frac{\partial g(\mathbf{0})}{\partial z_{\ell^{\prime}}}\neq 0\) for all \(\ell,\ell^{\prime}\in[m]\).
If \(W\) is trained for one step of gradient descent with mean squared loss on data \(\{((v_{1}^{(p)},\ldots v_{m}^{(p)}),y_{p})\}_{p=1}^{n}\) from initialization \(W^{(0)}=\mathbf{0}\), then for the point \((u_{1},\ldots,u_{m})\):_ \[{W^{(1)}}^{T}W^{(1)}\propto\sum_{r=1}^{m}\frac{\partial f^{(1)}(u_{1},\ldots, u_{m})}{\partial v_{r}}\frac{\partial f^{(1)}(u_{1},\ldots,u_{m})}{\partial v _{r}}^{T}\;; \tag{3}\] _where \(f^{(1)}(v_{1},v_{2},\ldots v_{m}):=g(W^{(1)}v_{1},W^{(1)}v_{2},\ldots,W^{(1)}v _{m})\)._ We note the assumptions of Theorem 1 hold for several types of convolutional networks. As a simple example, the assumptions hold for convolutional networks with activation function \(\phi\) satisfying \(\phi(0)=0\) and \(\phi^{\prime}(0)\neq 0\) (e.g., tanh activation) with remaining layers initialized as constant matrices. Furthermore, we note that while the above theorem is stated for the first layer of a convolutional network, the same proof strategy applies for deeper layers by considering the subnetwork \(G_{\ell}(x)\). ## 3 CNFA as a general mechanism for convolutional feature learning We now show that the CNFA allows us to introduce a feature learning mechanism in any machine learning model on patches to capture features akin to those of convolutional networks. Given recent work connecting neural networks to kernel machines [22], we focus on convolutional kernels given by the Convolutional Neural Tangent Kernel (CNTK) [4] as our candidate model class. Intuitively, these models can be thought of as combining kernels evaluated across pairs of patches in images. While such models have achieved impressive performance [1, 6, 7, 29, 38, 42], these models do not automatically learn features from data unlike CNNs. Thus, as demonstrated in prior work [24, 52], there are tasks where CNTKs are significantly outperformed by corresponding CNNs. A major consequence of the CNFA is that we can now enable feature learning in CNTKs by leveraging the AGOP over patches. In particular, we can first solve kernel regression with the CNTK and then use the AGOP of the trained predictor over patches of images to learn features. We call our method the _Convolutional Recursive Feature Machine (ConvRFM)_, as it is the convolutional variant of the original RFM [37]. We will demonstrate that ConvRFM accurately captures first layer feature learning in CNNs and can recover edge detectors as features when trained on standard image classification datasets. To account for deep convolutional feature learning, we extend ConvRFM to Deep ConvRFMs by sequentially learning features in a manner similar to layerwise training in CNNs. We show that Deep ConvRFM: (1) improves performance of CNTKs on local signal adaptivity tasks considered in [24] ; and (2) improves performance of CNTKs on several image classification tasks. ### Convolutional Recursive Feature Machine (ConvRFM) We present the algorithm for ConvRFM in Algorithm 1. The ConvRFM algorithm recursively learns a feature extractor on patches of a given image by implementing the AGOP across patches of training data. Namely, the ConvRFM first builds a predictor with a fixed convolutional kernel. Then, we compute the AGOP of the trained predictor with respect to image patches, which we denote as the _feature matrix_, \(M\). Lastly, we transform image patches with \(M\) and then repeat the previous steps. 
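Since Algorithm 1 itself is not reproduced in this text, the following is a simplified, self-contained sketch of the ConvRFM iteration under stated assumptions: inputs are pre-expanded into patches, the base kernel is a patch-summed Mahalanobis-Laplace kernel rather than the CNTK, and `conv_rfm` and `K_M` are illustrative names, not the paper's implementation.

```python
import torch

def K_M(Xp, Zp, M, gamma=0.1):
    """Patch-summed Mahalanobis-Laplace kernel.
    Xp: (n, L, d) patches, Zp: (m, L, d) patches -> (n, m) kernel matrix."""
    diff = Xp[:, None, :, :] - Zp[None, :, :, :]               # (n, m, L, d)
    d2 = torch.einsum('nmld,de,nmle->nml', diff, M, diff)      # squared Mahalanobis distances
    return torch.exp(-gamma * torch.sqrt(d2.clamp(min=0) + 1e-12)).mean(dim=2)

def conv_rfm(Xp, y, steps=3, reg=1e-3):
    """Iterate: (1) kernel ridge regression, (2) patch-AGOP update of M."""
    n, L, d = Xp.shape
    M = torch.eye(d)
    for _ in range(steps):
        K = K_M(Xp, Xp, M)
        alpha = torch.linalg.solve(K + reg * torch.eye(n), y)  # fit the predictor
        Xg = Xp.clone().requires_grad_(True)
        preds = K_M(Xg, Xp, M) @ alpha                         # f(x) = sum_j alpha_j K_M(x, x_j)
        g = torch.autograd.grad(preds.sum(), Xg)[0]            # (n, L, d) patch gradients
        G = g.reshape(-1, d)
        M = (G.T @ G / G.shape[0]).detach()                    # AGOP over patches as new M
    return M, alpha                                            # note: a final refit under the last M could follow

Xp = torch.randn(40, 25, 27)   # e.g. 40 images, 25 patches each, flattened 3x3x3 patches
y = torch.randn(40)
M, alpha = conv_rfm(Xp, y)
```

The actual algorithm replaces this stand-in kernel with the CNTK or the Mahalanobis Laplace kernel described next.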
We provide a concrete example of this algorithm for the convolutional neural network Gaussian process (CNNGP) [9, 28] of a one hidden layer convolutional network with fully connected last layer operating on black and white images below. The CNNGP of a one hidden layer convolutional network with fully connected last layer, activation \(\phi\), and filter size \(q\) is given by \[K(x,z)=\frac{1}{PQ}\sum_{i=1}^{P}\sum_{j=1}^{Q}\check{\phi}(x[i,j]^{T}z[i,j],\|x[i,j]\|,\|z[i,j]\|)\ ;\] where \(x,z\in\mathbb{R}^{P\times Q}\), \(x[i,j]\in\mathbb{R}^{q^{2}}\) denotes the vectorized \(q\times q\) patch of \(x\) centered at coordinate \((i,j)\), and \(\check{\phi}(a^{T}b,\|a\|,\|b\|)\) denotes the dual activation [15] of \(\phi\). For the case of ReLU activation, this dual activation has a well known form [9] and is given by \[\check{\phi}(a^{T}b,\|a\|,\|b\|)=\frac{1}{\pi}\left(a^{T}b\left(\pi-\arccos\left(\frac{a^{T}b}{\|a\|\|b\|}\right)\right)+\sqrt{\|a\|^{2}\|b\|^{2}-(a^{T}b)^{2}}\right)\.\] In ConvRFM, we modify the inner product in the kernel above to be a Mahalanobis inner product, constructing kernels of the form \[K_{M}(x,z):=\frac{1}{PQ}\sum_{i=1}^{P}\sum_{j=1}^{Q}\check{\phi}(x[i,j]^{T}Mz[i,j],x[i,j]^{T}Mx[i,j],z[i,j]^{T}Mz[i,j])\ ;\] where \(M\) is a learned positive semi-definite matrix. In particular, \(M\) is updated as the AGOP of the estimator constructed by solving kernel regression with \(K_{M}\). In our experiments, we analyze performance when replacing \(\check{\phi}\) with the Mahalanobis Laplace kernel used in [37] and with the CNTK of a deep convolutional ReLU network with fully connected last layer. We will make clear our choice of \(\check{\phi}\) by denoting our method as CNTK-ConvRFM or Laplace-ConvRFM. ConvRFM captures first layer features of convolutional neural networks. We now demonstrate that ConvRFM recovers features similar to those learned by first layers of CNNs. In Fig. 3A, we visualize the top eigenvectors of the feature matrix of CNTK-ConvRFM (filter size \(3\times 3\)) and Laplace-ConvRFM (filter size \(7\times 7\)) trained on SVHN. Training details for all methods are presented in Appendix B. We observe that these top eigenvectors resemble edge detectors [16]. In Fig. 3B, we visualize how the feature matrix of the CNTK-ConvRFM and the CNFM of the corresponding finite width CNN trained on SVHN transform SVHN images. Even though both operators arise from vastly different training procedures (solving kernel regression vs. training a CNN), we observe that both operators appear to extract similar features (corresponding to edges of digits) from SVHN images. We provide additional evidence for similarity between ConvRFM and CNN features in Appendix Fig. 10. To demonstrate further evidence of the universality of edge detector features arising from AGOP of CNTK-ConvRFM and Laplace-ConvRFM, we analyze how these AGOPs transform arbitrary images. In particular, in Fig. 3C, we apply these operators extracted from models trained on SVHN to images on ImageNet. We again observe that these operators remarkably extract edges from corresponding ImageNet images, which are of vastly different resolution (\(224\times 224\) instead of \(32\times 32\)) and contain vastly different objects. Such experiments provide conclusive evidence that AGOP with respect to patches of convolutional kernels recovers features akin to edge detectors. We present further experiments demonstrating emergence of edge detectors from convolutional kernels trained on CIFAR10 and GTSRB in Appendix Figs.
11 and 12. In particular, the eigenvectors of the AGOP often resemble Gabor filters with different orientations. In Figure 11, we see that horizontally, vertically, and diagonally aligned eigenvectors identify edges of the same alignment. ### Deep feature learning with Deep ConvRFM ConvRFM is capable of only extracting features by linearly transforming patches of input images, which is analogous to extracting such features using the first layer of a CNN. In contrast, the CNFA implies that deep convolutional networks are capable of learning features in intermediate layers. To enable deep feature learning, we introduce Deep ConvRFM (see Algorithm 2) by sequentially learning features with AGOP in a manner similar to layerwise training in CNNs. In particular, Deep ConvRFM iterates the following steps: 1. Construct a predictor, \(\widehat{f}\), by training a convolutional kernel machine with kernel \(K_{M}\). 2. Update \(M\) to be the AGOP with respect to patches of the trained predictor. 3. Transform the data, \(x\), with random features given by \(\phi(Wx)\) where \(W\) denotes a set of convolutional filters with weights sampled according to \(\mathcal{N}(0,M)\) and \(\phi\) is a nonlinearity. Figure 3: Feature extractors learned by ConvRFM using CNTK (CNTK-ConvRFM) and Laplace kernel (Laplace-ConvRFM), which appear to operate as universal edge detectors. **A.** Top 8 eigenvectors of CNTK-ConvRFM and Laplace-ConvRFM trained on SVHN. We use \(3\times 3\) patches for CNTK-ConvRFM and \(7\times 7\) patches for Laplace-ConvRFM. **B.** Comparison of patch operators learned by CNTK-ConvRFM (given by the AGOP taken with respect to patches) and CNNs (given by the CNFM). **C.** Applying patch-based AGOP operators from ConvRFMs trained on SVHN to images from ImageNet. Note that while we utilize random features and sample convolutional filters in Deep ConvRFM, we never utilize backpropagation to learn features or train models. Features are learned via the AGOP and models are trained by solving kernel regression, which is a convex optimization problem. For the base kernel for Deep ConvRFM, we utilize the deep CNTK [4] as implemented in the Neural Tangents library [34].5 Footnote 5: In order to take gradients with respect to patches using Neural Tangents, we used a workaround that involved expanding images into their patch representations. This workaround unfortunately leads to heavy memory utilization, which limited our analysis of Deep ConvRFMs. Deep ConvRFM learns similar features to deep CNNs. We now present evidence that Deep ConvRFMs learn similar features to those learned by deep CNNs. We analyze features learned by Deep ConvRFM and the corresponding CNN on the local signal adaptivity synthetic tasks from [24] and SVHN. For the synthetic task from [24], we consider classification of MNIST digits embedded in a larger image of i.i.d. Gaussian noise. Dataset and training details are presented in Appendix B. In Fig. 4, we observe that AGOPs at each layer of Deep ConvRFM and CNFMs at each layer of the corresponding CNN transform examples from both datasets similarly. Figure 4: Visualizations of features for each layer of Deep ConvRFM and the corresponding CNN on SVHN and the noisy digits task from [24]. Deep ConvRFM overcomes limitations of convolutional kernels. In the work [24], the authors posited local signal adaptivity, the ability to suppress noise and amplify signal in images, as a potential explanation for the superiority of convolutional neural networks over convolutional kernels.
As supporting evidence, [24] demonstrated that convolutional networks generalized far better than convolutional kernels on image classification tasks in which images were embedded in a noisy background. We now demonstrate that by incorporating feature learning through patch-AGOPs, Deep ConvRFM exhibits local signal adaptivity on the tasks considered in [24] and thus, similar to CNNs, yields significantly improved performance over convolutional kernels. In particular, we begin by comparing performance of CNTK, ConvRFM, Deep ConvRFM, and corresponding CNNs on the following two image classification tasks from [24]: (1) images of black and white horizontal bars placed in a random position on larger images of Gaussian noise; (2) MNIST images placed in a random position on larger images of Gaussian noise. The work [24] demonstrated that CNNs, unlike CNTK, could learn to threshold the background noise and amplify the signal in these tasks, thus far outperforming CNTKs when the amount of background noise was large. In Fig. 5, we demonstrate that for these tasks CNNs, ConvRFMs, and Deep ConvRFMs all extract local signals and dim background noise through the AGOP, and thus far outperform CNTKs. Moreover, we observe that Deep ConvRFMs can provide up to a 5% improvement in performance over ConvRFM on the synthetic MNIST task, indicating a benefit to deep feature learning. Benefit of deep feature learning on real-world image classification tasks. Lastly, we analyze performance of CNTK, ConvRFM, Deep ConvRFM, and the corresponding three convolutional layer CNN on standard image classification datasets available for download from PyTorch. Consistent with our observations for synthetic tasks from [24], we observe in Fig. 6A that ConvRFM and Deep ConvRFM provide an improvement over CNTK across almost all tasks. Moreover, we observe that ConvRFM and Deep ConvRFM outperform CNTKs consistently when the corresponding CNN outperforms the CNTK. In Fig. 6B, we analyze the impact of deep feature learning by increasing the number of feature learning layers in Deep ConvRFM, i.e., the number of layers for which we utilize the AGOP to learn features. We observe that adding more layers of feature learning leads to a consistent performance boost in the local signal adaptivity tasks from [24] and on select datasets such as SVHN and EMNIST [13]. ## 4 Discussion In this work, we identified a mathematical mechanism of feature learning in deep convolutional networks, which we posited as the Convolutional Neural Feature Ansatz (CNFA). Namely, the ansatz stated that features selected by convolutional networks, given by empirical covariance matrices of filters at any given layer, can be recovered by computing the average gradient outer product (AGOP) of the trained network with respect to image patches. Figure 5: Test accuracy of CNTK, ConvRFM, Deep ConvRFM, and the corresponding CNN on local signal adaptivity tasks from [24] as a function of noise level. **A.** Identifying black and white bars in noisy images. **B.** MNIST digits placed randomly in noisy background image. We presented empirical and theoretical evidence for the ansatz. Notably, we showed that convolutional filter covariances of neural networks pre-trained on ImageNet (AlexNet, VGG, ResNet) are highly correlated with AGOP with respect to patches (in many cases, Pearson correlation \(>.9\)).
Since the AGOP with respect to patches can be computed on any function operating on image patches, we could use the AGOP to enable feature learning in any machine learning model operating on image patches. Thus, building on the RFM algorithm for fully connected networks from [37], we integrated the AGOP to enable deep feature learning in convolutional kernel machines, which could not a priori learn features, and referred to the resulting algorithms as ConvRFM and Deep ConvRFM. We demonstrated that ConvRFM and Deep ConvRFM recover features similar to those of deep convolutional neural networks, including evidence that features learned by these models can serve as universal edge detectors, akin to features learned in convolutional networks. Moreover, we demonstrated that ConvRFM and Deep ConvRFM overcome prior limitations of convolutional kernels, including the Convolutional Neural Tangent Kernel (CNTK), such as the inability to adapt to localized signals in images [24]. Lastly, we showed a benefit to deep feature learning by demonstrating improvement in performance of Deep ConvRFM over ConvRFM and the CNTK on standard image classification benchmarks. We now conclude with a discussion of implications of our results and future directions. Identifying mechanisms driving success of deep learning. Understanding the mechanisms driving success of neural networks is an important problem for developing effective, interpretable and safe machine learning models. The complexities of training deep neural networks, such as custom training procedures and layer structures (batch normalization, dropout, residual connections, etc.), can make it difficult to pinpoint overarching principles leading to effectiveness of these models. The fact that correlation between convolutional neural feature matrices (CNFMs) and AGOPs is high for convolutional networks pre-trained on ImageNet with all of these inherent complexities baked in, provides strong evidence that the connection between AGOP and CNFMs is key to identifying the core principles making these networks successful. Emergence of universal edge detectors with average gradient outer product. Detecting edges in images is a well-studied task in computer vision, and classical approaches involved applying fixed convolutional filters to detect edges in images [2, 16, 54]. In contrast, AlexNet automatically learned filters in its first convolutional layer that were remarkably similar to Gabor filters [30]. Similarly, there was evidence that other convolutional networks pre-trained on ImageNet learned features akin to edge detection in the first layer [58]. Yet, it had been unclear how such filters automatically emerge through training. We demonstrated that the AGOP with respect to patches of a large class of convolutional models (convolutional neural networks and convolutional kernels) trained on various standard image classification tasks consistently recovered edge detectors (see Fig. 2, Fig. 3A, B). We further showed the universality of these edge detector features by demonstrating that features learned by ConvRFM on SVHN automatically identified edges in ImageNet images. This strongly suggests that edge detectors emerge from the underlying nature of the task rather than specific properties of architectures. Figure 6: **A.** Performance comparison of Deep ConvRFM with the corresponding CNTK and CNN on benchmark image classification datasets from PyTorch. **B.** Effect of number of feature learning layers on Deep ConvRFM performance.
Our findings indicate that understanding connections between AGOP and classical edge detection approaches is a promising direction for understanding emergence of features in the first layer of convolutional neural networks and for identifying simple algorithms to capture deeper convolutional features. Reducing computational complexity of convolutional kernels. In this work, we provided an approach for enabling feature learning in convolutional kernels by iteratively training convolutional kernel machines and computing AGOP of the trained predictor. Given that convolutional kernels are able to achieve impressive accuracy on standard datasets without any feature learning [1, 6, 7, 29, 42], these methods have the potential to provide state-of-the-art results upon incorporating feature learning. Yet, in contrast to the case of classical kernel machines such as those used in [37], evaluating the kernel for an effective CNTK (such as those with Global Average Pooling [4]) can be a far more computationally intensive process than simply training a convolutional neural network. For example, according to Neural Tangents [34], the CNTK of a Myrtle kernel [42] can take anywhere from 300 to 500 GPU hours for CIFAR10. Given that Deep ConvRFM involves constructing a kernel matrix and computing AGOP to capture features at each layer, reducing the evaluation time of convolutional kernels through strategies such as random feature approximations is key to making these approaches scalable. ## Acknowledgements A.R. is supported by the Eric and Wendy Schmidt Center at the Broad Institute. We acknowledge support from the National Science Foundation (NSF) and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning6 through awards DMS-2031883 and #814639 as well as the TILOS institute (NSF CCF-2112665). This work used the programs (1) XSEDE (Extreme science and engineering discovery environment) which is supported by NSF grant numbers ACI-1548562, and (2) ACCESS (Advanced cyberinfrastructure coordination ecosystem: services & support) which is supported by NSF grants numbers #2138259, #2138286, #2138307, #2137603, and #2138296. Specifically, we used the resources from SDSC Expanse GPU compute nodes, and NCSA Delta system, via allocations TG-CIS220009. Footnote 6: [https://deepfoundations.ai/](https://deepfoundations.ai/) ## Code Availability All code is available at [https://github.com/aradha/convrfm](https://github.com/aradha/convrfm).
2310.19142
MAG-GNN: Reinforcement Learning Boosted Graph Neural Network
While Graph Neural Networks (GNNs) recently became powerful tools in graph learning tasks, considerable efforts have been spent on improving GNNs' structural encoding ability. A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success. However, such effectiveness sacrifices the efficiency of GNNs by enumerating all possible subgraphs. In this paper, we analyze the necessity of complete subgraph enumeration and show that a model can achieve a comparable level of expressivity by considering a small subset of the subgraphs. We then formulate the identification of the optimal subset as a combinatorial optimization problem and propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem. Starting with a candidate subgraph set, MAG-GNN employs an RL agent to iteratively update the subgraphs to locate the most expressive set for prediction. This reduces the exponential complexity of subgraph enumeration to the constant complexity of a subgraph search algorithm while keeping good expressivity. We conduct extensive experiments on many datasets, showing that MAG-GNN achieves competitive performance to state-of-the-art methods and even outperforms many subgraph GNNs. We also demonstrate that MAG-GNN effectively reduces the running time of subgraph GNNs.
Lecheng Kong, Jiarui Feng, Hao Liu, Dacheng Tao, Yixin Chen, Muhan Zhang
2023-10-29T20:32:21Z
http://arxiv.org/abs/2310.19142v1
# MAG-GNN: Reinforcement Learning Boosted Graph Neural Network ###### Abstract While Graph Neural Networks (GNNs) recently became powerful tools in graph learning tasks, considerable efforts have been spent on improving GNNs' structural encoding ability. A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success. However, such effectiveness sacrifices the efficiency of GNNs by enumerating all possible subgraphs. In this paper, we analyze the necessity of complete subgraph enumeration and show that a model can achieve a comparable level of expressivity by considering a small subset of the subgraphs. We then formulate the identification of the optimal subset as a combinatorial optimization problem and propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem. Starting with a candidate subgraph set, MAG-GNN employs an RL agent to iteratively update the subgraphs to locate the most expressive set for prediction. This reduces the exponential complexity of subgraph enumeration to the constant complexity of a subgraph search algorithm while keeping good expressivity. We conduct extensive experiments on many datasets, showing that MAG-GNN achieves competitive performance to state-of-the-art methods and even outperforms many subgraph GNNs. We also demonstrate that MAG-GNN effectively reduces the running time of subgraph GNNs. ## 1 Introduction Recent advances in Graph Neural Networks (GNNs) greatly assist the rapid development of many areas, including drug discovery [2], recommender systems [31], and autonomous driving [6]. The power of GNNs has primarily been attributed to their Message-Passing Paradigm [12]. The Message-Passing Paradigm simulates a 1-dimensional Weisfeiler-Lehman (1-WL) algorithm for graph isomorphism testing. Such a simulation allows GNNs to encode rich structural information. In many fields, structural information is crucial to determine the properties of a graph. However, as Xu _et al._[32] pointed out, GNN's structure encoding capability, or its expressivity, is also upper-bounded by the 1-WL test. Specifically, a message-passing neural network (MPNN) cannot recognize many substructures like cycles and paths and fails to properly learn and distinguish regular graphs. Meanwhile, these substructures are significant in areas including chemistry and biology. To overcome this limitation, considerable effort was spent on investigating more-expressive GNNs. A famous line of work is _subgraph GNNs_[33; 34; 37]. Subgraph GNNs extract rooted subgraphs around every node in the graph and apply MPNN onto the subgraphs to obtain subgraph representations. The subgraph representations are summarized to form the final representation of the graph. Such an approach is theoretically proved to be more expressive than MPNN and achieved superior empirical results. Later work found that subgraph GNNs are still bounded by the 3-dimensional WL test (3-WL) [10]. This raises a natural question: do we really need to enumerate all possible subgraphs to obtain higher expressivity? For example, an MPNN fails to distinguish graphs A and B in Figure 1, as they are 2-regular graphs with identical 1-hop subtrees. Meanwhile, a subgraph GNN will see different subgraphs around nodes in the two graphs. These subgraphs are distinguishable by MPNN, allowing a subgraph GNN to differentiate between graphs A and B. However, we can observe that many subgraphs from the same graph are identical. Specifically, graph A has two types of subgraphs, while graph B only has triangle subgraphs.
As a result, locating a non-triangle subgraph in the top graph enables us to run MPNN once on it to discern the difference between the two graphs. On the contrary, a subgraph GNN takes eight extra MPNN runs for the remaining nodes. This graph pair shows that we can obtain discriminating power equal to that of a subgraph GNN without enumerating all subgraphs. We also include advanced examples with more complex structures in Section 3. Therefore, we propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) based method, to leverage this property and locate the discriminative subgraphs effectively. Specifically, we start with a candidate set of subgraphs randomly selected from all rooted subgraphs. The root node features of each subgraph are initialized uniquely. In every step, each target subgraph in the candidate set is substituted by a new subgraph with more distinguishing power. MAG-GNN achieves this by mapping each target subgraph to a Q-Table, representing the expected reward of replacing the target subgraph with another potential subgraph. It then selects the subgraph that maximizes the reward. MAG-GNN repeats the process until it identifies the set of subgraphs with the highest distinguishing power. The resulting subgraph set is then passed to a prediction GNN for downstream tasks. MAG-GNN reduces subgraph GNN's exponentially complex enumeration procedure to an RL searching process with constant steps. This effectively constrains the computational cost while retaining expressivity. We conduct extensive experiments on synthetic and real-world graph datasets and show that MAG-GNN achieves competitive performance to state-of-the-art (SOTA) methods and even outperforms subgraph GNNs on many datasets with a shorter runtime. Our work shows that partial subgraph information is sufficient for good expressivity, and MAG-GNN smartly locates expressive subgraphs and achieves the same goal with better efficiency. Figure 1: Comparison of two simple graphs. ## 2 Preliminaries A graph can be represented as \(G=\{V,E,X\}\), where \(V\) is the set of nodes and \(E\subseteq V\times V\) is the set of edges. Let \(V(G)\) and \(E(G)\) represent the node and edge sets of \(G\), respectively. Nodes are associated with features \(X=\{\mathbf{x}_{v}|\forall v\in V\}\). An MPNN \(g\) can be decomposed into \(T\) layers of COMBINE and AGGREGATE functions. Each layer uses the COMBINE function to update the current node embedding from its previous embedding and the AGGREGATE function to process the node's neighbor embeddings. Formally, \[\mathbf{m}_{v}^{(t)}=\text{AGGREGATE}^{(t)}(\{\{\mathbf{h}_{u}^{(t-1)},u\in\mathcal{N}(v)\}\}),\quad\mathbf{h}_{v}^{(t)}=\text{COMBINE}^{(t)}(\mathbf{m}_{v}^{(t)},\mathbf{h}_{v}^{(t-1)}) \tag{1}\] where \(\mathbf{h}_{v}^{(t)}\) is the node representation after \(t\) iterations, \(\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\), \(\mathcal{N}(v)\) is the set of direct neighbors of \(v\), and \(\mathbf{m}_{v}^{(t)}\) is the message embedding. \(\mathbf{h}_{v}^{(T)}\) is used to form node, edge, and graph-level representations. We use \(H=g(G)\) to denote the generated node embeddings of MPNN. MPNN's variants differ mainly by their AGGREGATE and COMBINE functions but are all bounded by 1-WL in expressivity. This paper adopts the following formulation for subgraph GNNs. For a graph \(G\), a subgraph GNN first enumerates all \(k\)-order node tuples \(\{\mathbf{v}|\mathbf{v}\in V^{k}(G)\}\) and creates \(|V^{k}(G)|\) copies of the graph. A
graph associated with node tuple \(\mathbf{v}\) is represented by \(G(\mathbf{v})\). Node-rooted subgraph GNNs adopt a 1-order policy and have \(O(V(G))\) graphs; edge-rooted subgraph GNNs adopt a 2-order policy and have \(O(V^{2}(G))\) graphs. Note that we are not taking exact subgraphs here, so we need to mark the node tuples on the copied graphs to maintain the subgraph effect. Specifically, \[[X(\mathbf{v})]_{l,p}=\begin{cases}c^{+}&\text{if $v_{l}=[\mathbf{v}]_{j}$ and $p=j$}\\ c^{-}&\text{otherwise}\end{cases}\quad\mathbf{v}\in V^{k}(G), \tag{2}\] \(X(\mathbf{v})\in\mathbb{R}^{|V|\times k}\) and \(G(\mathbf{v})=\{V,E,X\oplus X(\mathbf{v})\}\), where \(\oplus\) means row-wise concatenation. We use square brackets to index into a sequence. All entries in \(X(\mathbf{v})\) are labeled as \(c^{-}\) except those appearing at corresponding positions of the node tuple. An MPNN \(g\) is applied to every graph, and we use a pooling function to obtain the collective embedding of the graph: \[f_{s}(G)=R^{(P)}(\{g(G(\mathbf{v}))|\forall\mathbf{v}\in V^{k}(G)\}),\quad P\in\{G,N\}. \tag{3}\] \(f_{s}(G)\) can be a vector of graph representation if \(R^{(P)}\) is a graph-level pooling function (\(P\) equals \(G\)) or a matrix of node representations if \(R^{(P)}\) is node-level (\(P\) equals \(N\)). This essentially implements \(k\)-dimensional ordered subgraph GNN (\(k\)-OSAN) defined in [26] and captures most of the popular subgraph GNNs, including NGNN [35] and ID-GNN [33]. Furthermore, the expressivity of subgraph GNNs increases with larger values of \(k\). Since node tuples, node-marked graphs, and subgraphs refer to the same thing, we use these terms interchangeably. **The Weisfeiler-Lehman hierarchy (WL hierarchy).** The \(k\)-dimensional Weisfeiler-Lehman algorithm is vital in graph isomorphism testing. Earlier work established the hierarchy where the \((k+1)\)-WL test is more expressive than the \(k\)-WL test [4]. Xu _et al._[32] and Morris _et al._[23] connected GNN expressivity to the WL test and proved that MPNN is bounded by the 1-WL test. Later work discovered that all node-rooted subgraph GNNs are bounded by 3-WL expressivity [10], which cannot identify strongly regular graphs and structures like 4-cycles. Qian _et al._[26] introduced the \(k\)-dimensional ordered-subgraph WL (\(k\)-OSWL) hierarchy, which is comparable to the \(k\)-WL test. **Deep Q-learning (DQN).** DQN [19] is a robust RL framework that uses a deep neural network to approximate the Q-values, representing the expected rewards for a specific action in a given state. Accumulating sufficient experience with the environment, DQN can make decisions in intricate state spaces. For a detailed introduction to DQN, please refer to Appendix B. ## 3 Motivation Figure 1 shows that while an MPNN encodes rooted subtrees, a subgraph GNN encodes rooted subgraphs around each node. This allows subgraph GNNs to differentiate more graphs at the cost of \(O(|V|)\) MPNN runs. Meanwhile, subgraph GNNs are still bounded by the 3-WL test. Hence, we may need at least 2-node-tuple-rooted (e.g., edge-rooted) subgraph GNNs, requiring \(O(|V^{2}|)\) MPNN runs, to obtain better expressivity. In fact, the required complexity of subgraph GNNs to break beyond \(k\)-WL expressivity grows exponentially with \(k\). The high computational cost of highly expressive models prevents them from being widely applied to real-world datasets.
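As a minimal sketch of the marking scheme in Eq. (2) (illustrative code, with \(c^{+}=1\) and \(c^{-}=0\) as assumed constants), the features \(X(\mathbf{v})\) can be built and concatenated to the node features as follows.

```python
import torch

def mark_tuple(X, v, c_plus=1.0, c_minus=0.0):
    """Build X(v) from Eq. (2) and append it to the node features X.
    X: (|V|, f) node features, v: length-k node tuple (list of node ids)."""
    n, k = X.shape[0], len(v)
    Xv = torch.full((n, k), c_minus)
    for j, node in enumerate(v):
        Xv[node, j] = c_plus          # mark node [v]_j at position j
    return torch.cat([X, Xv], dim=1)  # X concatenated with X(v), shape (|V|, f + k)

X = torch.randn(6, 4)                 # 6 nodes, 4 input features
Xm = mark_tuple(X, [2, 5])            # a 2-order node tuple
print(Xm.shape)                       # torch.Size([6, 6])
```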
A natural question is, _can we consider only a small subset of all the subgraphs to obtain similar expressivity_, just like in Figure 1, where one subgraph is as powerful as the collective information of all subgraphs? This leads us to the following experiment. We focus on the SR25 dataset. It contains 15 different strongly-regular graphs with the same configuration, each of 25 nodes. The goal is to do multi-class classification to distinguish all pairs of graphs. Since node-rooted subgraph GNNs are upper-bounded by 3-WL and 3-WL cannot distinguish any strongly regular graphs with the same configuration, node-rooted subgraph GNNs will generate identical representations for the 15 graphs while performing 25 MPNN runs each. 2-node-tuple subgraph GNNs have expressivity beyond 3-WL and can distinguish any pair of graphs from the dataset, but it takes 625 MPNN runs. Figure 2: Sorted scores. To test if every subgraph is required, we train an MPNN on _randomly sampled 2_-node-marked graphs to minimize the expected loss to label \(y\) of the unmarked graph \(G\), \[\min_{g_{p}}\mathbb{E}_{\mathbf{v}}[\mathcal{L}(\mathrm{MLP}(f_{r}(G,\mathbf{v})),y)],\quad f_{r}(G,\mathbf{v})=R^{(G)}(g_{p}(G(\mathbf{v}))),\quad\mathbf{v}\in V^{2}(G), \tag{4}\] where \(\mathcal{L}\) is the loss function, MLP is a multi-layer perceptron, \(g_{p}\) is an MPNN, and \(R^{(G)}\) pools the node representations to graph representations. Unlike 2-node-tuple-rooted subgraph GNNs that run the MPNN \(|V^{2}|\) times for each graph, this model runs the MPNN exactly once. During testing, for each of the 15 graphs, we randomly sample one 2-node-marked graph for classification. We perform ten independent tests, and the average test accuracy is 66.8%. Using 2-node-marked graphs, with only one GNN run, it already outperforms node-rooted subgraph GNNs that fail on this dataset. More interestingly, for each graph \(G\), we can sort the classification scores of its \(|V^{2}|\) possible node-marked graphs and plot them in Figure 2 (C14 and C15 are the plots for the 14th and 15th graphs in the dataset). Note that the horizontal axis is not the number of subgraphs; it is the index of subgraphs after sorting by their _individual_ classification scores. We see that each original graph has many marked graphs with a classification score close to one. That means even in one of the most difficult datasets, we still can find particular node-marked graphs that uniquely distinguish the original graph from others. Moreover, unlike the example in Figure 1 with only two types of subgraphs, these marked graphs fall into many different isomorphism groups, meaning that the same observation holds in more complex graphs and can be applied to a wide range of graph classes. We prove that such a phenomenon exists in most regular graphs, which cannot be distinguished by MPNNs (proof in Appendix A). **Theorem 1**.: _Let \(G_{1}\) and \(G_{2}\) be two graphs uniformly sampled from all \(n\)-node, \(r\)-regular graphs where \(3\leq r<\sqrt{2\log n}\).
Given an injective pooling function \(R^{(G)}\) and an MPNN \(g_{p}\) of 1-WL expressivity, with a probability of at least \(1-o(1)\), there exists a node (tuple) \(\mathbf{v}\in V(G_{1})\) whose corresponding node marked graph's embedding, \(f_{r}(G_{1},\mathbf{v})\), is different from any node marked graph's embedding in \(G_{2}\)._ These observations show that by finding discriminative subgraphs effectively, we only need to apply MPNN to a much smaller subset of the large complete subgraph set to get a close level of expressivity. ## 4 Magnetic graph neural network We formulate the problem of finding the most discriminative subgraphs as a combinatorial optimization problem. Given a budget of \(m\) as the number of subgraphs, \(k\) as the order of node tuples, and \(g_{p}\) an MPNN that embeds individual subgraphs, we minimize the following individual loss to graph \(G\), \[\min_{U=(\mathbf{v}_{1},\dots,\mathbf{v}_{m})\in(V^{k}(G))^{m}} \mathcal{L}(\mathrm{MLP}(f_{p}(G,U)),\mathbf{y}) \tag{5}\] \[f_{p}(G,U)=R^{(P)}(\{g_{p}(G(\mathbf{v}))|\forall\mathbf{v}\in U\}),\] Note that this formulation resembles that of subgraph GNNs in Equation (3); we are only substituting \(V^{k}\) with \(U\) to reduce the number of MPNN runs. Witnessing the great success of deep RL in combinatorial optimization, we adapt Deep Q-learning (DQN) to our setting. We introduce each component of our DQN framework as follows. **State Space:** For graph \(G\), a state is \(m=|U|\) node tuples, their corresponding node-marked graphs, and an \(m\)-by-\(w\) matrix \(W\) to record the state of the \(m\) node tuples. Our framework should generalize to arbitrary graphs. Hence, a state \(s\) is defined as, \[s=(G,U,W)=(G,(\mathbf{v}_{1},...,\mathbf{v}_{m}),W),s\in S=\mathcal{G}\times(\mathcal{V}^{k})^{m}\times(\mathbb{R}^{m\times w}) \tag{6}\] \(S\) is the state space, \(\mathcal{G}\) is the set of all graphs, and \(\mathcal{V}^{k}\) is the set of all possible \(k\)-node tuples of \(\mathcal{G}\). To generate the initial state during training, we sample one graph \(G\) from the training set and randomly sample \(m\) node tuples from \(V^{k}(G)\). The state matrix \(W\) is initialized to \(\mathbf{0}\). The expressivity grows as \(k\) grows. Generally, MAG-GNN with larger \(k\) produces more unique graph embeddings, which is harder to train but might require smaller \(m\) and fewer RL steps to represent the graph, leading to better inference time. However, for some datasets, such expressivity is excessive and poses great challenges to training. A smaller \(k\) can reduce the sample space and stabilize training in this case. **Action Space:** We define one RL agent action as selecting one index from one node tuple and replacing the node on that index with another node in the graph. This replaces the node-marked graph corresponding to the original node tuple with the one corresponding to the modified node tuple. Specifically, an action \(a_{i,j,l}\) on state \(s=(G,U,W)\) does the following on \(U\): \[U^{\prime}=a_{i,j,l}(U)=(\mathbf{v}_{1},\ldots,\mathbf{v}_{i-1},\mathbf{v}^{\prime}_{i},\mathbf{v}_{i+1},\ldots,\mathbf{v}_{m}),\quad\mathbf{v}^{\prime}_{i}=([\mathbf{v}_{i}]_{1},\ldots,[\mathbf{v}_{i}]_{j-1},v_{l},[\mathbf{v}_{i}]_{j+1},\ldots,[\mathbf{v}_{i}]_{k}) \tag{7}\] The agent selects a target node tuple \(\mathbf{v}_{i}\), whose \(j\)-th node is replaced with node \(v_{l}\in V\). \(W\) is then updated by an arbitrary state update function \(W^{\prime}=f_{W}(s,U^{\prime})\) depending on the old state and the new node tuples.
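A minimal sketch of the action in Eq. (7), with node tuples stored as plain Python tuples (the helper `apply_action` is an illustrative name, not the paper's code):

```python
from typing import List, Tuple

NodeTuple = Tuple[int, ...]

def apply_action(U: List[NodeTuple], i: int, j: int, l: int) -> List[NodeTuple]:
    """Eq. (7): replace the j-th entry of tuple v_i with node l, leaving
    all other tuples untouched."""
    U_new = list(U)
    v = list(U[i])
    v[j] = l
    U_new[i] = tuple(v)
    return U_new

U = [(0, 3), (2, 5)]                    # m = 2 node tuples of order k = 2
print(apply_action(U, i=1, j=0, l=4))   # [(0, 3), (4, 5)]
# The state matrix would then be refreshed, e.g. W' = f_W(s, U'), where f_W
# may be as simple as a pooling over embeddings of the newly marked nodes.
```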
The update function \(f_{W}\) is not necessarily trainable (e.g., it can simply be a pooling function on embeddings of the marked nodes over states). The next state is \(s^{\prime}=(G,U^{\prime},W^{\prime})\). The action space is then \(A=[m]\times[k]\times\mathcal{V}\). Actions only change the node tuple while keeping the graph structure, and the state matrix serves as a tracker of past states and actions. Unlike stochastic RL systems, our RL actions have deterministic outcomes. The intuition behind the design of the action space is that it limits the total number of actions to \(O(m|V|k)\), which is linear in the number of nodes, and \(k\) is usually small (\(k=2\) is already more expressive than most subgraph GNNs). We can further reduce the action space to \(O(|V|k)\) if the agent does not need to select the update target but uses a Q-network to do Equation (7) on a given \(\mathbf{v}_{i}\). In such a case, we either sequentially or simultaneously update all node tuples. Since the agent learns to be stationary when a node tuple should not be updated, we do not lose the power of our agent by the reduction. The overall action space design allows efficient computation of Q-values. We include a detailed discussion on the action space and alternative designs in Appendix C.1. **Reward:** In Section 3, we show that a proper node-marked graph significantly boosts the expressivity. Hence, an optimal reward choice is the increased expressivity from the action. However, expressivity is itself vaguely defined, and we can hardly quantify it. Instead, since the relevant improvement in expressivity should affect the objective value, we choose the objective value improvement as the instant reward. Specifically, let \(s=(G,U,W)\) be the current state and let \(s^{\prime}=(G,U^{\prime},W^{\prime})=a(s)\) be the outcome state of action \(a\); the reward \(r\) is \[r(s,a,s^{\prime})=\mathcal{L}(\text{MLP}(f_{p}(G,U)),\mathbf{y})-\mathcal{L}(\text{MLP}(f_{p}(G,U^{\prime})),\mathbf{y}) \tag{8}\] This reward correlates the action directly with the objective function, allowing our RL agent to be task-dependent and more flexible for different levels of tasks. **Q-Network:** Because our state includes graphs, we require an equivariant Q-network to output consistent Q-tables for actions. Hence, we choose MPNN to parameterize the Q-network. Specifically, for the current state \(s=(G,U,W)\) and the target node tuple \(\mathbf{v}\in U\), we have the Q-table as, \[[Q(s,\mathbf{v})]_{l}=\text{MLP}([g_{rl}(G(\mathbf{v}))]_{l}\oplus\sum_{\mathbf{u}\in U}R^{(G)}(g_{rl}(G(\mathbf{u})))\oplus R^{(W)}(W)) \tag{9}\] Row \(l\) in the Q-table is computed from the embedding of node \(v_{l}\) in the node-marked graph by an MPNN \(g_{rl}\), the current overall graph representation across all node tuples, and the state matrix \(W\) summarized by a pooling function \(R^{(W)}\). \([Q]_{l,j}\) represents the expected reward of replacing the node on index \(j\) of node tuple \(\mathbf{v}\) with node \(v_{l}\). The best action \(a_{j,l}\) is then chosen by \[\operatorname*{arg\,max}_{j,l}[Q(s,\mathbf{v})]_{l,j} \tag{10}\] Note that because we assign different initial embeddings based on the node tuple, the MPNN distinguishes otherwise indistinguishable graphs. Figure 3: MAG-GNN’s pipeline. An RL agent iteratively updates node tuples for better expressivity. As demonstrated in Figure 3, our agent starts with a set of random node tuples and their corresponding subgraphs.
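Before stepping through the pipeline, here is a minimal sketch of Eqs. (9)-(10) for a single target tuple; `QNet` is an illustrative module, and the node, graph, and state embeddings are assumed to come from the MPNN \(g_{rl}\) and the pooling functions \(R^{(G)}\) and \(R^{(W)}\), which are not implemented here.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Sketch of Eqs. (9)-(10): a (|V|, k) Q-table for one target tuple."""
    def __init__(self, h, w, k):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * h + w, 64), nn.ReLU(), nn.Linear(64, k))

    def forward(self, node_emb, graph_emb, W_state):
        # node_emb: (|V|, h) embeddings of the marked graph G(v) from an MPNN g_rl
        # graph_emb: (h,) graph representation summed over all tuples in U
        # W_state: (w,) pooled state matrix R^(W)(W)
        n = node_emb.shape[0]
        ctx = torch.cat([graph_emb, W_state]).expand(n, -1)  # shared context per node
        return self.mlp(torch.cat([node_emb, ctx], dim=1))   # row l, column j = Q of (v_l, slot j)

qnet = QNet(h=32, w=8, k=2)
Q = qnet(torch.randn(10, 32), torch.randn(32), torch.randn(8))
l, j = divmod(Q.argmax().item(), Q.shape[1])                 # Eq. (10): greedy (node, slot)
print("move slot", j, "to node", l)
```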
In each step, the agent uses an MPNN-parameterized Q-network to update one slot in one node tuple such that the new node tuple set results in a better objective value. The agent repeats for a fixed number of steps \(t\) to find discriminative subgraphs. We do not assign a terminal state during training. Instead, the Q-Network will learn to be stationary when all other actions decrease the objective value. This process is like throwing iron balls (marked nodes) into a magnetic field (the geometry of the graph) and computing the force that the balls receive along with the interactions among balls (Q-network). We learn to move the balls and reason about the magnetic field's properties. Hence, we dub our method Magnetic GNN (MAG-GNN). To show the effectiveness of our method in locating discriminative node tuples, we prove the following theorem (proof in Appendix A). **Theorem 2**.: _There is a MAG-GNN whose action is more powerful in identifying discriminative node tuples than random actions._ MAG-GNN is at least as powerful as random actions since we can adopt a uniform-output MPNN for the MAG-GNN, yielding random actions. The superiority proof identifies cases where MAG-GNN requires fewer steps to locate the discriminative node tuples. The overall inference complexity, measured in MPNN operations, is \(O(mtT|V^{2}|)\). A more detailed complexity analysis is in Appendix D. Some previous works also realize the complexity limitation of subgraph GNNs and propose sampling-based methods, and we discuss their relationship to MAG-GNN. PF-GNN [7] uses particle filtering to sample from the canonical labeling tree. MAG-GNN and PF-GNN do not overlap exactly. However, we show that using the same resampling process, MAG-GNN captures PF-GNN (Appendix A). **Theorem 3**.: _MAG-GNN captures PF-GNN using the same resampling method in PF-GNN._ k-OSAN [26] proposes a data-driven subgraph sampling strategy to find informative subgraphs by another MPNN. This strategy reduces to random sampling when the graph requires higher expressivity (e.g., regular graphs) and has no features, because the MPNN will generate the same embedding for all nodes and hence cannot identify subgraphs that most benefit the prediction like MAG-GNN can. MAG-GNN does not solely depend on the data and finds more expressive subgraphs even without node features. Moreover, sampled subgraphs in previous methods are essentially independent. In contrast, MAG-GNN also models their correlation using RL. This allows MAG-GNN to obtain better expressivity with fewer samples and better consistency (more discussion in Appendix C.3). ### Training MAG-GNN With the state and action space, reward, and Q-network defined, we can use any DQN technique to train the RL agent. However, to evaluate the framework's capability, we select the basic Experience Replay method [19] to train the Q-network. MAG-GNN essentially consists of two systems, an RL agent and a prediction MPNN. Making the two coordinate well is the more critical part of the method. The most natural way to train our system is first to train \(g_{p}\), as introduced in Section 3, with random node tuples. We then use \(g_{p}\) as part of \(f_{p}\), the marked-graphs encoder, and treat \(f_{p}\) as the fixed environment to train our Q-network. The advantage of this paradigm is that the environment is stable. Consequently, all experiences stored in the memory have the correct reward value for the action. This encourages stability and fast convergence during RL training. We term this paradigm ORD for ordered training.
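To make the ORD recipe concrete, the following is a deliberately tiny, runnable stand-in: the frozen \(g_{p}\) is abstracted as a fixed per-node loss table, states are one-hot markings of a single node (\(k=1\), \(m=1\)), and a small Q-network is trained with epsilon-greedy exploration and experience replay using the Eq. (8) reward. None of the graph machinery is included; the sketch only shows how stage 2 trains the agent against a fixed environment.

```python
import random
import torch
import torch.nn as nn

# Toy stand-in for ORD stage 2: the frozen prediction model g_p is abstracted
# as a fixed per-node loss table, and the agent learns where to move the mark.
n_nodes, n_steps, episodes = 8, 4, 300
loss_table = torch.rand(n_nodes)               # loss of marking node v under frozen g_p
qnet = nn.Sequential(nn.Linear(n_nodes, 32), nn.ReLU(), nn.Linear(32, n_nodes))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-2)
memory = []

def one_hot(v):
    s = torch.zeros(n_nodes)
    s[v] = 1.0
    return s

for _ in range(episodes):
    v = random.randrange(n_nodes)
    for _ in range(n_steps):
        s = one_hot(v)
        # epsilon-greedy action: move the mark to node a
        a = random.randrange(n_nodes) if random.random() < 0.2 else qnet(s).argmax().item()
        r = (loss_table[v] - loss_table[a]).item()   # Eq. (8): loss improvement
        memory.append((s, a, r, one_hot(a)))
        v = a
    batch = random.sample(memory, min(32, len(memory)))  # experience replay [19]
    s_b = torch.stack([b[0] for b in batch])
    a_b = torch.tensor([b[1] for b in batch])
    r_b = torch.tensor([b[2] for b in batch])
    s2_b = torch.stack([b[3] for b in batch])
    with torch.no_grad():
        target = r_b + 0.9 * qnet(s2_b).max(dim=1).values
    q = qnet(s_b).gather(1, a_b[:, None]).squeeze(1)
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# With enough episodes, the greedy action should approach the lowest-loss node.
print("greedy pick:", qnet(one_hot(0)).argmax().item(), "| true best:", loss_table.argmin().item())
```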
However, not all \(g_{p}\) trained from random node tuples are _good_. When we train \(g_{p}\) for the ZINC dataset and evaluate all node-marked graphs as in Section 3, the average standard deviation among all scores of node-marked graphs is \(\sim\)\(0.003\), and the average difference between the worst and best score is \(\sim\)\(0.007\). Hence, the maximal improvement is minimal if we use this MPNN as the environment. Intuitively, when the graph has informative initial features, \(X\), like those in the ZINC dataset, the MPNN quickly recognizes patterns from these features while gradually learning to ignore the node marking features \(X(\mathbf{v}_{i})\), as not all node markings carry helpful information. In such cases, we need to adjust \(g_{p}\) while the RL agent becomes more powerful in finding discriminative subgraphs. One way is to train the RL agent and \(g_{p}\) simultaneously. Concretely, we sample a state \(s\), run the RL agent \(t\) steps to update it to state \(s^{t}\), and train \(g_{p}\) on the marked graphs of \(s^{t}\). Then, in the same step, the RL agent samples a different state and treats the current \(g_{p}\) as the environment to generate experience. Lastly, the RL agent is optimized on the sampled previous experiences. Because we adjust \(g_{p}\) to capture node tuple information better, the score standard deviation of the node-marked ZINC graphs is kept at \(\sim\)\(0.036\). We term this paradigm SIMUL. Compared to ORD, SIMUL makes the RL agent more difficult to train when \(g_{p}\) evolves rapidly. Nevertheless, we observe that as \(g_{p}\) gradually becomes stable, the RL agent can still learn effective policies. One of the critical goals of MAG-GNN is to identify complex structural information that MPNN cannot capture. Hence, instead of training the agent on real-world datasets from scratch, we can transfer the knowledge from synthetic expressivity data to real-world data. As mentioned above, training MAG-GNN on graphs with features is difficult. Alternatively, we first use the ORD paradigm to train the agent on expressivity data without node features, such as SR25. On these datasets, \(g_{p}\) relies on the node markings to make correct predictions. We then fix the RL agent and only use the output state from the agent to train a new \(g_{p}\) for the actual tasks, such as ZINC graph regression. Using this paradigm, we only need to train one MAG-GNN with good expressivity and adapt it to different tasks without worrying about overfitting and the aforementioned stability issue; we name this paradigm PRE. We experimented with different schemes in Section 6. ## 5 Related work **More expressive GNNs.** A substantial amount of work strives to improve the expressivity of GNNs. They can be classified into the following categories: (1) Just like MPNN simulates the 1-WL test, **Higher-order GNNs** design GNNs to simulate higher-order WL tests. They include k-GNN [18], RingGNN [5], and PPGN [20]. These methods perform message passing on node tuples and have complexity that grows exponentially with \(k\), and hence do not scale well to large graphs. (2) Realizing the symmetry-breaking power of subgraphs, **Subgraph GNNs**, including NGNN [35], GNN-AK [37], KPGNN [8], and ID-GNN [33], use MPNNs to encode subgraphs instead of subtrees around graph nodes. Later works, like I\({}^{2}\)-GNN [15], further use 2-order (edge) subgraph information to improve the expressivity of subgraph GNNs beyond 3-WL.
## 5 Related work **More expressive GNNs.** A substantial amount of work strives to improve the expressivity of GNNs; it can be classified into the following categories. (1) Just as MPNNs simulate the 1-WL test, **higher-order GNNs** are designed to simulate higher-order WL tests. They include k-GNN [18], RingGNN [5], and PPGN [20]. These methods perform message passing on node tuples and have complexity that grows exponentially with \(k\), and hence do not scale well to large graphs. (2) Realizing the symmetry-breaking power of subgraphs, **subgraph GNNs**, including NGNN [35], GNN-AK [37], KPGNN [8], and ID-GNN [33], use MPNNs to encode subgraphs instead of subtrees around graph nodes. Later works, like I\({}^{2}\)-GNN [15], further use 2-order (edge) subgraph information to improve the expressivity of subgraph GNNs beyond 3-WL. Recent works, such as OSAN [26] and SUN [10], unify subgraph GNNs into the WL hierarchy, showing that improving the expressivity of subgraph GNNs also requires exponentially more subgraphs. (3) **Substructure counting** methods, including GSN [3] and MotifNet [22], employ substructure counting in GNNs. They use the counts of predefined substructures undetectable by the 1-WL test as extra features, letting the MPNN break its expressivity limit; however, designing the relevant substructures usually requires human expertise. (4) Many previous works also recognize the complexity issue of more expressive GNNs and strive to reduce it. SetGNN [38] proposes to reduce node tuples to sets and thus reduce the number of nodes in message passing. GDGNN [16] designs geodesic pooling functions that attain strong expressivity without running the MPNN multiple times. (5) **Non-equivariant GNNs.** Abboud _et al._ [1] prove the universality of MPNNs with randomly initialized node features, but due to the large search space, such expressivity is hardly achieved. DropGNN [24] randomly drops nodes from the graph to break symmetries in graphs. PF-GNN [7] implements a neural version of canonical labeling and uses particle filtering to sample branches in the labeling process. OSAN [26] proposes to use input features to select important subgraphs. Agent-based GNN [21] initializes multiple agents on a graph, outside the message-passing paradigm, to iteratively update node embeddings. MAG-GNN also falls into this category. **Reinforcement learning and GNNs.** There has been considerable effort to combine RL and GNNs, most of it application-driven. GraphNAS [11] uses a GNN to encode neural architectures and reinforcement learning to search for the best network. Wang _et al._ [30] use GNNs to model circuits and RL to adjust transistor parameters. On graph learning, most works use RL to optimize particular parameters in a GNN. For example, SUGAR [27] uses Q-learning to learn the best top-k subgraphs for aggregation; Policy-GNN [17] learns to pick the best number of hops for aggregating node features; GPA [13] uses deep Q-learning to locate valuable nodes in active search. These works leverage node features to improve empirical performance but fail to identify graphs with symmetries, while MAG-GNN has good expressivity even without node features. To the best of our knowledge, our work is the first to apply reinforcement learning to the graph expressivity problem. ## 6 Experimental results In the experiments, we answer the following questions: **Q1**: Does MAG-GNN have good expressivity, and is the RL agent's output more expressive than random ones? **Q2**: MAG-GNN is graph-level; can the expressivity generalize to node-level tasks? **Q3**: How does the RL method perform on real-world datasets? **Q4**: Does MAG-GNN have the claimed computational advantages? In all experiments, we update all node tuples simultaneously, as this allows more parallelizable computation. More experiment and dataset details can be found in Appendix F. ### Discriminative power **Dataset.** To answer **Q1**, we use synthetic datasets (metric: Accuracy) to test the expressivity of models. (1) EXP contains 600 pairs of non-isomorphic graphs that cannot be distinguished by 1-WL/2-WL-bounded GNNs; the task is to differentiate all pairs. (2) SR25 contains 15 strongly regular graphs with the same configuration that cannot be distinguished by 3-WL-bounded GNNs. (3) CSL contains 150 circular skip link graphs in 10 isomorphism classes.
(4) BREC contains 400 pairs of synthetic graphs to test the fine-grained expressivity of GNNs (Appendix E). We use the ORD training scheme. **Results.** We compare against MPNN [32], Random Node Marking (RNM) GNN with the same hyperparameter search space as MAG-GNN, subgraph GNNs [35; 15; 36; 37], and non-equivariant GNNs [1] as baselines. Table 1 shows that MAG-GNN achieves a perfect score on all datasets, which verifies our observation and theory. Note that subgraph GNNs like NGNN take at least \(|V|\) MPNN runs, while MAG-GNN takes a constant number of MPNN runs; even so, MAG-GNN successfully distinguishes all strongly regular graphs in SR25, where NGNN fails. RNI, despite being universal, is challenging to train and can only make random guesses on SR25. Compared to the RNM approach, MAG-GNN finds more representative subgraphs for the SR25 dataset and performs better. Figure 4 shows a graph in the EXP dataset. We observe that MAG-GNN moves the initial node marking on the same component to different components, allowing better structure learning. Another example on strongly regular graphs is in Appendix E. In Figure 5, we also plot the performance of MAG-GNN against the number of search steps when only one node tuple of size two is used. Node tuples from MAG-GNN are significantly more expressive than random node tuples (step 0). On the EXP and CSL datasets, MAG-GNN achieves perfect scores in one or two steps, whereas on SR25 it takes six steps, with a consistent performance increase over the steps. We plot the reward curve during training in Appendix E. **Results.** To answer **Q2**, we evaluate node-level cycle counting. Following Huang _et al._ [15], we say a model has the required counting power if the error is below \(0.01\) and report the results in Table 2. We compare against an MPNN baseline [32], RNM GNN, node-level subgraph GNNs [33; 37; 35], and edge-level subgraph GNNs [15]. We can see that MAG-GNN successfully counts (3,4,5)-cycles, which indicates that node marking also helps non-marked nodes count cycles. We also note that MAG-GNN does not count 6-cycles well, although we use \(>2\)-order node tuples. We suspect this is because MAG-GNN takes the average score improvement over nodes as the reward, which might not be the best reward for a node-level task. Even so, we still observe MAG-GNN's performance improvement over NGNN, which shows that MAG-GNN with larger node tuples indeed increases expressivity. We leave the design of a more proper node-level reward to future work. ### Real-world datasets **Datasets.** To answer **Q3**, we adopt the following real-world molecular property prediction datasets: (1) QM9 (MAE) contains 130k molecules for twelve graph regression targets. (2) ZINC and ZINC-FULL (MAE) include 12k and 250k chemical compounds for graph regression. (3) OGBG-MOLHIV (AUROC) contains 41k molecules, and the goal is to classify whether a molecule inhibits HIV. We use the SIMUL training scheme for a fair comparison to other methods. **Results.** On QM9, MAG-GNN significantly outperforms NGNN on all targets with an average MAE reduction of \(33\%\). It also outperforms I\({}^{2}\)-GNN, which has partially \(>\)3-WL expressivity, on most targets (\(16\%\) average MAE reduction), showing that with far fewer MPNN runs, MAG-GNN can still achieve better performance. This is because, despite using fewer subgraphs, we use node tuples of size greater than two, which grants MAG-GNN the power to distinguish graphs that require higher expressivity. MAG-GNN also performs comparably to 1-2-3-GNN, which simulates 3-WL.
We observe that 1-2-3-GNN performs well on the last five targets (global properties that are hard for subgraph GNNs) while being outcompeted by MAG-GNN on the remaining targets. We suspect that 1-2-3-GNN has constantly high expressivity, which may easily lead to overfitting during training, whereas MAG-GNN can automatically adjust its expressivity through subgraph selection, reducing the risk of overfitting. The results on the other molecular datasets are shown in Table 6. We see that MAG-GNN outperforms the base GNN by a large margin, showing its better expressivity. Another important comparison is between MAG-GNN and other non-equivariant GNNs, including PF-GNN, k-OSAN, and RNM. MAG-GNN achieves significantly better results on ZINC, where only models with high expressivity obtain good performance; this verifies that using an RL agent to capture the inter-subgraph relation is essential in finding expressive subgraphs. MAG-GNN does not perform as well on the OGBG-MOLHIV dataset. We observe that the 1-WL-bounded method PNA also achieves good results on this dataset, meaning that the critical factor determining performance here is likely the implementation of the base GNN rather than its expressivity. MAG-GNN is highly adaptable to any base GNN, which could improve its performance further; we leave this to future work. We use the PRE training scheme to conduct transfer learning on the ZINC, ZINC-FULL, and OGBG-MOLHIV datasets. We pre-train on the expressivity datasets shown in the left column of Table 3 and train the attached MPNN head using the datasets on the top row. We see that pre-training consistently brings performance improvements on all datasets. Models pre-trained on CYCLE are generally better than those pre-trained on SR25, possibly due to the abundance of cycles in molecular graphs. \begin{table} \begin{tabular}{l c c c} \hline \hline Time (ms) & ZINC-FULL & CYCLE & QM9 \\ \hline MPNN & 100.1 & 58.4 & 222.9 \\ NGNN & 402.9 & 211.7 & 776.8 \\ PGNN & 1864.1 & 1170.4 & 3524.0 \\ PFGN & 2097.3 & 1196.8 & 4108.7 \\ \hline MAG-GNN & 385.8 & 155.1 & 704.9 \\ \hline \hline \end{tabular} \end{table} Table 4: Inference time. \begin{table} \begin{tabular}{l|c c c c c|c} \hline \hline Target & RNM & 1-2-3-GNN [23] & PPGN [20] & NGNN [35] & I\({}^{2}\)-GNN [15] & MAG-GNN \\ \hline Comp. & \(O(kT|V|^{2})\) & \(O(T|V|^{4})\) & \(O(T|V|^{3})\) & \(O(T|V|^{3})\) & \(O(T|V|^{4})\) & \(O(mtT|V|^{2})\) \\ \hline \(\mu\) & 0.426 & 0.476 & **0.231** & 0.428 & 0.428 & 0.353 \\ \(\alpha\) & 0.306 & 0.27 & 0.382 & 0.230 & 0.230 & **0.226** \\ \(\epsilon_{\text{HOMO}}\) & 0.00258 & 0.00337 & 0.00276 & 0.00265 & 0.00261 & **0.00257** \\ \(\epsilon_{\text{LUMO}}\) & 0.00269 & 0.00351 & 0.00287 & 0.00297 & 0.00267 & **0.00252** \\ \(\Delta\epsilon\) & 0.0047 & 0.0048 & 0.00406 & 0.0038 & 0.0038 & **0.0035** \\ \(\langle R^{2}\rangle\) & 20.9 & 22.9 & 16.07 & 20.5 & 18.64 & **15.44** \\ ZPVE & 0.0002 & 0.00019 & 0.0064 & 0.0002 & **0.00014** & 0.0002 \\ \(U_{0}\) & 0.281 & **0.0427** & 0.234 & 0.295 & 0.211 & 0.111 \\ \(U\) & 0.193 & 0.111 & 0.234 & 0.361 & 0.206 & **0.105** \\ \(H\) & 0.384 & **0.0419** & 0.229 & 0.305 & 0.269 & 0.089 \\ \(G\) & 0.250 & **0.0469** & 0.238 & 0.489 & 0.261 & 0.116 \\ \(C_{v}\) & 0.177 & 0.0944 & 0.184 & 0.174 & **0.073** & 0.093 \\ \hline \hline \end{tabular} \end{table} Table 5: QM9 experimental results on all targets. (\(\downarrow\)) ### Runtime comparison We conducted a runtime analysis on the previously mentioned datasets.
Since it is difficult to strictly match the number of parameters across all models, we fixed the number of GNN layers to five and the embedding dimension to 100. We set a 1 GB memory budget for all models and measured their inference time on the test datasets, using \(m=2\) and \(T=2\) for MAG-GNN. The results in Table 4 show that MAG-GNN is more efficient than all subgraph GNNs and is significantly faster than the edge-rooted subgraph GNN I\({}^{2}\)-GNN. NGNN achieves efficiency comparable to MAG-GNN because it takes a fixed-hop subgraph around each node, reducing the subgraph size; however, MAG-GNN outperforms NGNN on most targets thanks to its better expressivity. **Limitations.** Despite the training schemes in Section 4.1, MAG-GNN remains harder to train. Also, MAG-GNN's current reward design may not be well suited to node-level tasks. This motivates research on extending MAG-GNN to node-level or even edge-level tasks; we discuss this further in Appendix C.4. ## 7 Conclusions In this work, we closely examine one popular GNN paradigm, subgraph GNNs, and discover that a small subset of subgraphs is sufficient for obtaining high expressivity. We then design MAG-GNN, which uses RL to locate such a subset, and propose different schemes to train the RL agent effectively. Experimental results show that MAG-GNN achieves performance highly competitive with subgraph GNNs at significantly lower inference time, opening new pathways for designing efficient GNNs. **Acknowledgement.** Lecheng Kong, Jiarui Feng, Hao Liu, and Yixin Chen are supported by NSF grant CBE-2225809. Muhan Zhang is partially supported by the National Natural Science Foundation of China (62276003) and the Alibaba Innovative Research Program. \begin{table} \begin{tabular}{l|c|c c c} \hline \hline & \# Params & ZINC (\(\downarrow\)) & ZINC-FULL (\(\downarrow\)) & OGBG-MOLHIV (\(\uparrow\)) \\ \hline GIN & - & 0.163\(\pm\)0.004 & 0.088\(\pm\)0.002 & 77.07\(\pm\)1.49 \\ PNA & - & 0.188\(\pm\)0.004 & - & 79.05\(\pm\)1.32 \\ k-OSAN & - & 0.155\(\pm\)0.004 & - & - \\ PF-GNN & - & 0.122\(\pm\)0.010 & - & 80.15\(\pm\)0.68 \\ RNM & 453k & 0.128\(\pm\)0.027 & 0.062\(\pm\)0.004 & 76.79\(\pm\)0.94 \\ GSN & - & 0.115\(\pm\)0.012 & - & 78.80\(\pm\)0.82 \\ CIN & \(\sim\)100k & 0.079\(\pm\)0.006 & **0.022\(\pm\)**0.002 & **80.94\(\pm\)**0.57 \\ NGNN & \(\sim\)500k & 0.111\(\pm\)0.003 & 0.029\(\pm\)0.001 & 78.34\(\pm\)1.86 \\ GNN-AK+ & \(\sim\)500k & 0.080\(\pm\)0.001 & - & 79.61\(\pm\)1.19 \\ SUN & 526k & 0.083\(\pm\)0.003 & - & 80.03\(\pm\)0.55 \\ KPGNN & 489k & 0.093\(\pm\)0.007 & - & - \\ I\({}^{2}\)-GNN & - & 0.083\(\pm\)0.001 & 0.023\(\pm\)0.001 & 78.68\(\pm\)0.93 \\ SSWL+ & 387k & **0.070\(\pm\)**0.005 & **0.022\(\pm\)**0.002 & 79.58\(\pm\)0.35 \\ \hline MAG-GNN & 496k & 0.106\(\pm\)0.014 & 0.030\(\pm\)0.002 & 77.12\(\pm\)1.13 \\ MAG-GNN-PRE & 496k & 0.096\(\pm\)0.009 & 0.023\(\pm\)0.002 & 78.30\(\pm\)1.08 \\ \hline \hline \end{tabular} \end{table} Table 6: Molecular dataset results.
2308.07650
EQ-Net: Elastic Quantization Neural Networks
Current model quantization methods have shown promising capability in reducing storage space and computation complexity. However, due to the diversity of quantization forms supported by different hardware, one limitation of existing solutions is that they usually require repeated optimization for different scenarios. How to construct a model with flexible quantization forms has been less studied. In this paper, we explore a one-shot network quantization regime, named Elastic Quantization Neural Networks (EQ-Net), which aims to train a robust weight-sharing quantization supernet. First of all, we propose an elastic quantization space (including elastic bit-width, granularity, and symmetry) to adapt to various mainstream quantization forms. Secondly, we propose the Weight Distribution Regularization Loss (WDR-Loss) and Group Progressive Guidance Loss (GPG-Loss) to bridge the distribution gaps of weights and output logits across the elastic quantization space. Lastly, we incorporate genetic algorithms and the proposed Conditional Quantization-Aware Accuracy Predictor (CQAP) as an estimator to quickly search for mixed-precision quantized neural networks in the supernet. Extensive experiments demonstrate that our EQ-Net is close to or even better than its static counterparts as well as state-of-the-art robust bit-width methods. Code is available at \href{https://github.com/xuke225/EQ-Net.git}{https://github.com/xuke225/EQ-Net}.
Ke Xu, Lei Han, Ye Tian, Shangshang Yang, Xingyi Zhang
2023-08-15T08:57:03Z
http://arxiv.org/abs/2308.07650v1
# EQ-Net: Elastic Quantization Neural Networks ###### Abstract Current model quantization methods have shown promising capability in reducing storage space and computation complexity. However, due to the diversity of quantization forms supported by different hardware, one limitation of existing solutions is that they usually require repeated optimization for different scenarios. How to construct a model with flexible quantization forms has been less studied. In this paper, we explore a one-shot network quantization regime, named Elastic Quantization Neural Networks (EQ-Net), which aims to train a robust weight-sharing quantization supernet. First of all, we propose an elastic quantization space (including elastic bit-width, granularity, and symmetry) to adapt to various mainstream quantization forms. Secondly, we propose the Weight Distribution Regularization Loss (WDR-Loss) and Group Progressive Guidance Loss (GPG-Loss) to bridge the distribution gaps of weights and output logits across the elastic quantization space. Lastly, we incorporate genetic algorithms and the proposed Conditional Quantization-Aware Accuracy Predictor (CQAP) as an estimator to quickly search for mixed-precision quantized neural networks in the supernet. Extensive experiments demonstrate that our EQ-Net is close to or even better than its static counterparts as well as state-of-the-art robust bit-width methods. Code is available at [https://github.com/xuke225/EQ-Net](https://github.com/xuke225/EQ-Net). ## 1 Introduction Deploying intricate deep neural networks (DNNs) on edge devices with limited resources, such as smartphones or IoT devices, poses a significant challenge due to their demanding computational and memory requirements. Model quantization [13, 28, 33] has emerged as a highly effective strategy to mitigate this challenge. The technique transforms floating-point values into fixed-point values of lower precision, thereby reducing the memory requirements of the DNN model without altering its original architecture. Additionally, computationally expensive floating-point matrix multiplications between weights and activations can be executed more efficiently on low-precision arithmetic circuits, leading to reduced hardware costs and lower power consumption. Despite the evident advantages in terms of power and cost, quantization incurs added noise due to the reduced precision. However, recent research has demonstrated that neural networks can withstand this noise and maintain high accuracy even when quantized to 8 bits using post-training quantization (PTQ) techniques [26, 30, 27, 24, 46]. PTQ is typically efficient and only requires access to a small calibration dataset, but its effectiveness declines when applied to low-bit quantization (\(\leq\) 4 bits) of neural networks. In contrast, quantization-aware training (QAT) [52, 7, 14, 11, 4, 21, 29] has emerged as the prevailing method for achieving low-bit quantization while preserving near full-precision accuracy. By simulating the quantization operation during training or fine-tuning, the network can adapt to the quantization noise and yield better solutions than PTQ. Currently, most AI accelerators support model quantization, but the forms of quantization supported by different hardware platforms are not exactly the same [25].
For example, NVIDIA GPUs adopt channel-wise symmetric quantization in the TensorRT [31] inference engine, while Qualcomm DSPs adopt per-tensor asymmetric quantization in the SNPE [34] inference engine. For conventional QAT methods, the different quantization forms supported by hardware platforms may require repeated optimization of the model when deploying to multiple devices, making quantized-model deployment extremely inefficient. To address this repeated-optimization problem arising from discrepancies in quantization schemes, this paper proposes an elastic quantization space design that encompasses the current mainstream quantization scenarios and classifies them into elastic quantization bit-width (2-bit, 4-bit, 8-bit, etc.), elastic quantization granularity (per-layer quantization, per-channel quantization), and elastic quantization symmetry (symmetric quantization, asymmetric quantization), as shown in Figure 1. This approach enables flexible model deployment under different quantization scenarios by designing a unified quantization formula that integrates the various quantization forms, and by implementing elastic switching of quantization bit-width, granularity, and symmetry through parameter splitting. Inspired by one-shot neural architecture search [5, 51, 44, 48], this paper attempts to train a robust elastic quantization supernet on the constructed elastic quantization space. Unlike in neural architecture search, the elastic quantization supernet is fully parameter-shared: there is no additional weight optimization space arising from differences in network structure. Therefore, training the elastic quantization supernet may encounter the problem of negative gradient suppression [41, 49] caused by the different quantization forms. In other words, samples with inconsistent predictions between quantization configuration A (e.g., 8-bit/per-channel/asymmetric) and quantization configuration B (e.g., 2-bit/per-tensor/symmetric) are considered negative samples by each other, which slows the convergence of the supernet during training. To solve this problem, this paper proposes an efficient training strategy for the elastic quantization supernet. Our goal is to reduce negative gradients by establishing consistency in the weight and logit distributions: (1) we introduce Weight Distribution Regularization (WDR) to apply skewness and kurtosis regularization to the shared weights, better aligning them with the elastic quantization space and establishing weight distribution consistency; (2) we introduce Group Progressive Guidance (GPG) to group the quantization sub-networks and guide them with progressive soft labels during supernet training, establishing consistency in the output logit distributions. As shown in Figure 1, the trained elastic quantization supernet supports both uniform and mixed-precision quantization (MPQ). Compared with previous MPQ works [45, 16, 10, 9, 20], our method can specify any quantization bit-width and form in the elastic quantization space and quickly obtain a quantized model with the corresponding accuracy. With these features, we propose a Conditional Quantization-Aware Accuracy Predictor (CQAP), combined with a genetic algorithm, to efficiently search for Pareto-optimal mixed-precision quantized models under target quantization bit-widths and forms.
## 2 Related Works One-Shot Network Architecture Search. The goal of Neural Architecture Search (NAS) is to search for an optimal architecture within a large architecture search space. The term 'one-shot' alludes to the fact that the subnet population only needs to be trained once. Regarding one-shot NAS methods, Cai et al. [5] proposed a once-for-all (OFA) model that facilitates various architectural settings by decoupling the training and search stages, thereby reducing the computational cost. BigNAS [51] challenges the conventional pipeline by training the supernet using the sandwich rule, constructing a big single-stage model without extra retraining or post-processing. AttentiveNAS [44] improves subnet quality by replacing the original uniform sampling strategy with a Pareto-aware sampling strategy during the training stage, and uses Monte Carlo sampling to accelerate the sampling process. AlphaNet [43] enhances subnet performance by utilizing Alpha divergence to tackle the overestimation of teacher-network uncertainty that arises from the KL divergence. Inspired by this OFA NAS approach, we construct a weight-sharing elastic quantization supernet which includes elastic quantization bit-width, symmetry, and granularity. By training one elastic quantization supernet, a variety of quantized networks with different forms can be obtained to suit different scenarios. Figure 1: A conceptual overview of the EQ-Net approach. Multi-Bit Quantization of Neural Networks. Recently, several research works on multi-bit quantization have caught our attention. For robustness of weights, Milad et al. [1] propose a regularization scheme applied during regular training, which models quantization noise as an additive perturbation bounded in the \(\ell_{\infty}\) norm and controls its first-order effect on the network through the \(\ell_{1}\) norm of the gradients; RobustQuant [38] proves that uniformly distributed weights have a higher tolerance to quantization, with lower sensitivity to the specific quantizer implementation, than normally distributed weights, and proposes kurtosis regularization to enhance quantization robustness. For robust quantization training strategies, AnyPrecision [50] employs DoReFa [52] quantization constraints to train a model but saves it in floating-point form; at runtime, the floating-point model can be directly set to different bit-widths by truncating the least significant bits. CoQuant [39] introduces a collaborative knowledge transfer approach to train a multi-bit quantization network. OQAT [37] presents a bit inheritance mechanism under the OFA framework to progressively reduce the bit-width, allowing higher bit-width models to guide the search and training of lower bit-width models; however, it limits its search space to fixed-precision quantization policies, which may reduce the flexibility of the model. BatchQuant [2] proposes a quantizer to stabilize single-shot supernet training for joint mixed-precision quantization and architecture search. MultiQuant [49] enhances supernet training with an adaptive soft-label strategy to overcome the vicious competition between high bit-width and low bit-width quantized networks. Previous studies mainly focused on the robustness of multi-bit quantization, while this paper incorporates the granularity and symmetry of quantization into the search space from the perspective of hardware deployment.
In addition, by establishing similarity constraints on the weight distribution and the output logit distribution, the training efficiency of the supernet is improved. ## 3 Approach In this section, we give a comprehensive and detailed analysis of our proposed method, covering the design of the elastic quantization search space, the modeling of the quantization supernet, and the training strategy. ### Quantization Preliminaries To help model elastic quantization neural networks, we start by introducing common notation for quantization. We introduce \(\mathbf{w}\) and \(\mathbf{x}\) to represent the weight matrix and activation matrix in the neural network. A complete uniform quantization process consists of quantization and de-quantization operations, which can be represented as follows: \[\left\{\begin{array}{l}\hat{\mathbf{w}}=\text{clip}\left(\left\lfloor\frac{\mathbf{w}}{s}\right\rceil+z,-2^{b-1},2^{b-1}-1\right)\\ \overline{\mathbf{w}}=s\cdot(\hat{\mathbf{w}}-z)\end{array}\right. \tag{1}\] where \(s\) and \(z\) are called the quantization step size and zero-point, respectively. \(\left\lfloor\cdot\right\rceil\) rounds continuous numbers to the nearest integers. \(b\) represents the predetermined quantization bit-width. Given a quantized weight matrix \(\hat{\mathbf{w}}\) and activation matrix \(\hat{\mathbf{x}}\), their product is given by \[\mathbf{o}_{ij}=s_{w}s_{x}\sum_{c=1}^{C}\left(\hat{\mathbf{w}}_{ic}\hat{\mathbf{x}}_{cj}-z_{w}\hat{\mathbf{x}}_{cj}-z_{x}\hat{\mathbf{w}}_{ic}+z_{w}z_{x}\right) \tag{2}\] where \(\mathbf{o}\) is the convolution output or the pre-activation, and \(C\) represents the number of weight channels.
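The quantize/de-quantize pipeline of Eqs. (1)-(2), together with the elastic axes introduced above, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's released code: the straight-through rounding is a standard QAT device (as in LSQ-style training), the shapes and values below are assumptions, and per-channel granularity follows simply by giving \(s\) and \(z\) one entry per output channel.

```python
import torch

def round_ste(x):
    # round() with a straight-through gradient: forward rounds,
    # backward behaves like the identity
    return x + (torch.round(x) - x).detach()

def fake_quantize(w, s, z, bits, symmetric=True):
    """Simulated quantization following Eq. (1):
      w_hat = clip(round(w / s) + z, -2^(b-1), 2^(b-1) - 1)
      w_bar = s * (w_hat - z)
    s, z are scalar tensors for per-tensor granularity, or shape
    [C, 1, 1, 1] tensors for per-channel conv weights (broadcasting
    handles the rest)."""
    if symmetric:                 # symmetric quantization fixes the zero-point
        z = torch.zeros_like(s)
    qmin, qmax = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    w_hat = torch.clamp(round_ste(w / s) + z, qmin, qmax)
    return s * (w_hat - z)

# Elastic switching (sketch): the weights are shared; each bit-width
# keeps its own (s, z) pair, and choosing a configuration just selects
# which pair to apply.
w = torch.randn(16, 8, 3, 3, requires_grad=True)
step = {b: torch.full((16, 1, 1, 1), 0.05, requires_grad=True) for b in (2, 4, 8)}
zero = {b: torch.zeros(16, 1, 1, 1, requires_grad=True) for b in (2, 4, 8)}
w_q = fake_quantize(w, step[4], zero[4], bits=4, symmetric=False)  # per-channel
```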
### Elastic Quantization Space Design Our elastic quantization search space consists of three parts: elastic quantization bit-width, elastic quantization symmetry, and elastic quantization granularity. Elastic Quantization Bit-Width. With proper training, different quantization bit-widths can share the same weights. For elastic bit-widths, we therefore only need to split off and store the quantization step size and zero-point required for each bit-width. In other words, the model weights are shared among the different bit-widths, and only the step sizes and zero-points differ. Typically, a higher bit-width uses a smaller step size and a larger saturation (truncation) range, while a lower bit-width uses a larger step size and a smaller saturation range. This greatly alleviates the training pressure on the hyperparameters but poses challenges to the robustness of the shared weights. Additionally, the choice of elastic bit-widths is arbitrary and can be designed according to requirements. Elastic Quantization Symmetry. Elastic quantization symmetry supports both symmetric and asymmetric quantization. For symmetric quantization, the zero-point is fixed to 0 (\(z=0\)), while for asymmetric quantization the zero-point is adjustable over different ranges (\(z\in\mathbb{Z}\)); an asymmetric scheme with a trainable zero-point can learn to accommodate negative activations [4]. Switching between symmetric and asymmetric quantization is achieved by dynamically modifying the value of the zero-point. Elastic Quantization Granularity. Elastic quantization granularity supports both per-tensor and per-channel quantization. Per-tensor quantization uses only one set of step size and zero-point for a whole tensor in one layer (\(s\in\mathbb{R}_{+},z\in\mathbb{Z}\)), while per-channel quantization quantizes each weight kernel independently (\(s\in\mathbb{R}_{+}^{1\times C},z\in\mathbb{Z}^{1\times C}\)). Compared to per-tensor, per-channel quantization is a more fine-grained approach. Since both granularities need to be implemented in the elastic quantization space, the per-tensor step size and zero-point can either be obtained heuristically from the per-channel ones or learned as independent parameters. In addition, elastic granularity is designed for the weights only; activations are always quantized per-tensor. ### Elastic Quantization Network Modeling Assume that the elastic quantization space of a model can be represented as \(\mathcal{E}=\{\mathcal{E}_{b},\mathcal{E}_{g},\mathcal{E}_{s}\}\), where \(\mathcal{E}_{b}\), \(\mathcal{E}_{g}\), and \(\mathcal{E}_{s}\) respectively represent elastic quantization bit-width, granularity, and symmetry, as described in Section 3.2. Given the floating-point weights \(\mathbf{w}\) and activations \(\mathbf{x}\), the learnable quantization step size set \(\mathbf{s}=\{s_{w,l}^{e},s_{a,l}^{e}\}\), and the zero-point set \(\mathbf{z}=\{z_{w,l}^{e},z_{a,l}^{e}\}\), the optimization problem of the elastic quantized network can be formalized as: \[\min_{\mathbf{w},\mathbf{s},\mathbf{z}}\sum\mathcal{L}_{val}\left(\text{QNN}\left(\hat{\mathbf{w}},\hat{\mathbf{x}},\mathbf{s},\mathbf{z}\right)\right) \tag{3}\] where \(s_{w,l}^{e}\) and \(s_{a,l}^{e}\) represent the weight and activation step sizes under quantization configuration \(e\in\mathcal{E}\) in layer \(l\); \(\mathcal{L}_{val}\) denotes the validation loss; and QNN denotes the quantized neural network. The training objective of the elastic quantization network is thus to minimize the task loss under all elastic quantization configurations by optimizing the weights, step sizes, and zero-points. ### Elastic Quantization Training To enable efficient elastic quantization training, we propose weight distribution regularization and group progressive guidance to promote data consistency across the elastic quantization space. Weight Distribution Regularization. DNN weights often conform to Gaussian or Laplace distributions [3]. To better align these weights with the elastic quantization space, we propose incorporating skewness and kurtosis regularization. Skewness regularization primarily limits the direction and degree of skewness of the data distribution (as expressed in Eq. (4), where \(\mu\) and \(\sigma\) are the mean and standard deviation of \(\mathbf{w}\)). Reducing the skewness of the weight distribution enhances the robustness of the weights under elastic quantization symmetry. \[\mathrm{Skew}[\mathbf{w}]=\mathbb{E}\left[\left(\frac{\mathbf{w}-\mu}{\sigma}\right)^{3}\right] \tag{4}\] In contrast, kurtosis regularization primarily limits the sharpness of the peak of the data distribution (as expressed in Eq. (5)). Reducing the sharpness of the weight distribution peak enhances the robustness of the weights under elastic quantization bit-width.
\[\mathrm{Kurt}[\mathbf{w}]=\mathbb{E}\left[\left(\frac{\mathbf{w}-\mu}{\sigma}\right)^{4}\right] \tag{5}\] To sum up, the weight distribution regularization loss for supernet training is defined as follows: \[\mathcal{L}_{\text{WDR}}=\frac{1}{L}\sum_{i=1}^{L}\left(\left|\mathrm{Skew}\left[\mathbf{w}_{i}\right]\right|^{2}+\left|\mathrm{Kurt}\left[\mathbf{w}_{i}\right]-\mathcal{K}_{T}\right|^{2}\right) \tag{6}\] where \(L\) is the number of layers and \(\mathcal{K}_{T}\) is the target for kurtosis regularization. Based on relevant experimental research [38], optimal robustness is achieved at \(\mathcal{K}_{T}=1.8\). Group Progressive Guidance. As highlighted in [19, 15], an ensemble of teacher networks can provide more diverse soft labels during distillation training of a student network, leading to greater consistency in the output logits. In our supernet, a multitude of subnets with varying quantization configurations exists, enabling the generation of diverse soft labels. Motivated by this, we employ grouped subnets as a teacher ensemble during in-place distillation to achieve progressive guidance across the groups. Following the sandwich rule [51], in each training step we sample the highest quantization bit-width subnets (with random symmetry and granularity, denoted \(H\)), the lowest (denoted \(L\)), and random subnets (denoted \(R\)). The subnets with the highest bit-width are trained to predict the ground-truth label \(\mathbf{y}\), while the losses of the random bit-width subnets are defined by the cross-entropy with the ground-truth label and the Kullback-Leibler (KL) divergence with the soft logits \(\mathcal{Y}_{H}\) of the highest subnets. Likewise, the losses of the lowest subnets are defined by the cross-entropy with \(\mathbf{y}\) and the KL divergence with \(\mathcal{Y}_{R}\): \[\left\{\begin{array}{l}\mathcal{L}_{H}=\mathcal{L}_{\text{CE}}\left(\mathcal{Y}_{H},\mathbf{y}\right)\\ \mathcal{L}_{R}=\lambda*\mathcal{L}_{\text{KL}}\left(\mathcal{Y}_{R},\mathcal{Y}_{H}\right)+(1-\lambda)*\mathcal{L}_{\text{CE}}\left(\mathcal{Y}_{R},\mathbf{y}\right)\\ \mathcal{L}_{L}=\lambda*\mathcal{L}_{\text{KL}}\left(\mathcal{Y}_{L},\mathcal{Y}_{R}\right)+(1-\lambda)*\mathcal{L}_{\text{CE}}\left(\mathcal{Y}_{L},\mathbf{y}\right)\end{array}\right. \tag{7}\] where \(\mathcal{L}_{\text{KL}}\) and \(\mathcal{L}_{\text{CE}}\) indicate the KL divergence loss and the cross-entropy loss, respectively. In summary, the group progressive guidance loss for training the supernet is defined as: \[\mathcal{L}_{\text{GPG}}(\theta)=\mathcal{L}_{H}(\theta)+\mathcal{L}_{R}(\theta)+\mathcal{L}_{L}(\theta) \tag{8}\] Training then aggregates the gradients from all sampled subnets before updating the weights of the supernet.
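For concreteness, a minimal PyTorch sketch of the two losses follows. It is an illustrative rendering of Eqs. (4)-(8), not the released implementation; in particular, detaching the teacher logits and the value of \(\lambda\) are our assumptions.

```python
import torch
import torch.nn.functional as F

def wdr_loss(layer_weights, k_target=1.8):
    """Weight Distribution Regularization, Eqs. (4)-(6): average over
    layers of squared skewness plus squared kurtosis deviation from K_T."""
    total = 0.0
    for w in layer_weights:
        z = (w - w.mean()) / (w.std() + 1e-8)    # standardize the layer
        skew, kurt = (z ** 3).mean(), (z ** 4).mean()
        total = total + skew ** 2 + (kurt - k_target) ** 2
    return total / len(layer_weights)

def gpg_loss(y_high, y_rand, y_low, target, lam=0.5):
    """Group Progressive Guidance, Eqs. (7)-(8): the highest-bit subnets
    fit the labels, random subnets follow the highest, and the lowest
    follow the random ones.  `lam` is an assumed balancing weight; soft
    teachers are detached, as is common for in-place distillation."""
    kl = lambda s, t: F.kl_div(F.log_softmax(s, dim=-1),
                               F.softmax(t.detach(), dim=-1),
                               reduction="batchmean")
    l_h = F.cross_entropy(y_high, target)
    l_r = lam * kl(y_rand, y_high) + (1 - lam) * F.cross_entropy(y_rand, target)
    l_l = lam * kl(y_low, y_rand) + (1 - lam) * F.cross_entropy(y_low, target)
    return l_h + l_r + l_l
```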
### Mixed-Precision Quantization Search The mixed-precision search is designed to systematically explore a suitable bit-width configuration for each layer of the supernet. During performance estimation, batch norm calibration [23, 51] is required to re-calibrate the statistics of the batch normalization layers before estimating the performance of a quantized subnet. Batch norm calibration and the validation of quantized models are time-consuming, resulting in an expensive evaluation cost for the search: when employing search algorithms over quantization bit-widths, thousands of subnets must be evaluated. To expedite the search and minimize the time cost of the search phase, we propose a proxy model for performance estimation. **Conditional Quantization-Aware Accuracy Predictor.** In mixed-precision quantization, not only the bit-width of each layer but also the form of quantization has a crucial impact on the final result. To achieve a unified prediction for the elastic quantization model, we propose a Conditional Quantization-Aware Accuracy Predictor (CQAP), in contrast to previous accuracy predictors [49]. As shown in the lower left corner of Figure 1, we use the quantization symmetry and granularity as conditions when predicting the final accuracy for different bit-widths, and adopt a binary encoding as the input to the predictor. The backbone of the predictor keeps the same MLP structure as previous work [44, 49], and its output is the predicted accuracy. The CQAP can be formalized as: \[\text{acc}=\text{MLP}(\underbrace{G_{w},S_{w},S_{a}}_{\text{Conditional}},\underbrace{B_{w},B_{a}}_{\text{BitWidth}}) \tag{9}\] where \(G_{w}\), \(S_{w}\), and \(B_{w}\) represent the granularity, symmetry, and per-layer bit-widths for weight quantization, respectively, and \(S_{a}\) and \(B_{a}\) represent the symmetry and per-layer bit-widths for activation quantization. **Genetic Algorithm for Mixed-Precision Search.** During the search phase, a genetic algorithm [47] explores the bit-width of each layer and uses the CQAP to evaluate the corresponding accuracy of each candidate configuration. The genetic algorithm first initializes a set of solutions that satisfy the constraints using Monte Carlo sampling [49, 43] as the initial population. Subsequently, each candidate quantized network is assigned a fitness score based on the accuracy produced by the predictor. The individuals with the highest fitness scores are preserved as elites and included in the mutation and crossover process to generate a new population with predefined probabilities. This selection-mutation-crossover procedure is iterated until the algorithm reaches a satisfactory Pareto solution that meets the average bit-width targets for both weights and activations.
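A minimal sketch of the predictor and its use as a GA fitness function is given below. The binary-encoding layout, layer count, and MLP width are illustrative assumptions; only the overall interface of Eq. (9) is taken from the paper.

```python
import torch
import torch.nn as nn

class CQAP(nn.Module):
    """Conditional Quantization-Aware Accuracy Predictor (sketch).
    Input: a binary-encoded quantization config per Eq. (9) --
    condition bits (granularity G_w, symmetry S_w, S_a) concatenated
    with per-layer one-hot bit-width codes (B_w, B_a)."""

    def __init__(self, num_layers, num_bit_choices, hidden=128):
        super().__init__()
        in_dim = 3 + 2 * num_layers * num_bit_choices   # assumed encoding
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, enc):                 # enc: [batch, in_dim] 0/1 tensor
        return self.mlp(enc).squeeze(-1)    # predicted top-1 accuracy

def fitness(predictor, population_enc):
    """GA fitness: the predictor replaces costly BN calibration plus
    validation when scoring each candidate configuration."""
    with torch.no_grad():
        return predictor(population_enc)    # higher is better
```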
## 4 Experimental Results In this section, we present a comprehensive set of experiments demonstrating the superiority of our proposed approach over several baselines on ImageNet [8]. Additionally, we conduct comprehensive ablation experiments and visualization analyses to confirm the effectiveness of both the WDR and the GPG methods for EQ-Net. ### Implementation Details We separately trained two major classes of models using pre-trained weights provided by TorchVision and the PyTorch v1.10 framework [32]. The first class comprises the classical ResNet [18] models ResNet18 and ResNet50, while the second includes the lightweight models MobileNetV2 [36] and EfficientNetB0 [42], which use separable convolutions. It is worth mentioning that EfficientNetB0 uses the Swish [35] activation function, which produces negative values; this allows us to investigate the differences between symmetric and asymmetric quantization on this model. The elastic quantization spaces of these networks are shown in Table 1. Note that we exclude 2-bit quantization for the lightweight models, as it results in a significant performance drop. We train each model for 120 epochs using the Adam [22] optimizer with cosine learning rate decay; the base learning rate is set to 0.001. After each quantization supernet is trained, we sample 8000 different subnetworks from each supernet and compute their accuracy on a subset of the training set, forming a <config, accuracy> dataset to train the CQAP. We train the CQAP for 100 epochs using SGD with a learning rate of 0.0004 and a weight decay of 0.0001. In the GA search phase, we set the population size to 100 and the number of generations to 500. ### Comparison with State-of-the-Art Methods Table 2 compares our trained EQ-Net, which uses the Bit-width, Granularity, and Symmetry One-For-All (BGS-OFA) regime, with fixed-precision quantization, mixed-precision quantization, and other Bit-width One-For-All (B-OFA) methods. For ResNet18, EQ-Net outperforms RobustQuant [38] and CoQuant [39] by nearly 10% at fixed 2-bit and 3-bit widths, and this gap widens to 15% on ResNet50. At 3-bit, we outperform MultiQuant [49] by 1.8% on ResNet18 but underperform it by 0.7% on ResNet50. We speculate that the reason for this difference is that our BGS-OFA regime includes the per-channel quantization form, which is less stable [21] for larger models and affects training of the whole supernet. Compared with the LSQ method, the accuracy gap is below 1% for 2-bit quantization of the ResNet models, while our method offers better robustness and generality. In mixed-precision quantization, our 3-bit mixed-precision accuracy on ResNet18 reaches the FP32 accuracy, which benefits from robust supernet training and the search technique. The lightweight MobileNetV2 and EfficientNetB0 models further illustrate the capability of our algorithm. On MobileNetV2, we surpass the B-OFA algorithms RobustQuant and MultiQuant by 11.4% and 1.1% at 4-bit, respectively. Meanwhile, our algorithm outperforms HAQ [45] by 4.2% in mixed-precision quantization. These results arise because, with separable convolutions, the weight distribution in some layers is irregular and sometimes even bimodal [12], increasing the difficulty of quantization, whereas our WDR loss drives the weights toward a near-uniform distribution and improves quantization accuracy. Since ResNet18, ResNet50, and MobileNetV2 use the ReLU [17] activation function, which has no negative values, there is little difference between symmetric and asymmetric quantization for them. EfficientNetB0 uses the Swish [35] activation function with negative values, and we see an improvement of about 1% when applying asymmetric rather than symmetric quantization.
Our algorithm outperforms LSQ by 0.6% in symmetric quantization but falls short of LSQ+ [4] by 0.3% in asymmetric quantization. This disparity can be attributed to the fact that the network weights must balance the trade-offs between the two quantization schemes, yielding an increase in symmetric-quantization accuracy at the cost of a small decrease in asymmetric-quantization accuracy. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Network} & \multicolumn{3}{c|}{Weight Quantization Forms} & \multicolumn{2}{c|}{Activation Quantization Forms} \\ \cline{2-6} & Bit-Width & Symmetry & Granularity & Bit-Width & Symmetry \\ \hline ResNet18/ResNet50 & 2,3,4,5,6,7,8 & symmetric/asymmetric & per-channel/per-layer & 2,3,4,5,6,7,8 & symmetric/asymmetric \\ \hline MobileNetV2/EfficientNetB0 & 3,4,5,6,7,8 & symmetric/asymmetric & per-channel/per-layer & 3,4,5,6,7,8 & symmetric/asymmetric \\ \hline \end{tabular} \end{table} Table 1: Elastic quantization space design under different models. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Network**} & \multirow{2}{*}{**Benchmark**} & \multirow{2}{*}{**Criterion**} & \multirow{2}{*}{**Granularity**} & \multirow{2}{*}{**Symmetry**} & \multicolumn{2}{c}{**Weights**} & \multicolumn{2}{c}{**Activation**} & \multicolumn{2}{c}{**Accuracy**} \\ & & & & & **W-bits** & **W-Comp** & **A-bits** & **A-Comp** & **Top-1 (Drop)** & **FP Top-1** \\ \hline \multirow{11}{*}{ResNet-18} & LSQ [11] & Uniform & Per-tensor & Symmetric & 2 & 14.11\(\times\) & 2 & 13.25\(\times\) & **67.6\% (\(\downarrow\)2.9\%)** & 70.5\% \\ & LSQ+ [4] & Uniform & Per-tensor & Asymmetric & 2 & 14.11\(\times\) & 2 & 13.25\(\times\) & 66.8\% (\(\downarrow\)3.3\%) & 70.1\% \\ & EAMIPS [6] & Mixed-Precision & Per-tensor & Symmetric & 2 MP & 16.00\(\times\) & — & \(<\)16.00\(\times\) & 65.9\% (\(\downarrow\)3.9\%) & 69.8\% \\ \cline{2-11} & RobustQuant [38] & B-OFA & Per-tensor & Symmetric & 3 & 10.67\(\times\) & 3 & 10.67\(\times\) & 57.3\% (\(\downarrow\)13.0\%) & 70.3\% \\ & CoQuant [39] & B-OFA & Per-tensor & Symmetric & 2 & 14.11\(\times\) & 2 & 13.25\(\times\) & 57.1\% (\(\downarrow\)12.7\%) & 69.8\% \\ & AnyPrecision [50] & B-OFA & Per-tensor & Symmetric & 2 & 14.11\(\times\) & 2 & 13.25\(\times\) & 64.2\% (\(\downarrow\)4.0\%) & 68.2\% \\ & MultiQuant [49] & B-OFA & Per-tensor & Asymmetric & 3 & 10.37\(\times\) & 3 & 10.37\(\times\) & 67.5\% (\(\downarrow\)2.3\%) & 69.8\% \\ & MultiQuant [49] & B-OFA & Per-tensor & Asymmetric & 3 MP & 9.93\(\times\) & 3 MP & 9.56\(\times\) & 69.2\% (\(\downarrow\)0.6\%) & 69.8\% \\ \cline{2-11} & \multirow{3}{*}{EQ-Net (Ours)} & \multirow{3}{*}{BGS-OFA} & Per-tensor & Symmetric & 2 & 14.11\(\times\) & 2 & 13.25\(\times\) & **65.9\% (\(\downarrow\)3.9\%)** & 69.8\% \\ & & & Per-tensor & Asymmetric & 3 & 10.37\(\times\) & 3 & 10.37\(\times\) & **69.3\% (\(\downarrow\)0.5\%)** & 69.8\% \\ & & & Per-tensor & Asymmetric & 3 MP & 9.93\(\times\) & 3 MP & 9.56\(\times\) & **69.8\% (\(\downarrow\)0.0\%)** & 69.8\% \\ \hline \multirow{10}{*}{ResNet-50} & LSQ [11] & Uniform & Per-tensor & Symmetric & 2 & 12.88\(\times\) & 2 & 15.34\(\times\) & **73.7\% (\(\downarrow\)3.2\%)** & 76.9\% \\ & HAQ [45] & Mixed-Precision & Per-tensor & Symmetric & 3 MP & 10.57\(\times\) & MP & — & **75.3\% (\(\downarrow\)0.8\%)** & 76.1\% \\ & HAWQ-V2 [9] & Mixed-Precision & Per-channel & Symmetric & 2 MP & 12.24\(\times\) & 4 MP & \(<\)8.00\(\times\) & 75.8\% (\(\downarrow\)1.6\%) & 77.4\% \\ \cline{2-11} & RobustQuant [38] & B-OFA & Per-tensor & Symmetric & 3 & 10.67\(\times\) & 3 & 10.67\(\times\) & 57.3\% (\(\downarrow\)19.0\%) & 76.3\% \\ & CoQuant [39] & B-OFA & Per-tensor & Symmetric & 2 & 12.88\(\times\) & 2 & 15.34\(\times\) & 57.1\% (\(\downarrow\)19.0\%) & 76.1\% \\ & AnyPrecision [50] & B-OFA & Per-tensor & Symmetric & 2 & 12.88\(\times\) & 2 & 15.34\(\times\) & 71.7\% (\(\downarrow\)3.3\%) & 75.0\% \\ & MultiQuant [49] & B-OFA & Per-tensor & Asymmetric & 3 & 10.67\(\times\) & 3 & 10.67\(\times\) & **75.4\% (\(\downarrow\)0.7\%)** & 76.1\% \\ \cline{2-11} & \multirow{3}{*}{EQ-Net (Ours)} & \multirow{3}{*}{BGS-OFA} & Per-tensor & Symmetric & 2 & 12.88\(\times\) & 2 & 15.34\(\times\) & **72.5\% (\(\downarrow\)3.6\%)** & 76.1\% \\ & & & Per-tensor & Asymmetric & 3 & 10.67\(\times\) & 3 & 10.67\(\times\) & **74.7\% (\(\downarrow\)1.4\%)** & 76.1\% \\ & & & Per-tensor & Symmetric & 3 MP & 10.57\(\times\) & 3 MP & 10.57\(\times\) & **75.1\% (\(\downarrow\)1.0\%)** & 76.1\% \\ \hline \multirow{4}{*}{MobileNetV2} & HAQ [45] & Mixed-Precision & Per-tensor & Symmetric & 4 MP & 8.00\(\times\) & 4 MP & 8.00\(\times\) & 67.0\% (\(\downarrow\)5.1\%) & 72.1\% \\ \cline{2-11} & RobustQuant [38] & B-OFA & Per-tensor & Symmetric & 4 & 8.00\(\times\) & 4 & 8.00\(\times\) & 59.0\% (\(\downarrow\)12.3\%) & 71.3\% \\ & MultiQuant [49] & B-OFA & Per-tensor & Asymmetric & 4 & 8.00\(\times\) & 4 & 8.00\(\times\) & 69.9\% (\(\downarrow\)2.0\%) & 71.9\% \\ \cline{2-11} & EQ-Net (Ours) & BGS-OFA & Per-tensor & Asymmetric & 4 & 8.00\(\times\) & 4 & 8.00\(\times\) & **71.0\% (\(\downarrow\)0.9\%)** & 71.9\% \\ \hline \multirow{2}{*}{EfficientNetB0} & LSQ [11] & Uniform & Per-tensor & Symmetric & 4 & 8.00\(\times\) & 4 & 8.00\(\times\) & 71.9\% (\(\downarrow\)4.2\%) & 76.1\% \\ & LSQ+ [4] & Uniform & Per-tensor & Asymmetric & 4 & 8.00\(\times\) & 4 & 8.00\(\times\) & — & — \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with state-of-the-art quantization methods on ImageNet. ### Ablation Studies Effectiveness of Weight Distribution Regularization. To make the weight distribution of neural networks more suitable for elastic quantization, we introduce weight distribution regularization. Figure 2(a) illustrates the weight distribution of the 21st layer of ResNet20 on the CIFAR-10 dataset. The figure reveals that certain layers in the ResNet architecture exhibit skewed and sharp distributions, as evidenced by a kurtosis of 3.37 and a skewness of 0.64. The impact of such distributions on fixed-bit-width quantization is relatively insignificant; however, for elastic quantization with its high robustness demands, they can significantly affect overall performance, particularly at low bit-widths. Figures 2(b) and 2(c) depict the effects of applying kurtosis and skewness regularization to the weights, respectively. Notably, Figure 2(d) shows that applying kurtosis and skewness regularization together leads to a distribution closer to uniform, effectively eliminating skewness and sharpness at the same time. Moreover, as presented in Table 3, incorporating kurtosis and skewness regularization boosts accuracy by nearly 1% in the 2-bit scenario, while the average accuracy over 2, 4, and 8 bits improves by 0.5%. Effectiveness of Group Progressive Guidance. In the training procedure of the elastic quantization supernet, we adopt the GPG training strategy proposed in Section 3.4. This strategy uses soft labels from the high bit-width subnets to progressively guide the low bit-width subnets, creating more coherence between the outputs of the high and low bit-width networks.
As a result, the performance of the low bit-width subnets is substantially improved. Convergence curves of ResNet20 trained on CIFAR-10 with three different methods (hard label, label smoothing [40], and our GPG method) are presented in Figure 3. It can be observed that our proposed strategy consistently outperforms the other methods at 2-bit during training, while the 2-bit performance of label smoothing and the hard-label method is similar. Furthermore, to assess the training efficiency of the whole quantization supernet, we track the average accuracy over the 2-, 4-, and 8-bit widths, and the average accuracy of our method is always the best. At 8-bit, although our GPG method is initially inferior to the hard-label method during the first few epochs, it steadily improves and catches up, which demonstrates that our method can improve the accuracy of the low bit-width subnets without sacrificing high bit-width performance. \begin{table} \begin{tabular}{c c c} \hline \hline ResNet20 & 2-bit & Avg 2-4-8-bit \\ \hline Baseline & 86.4\% & 90.3\% \\ + Kurtosis Loss & 87.3\% & 90.5\% \\ + Skewness Loss & 86.9\% & 90.4\% \\ Kurtosis+Skewness Loss & **87.3\%** & **90.7\%** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study for weight distribution regularization. Figure 3: Top-1 accuracy of ResNet20 on CIFAR-10 for different benchmarks (including 2-bit, 8-bit, and 2-4-8-bit average accuracy). HL and LS denote hard label and label smoothing, respectively. Figure 2: Ablation analysis of the weight distribution of the 21st layer of elastically quantized ResNet20 with kurtosis and skewness regularization. The blue columns show the histogram, and the red solid line is a 7th-order fit to the data. Learned vs. Heuristic Per-Tensor Quantization. Our proposed EQ-Net offers both per-channel and per-tensor quantization. Per-channel quantization uses a different step size for each convolution kernel, while per-tensor quantization shares a single step size across a whole layer. Hence, whether the per-tensor parameters should be learned independently or derived heuristically from the per-channel ones warrants investigation. As shown in Table 4, we compare the learnable method with three heuristic methods. The results demonstrate that the learnable method outperforms all three heuristics; specifically, the learnable step size yields 0.2%/0.6%/0.3% gains over the best-performing heuristic, max, at bit-widths of 2/4/8. Among the three heuristics, max achieves the highest accuracy, followed by mean, which is only 0.4%/0.1% lower than max at 2-bit and 8-bit, respectively. The worst-performing method is min, which is approximately 20% lower than the other two heuristics at every bit-width. This outcome is due to the narrow quantization value range that results from using the smallest step size, causing large quantization error. Therefore, in EQ-Net we use independently learnable step sizes for per-tensor quantization. Rank Preservation Analysis of Accuracy Predictor. As illustrated in Figure 1, the mixed-precision search can be conducted after the quantization supernet has been trained. During the search phase, we employ the CQAP proposed in Section 3.5 as a proxy model for measuring accuracy.
Since the CQAP is used to evaluate the performance of every mixed-precision candidate, it is imperative to guarantee rank correlation between the predicted and actual performance. We sampled 10k images from the ImageNet training set and used the accuracy on this subset to measure the performance of candidate subnets. In Figure 4, we illustrate the rank correlation coefficients for three different supernets. The Pearson coefficient is consistently above 0.90, and the Kendall coefficient is above 0.80 except for EfficientNetB0, which demonstrates a strong correlation between the accuracy predicted by our CQAP and the actual performance of candidate subnets. For EfficientNetB0, the Kendall and Pearson coefficients are 0.71 and 0.90, respectively, lower than those obtained for the other two networks; this slightly inferior performance can be attributed to the large accuracy difference between symmetric and asymmetric quantization on EfficientNetB0. ## 5 Conclusion In this paper, we have proposed Elastic Quantization Neural Networks (EQ-Net), which achieve hardware-friendly and efficient training through a one-shot weight-sharing quantization supernet. By training the supernet on the designed elastic quantization space, EQ-Net can support subnets with both uniform and mixed-precision quantization without retraining. We propose two training schemes, with Weight Distribution Regularization (WDR) and Group Progressive Guidance (GPG), to optimize EQ-Net, and demonstrate that EQ-Net achieves accuracy close to that of static quantization across the elastic quantization space. \begin{table} \begin{tabular}{c c c c} \hline Per-channel & 2-bit & 4-bit & 8-bit \\ Baseline & 88.3\% & 91.9\% & 92.5\% \\ \hline Per-tensor & 2-bit & 4-bit & 8-bit \\ min & 49.3\% & 72.7\% & 75.7\% \\ mean & 86.6\% & 91.8\% & 92.1\% \\ max & 87.0\% & 91.8\% & 92.2\% \\ learnable & **87.2\%** & **92.4\%** & **92.5\%** \\ \hline \end{tabular} \end{table} Table 4: Ablation study for learned vs. heuristic (min, mean, max) per-tensor quantization. Figure 4: Ablation analysis of CQAP rank correlation between actual and predicted accuracy on a split validation set of ImageNet. ## Acknowledgments This work was supported in part by the National Natural Science Foundation of China (No. 62206003, No. 62276001, No. 62136008, No. U20A20306, No. U21A20512) and in part by the Excellent Youth Foundation of Anhui Provincial Colleges (No. 2022AH030013).
2310.02576
A Prototype-Based Neural Network for Image Anomaly Detection and Localization
Image anomaly detection and localization not only perform image-level anomaly classification but also locate pixel-level anomaly regions. Recently, the task has received much research attention due to its wide application in various fields. This paper proposes ProtoAD, a prototype-based neural network for image anomaly detection and localization. First, the patch features of normal images are extracted by a deep network pre-trained on natural images. Then, the prototypes of the normal patch features are learned by non-parametric clustering. Finally, we construct an image anomaly localization network (ProtoAD) by appending the feature extraction network with $L2$ feature normalization, a $1\times1$ convolutional layer, a channel max-pooling, and a subtraction operation. We use the prototypes as the kernels of the $1\times1$ convolutional layer; therefore, our neural network does not need a training phase and can conduct anomaly detection and localization in an end-to-end manner. Extensive experiments on two challenging industrial anomaly detection datasets, MVTec AD and BTAD, demonstrate that ProtoAD achieves competitive performance compared to the state-of-the-art methods with a higher inference speed. The source code is available at: https://github.com/98chao/ProtoAD.
Chao Huang, Zhao Kang, Hong Wu
2023-10-04T04:27:16Z
http://arxiv.org/abs/2310.02576v2
# A Prototype-Based Neural Network for Image Anomaly Detection and Localization ###### Abstract Image anomaly detection and localization not only perform image-level anomaly classification but also locate pixel-level anomaly regions. Recently, the task has received much research attention due to its wide application in various fields. This paper proposes ProtoAD, a prototype-based neural network for image anomaly detection and localization. First, the patch features of normal images are extracted by a deep network pre-trained on natural images. Then, the prototypes of the normal patch features are learned by non-parametric clustering. Finally, we construct an image anomaly localization network (ProtoAD) by appending the feature extraction network with \(\mathbf{L2}\) feature normalization, a \(\mathbf{1\times 1}\) convolutional layer, a channel max-pooling, and a subtraction operation. We use the prototypes as the kernels of the \(\mathbf{1\times 1}\) convolutional layer; therefore, our neural network does not need a training phase and can conduct anomaly detection and localization in an end-to-end manner. Extensive experiments on two challenging industrial anomaly detection datasets, MVTec AD and BTAD, demonstrate that ProtoAD achieves competitive performance compared to the state-of-the-art methods with a higher inference speed. The source code is available at: [https://github.com/98chao/ProtoAD](https://github.com/98chao/ProtoAD). **Keywords: Image Anomaly Detection, Image Anomaly Localization, Non-parametric Clustering, Prototype-Based Network** ## 1 Introduction _Anomaly detection_ (AD) [1, 2] aims to detect anomalous samples that deviate from a set of normal samples predefined during training. Traditional image anomaly detection adopts a semantic AD setting [3, 4, 5, 6], where anomalous samples come from unknown semantic classes different from the one that normal samples belong to. Recently, detecting and localizing subtle image anomalies has become an important task in computer vision with various applications, such as anomaly or defect detection in industrial optical inspection [7, 8], anomaly detection and localization in video surveillance [9, 10, 11], and anomaly detection in medical images [12, 13]. In this setting, anomaly detection determines whether an image contains any anomaly, and anomaly localization, aka anomaly segmentation, localizes the anomalies at the pixel level. This paper focuses on the second setting, especially industrial anomaly detection and localization. Some examples from the MVTec AD dataset [8], along with predictions by our method, are shown in Figure 1. Figure 1: Examples from the MVTec benchmark datasets. From top to bottom: anomalous samples, anomaly masks, and anomaly score maps predicted by our method. In the above applications, anomalous samples are scarce and hard to collect; therefore, image anomaly detection and localization are often solved with only normal samples. In addition, anomalous regions within images are often subtle (see Figure 1), making image anomaly localization a more challenging task that has not been as thoroughly studied as image anomaly detection. Recent anomaly localization methods can be roughly categorized into two classes: reconstruction-based methods and OOD-based (out-of-distribution-based) methods. Reconstruction-based methods are mainly based on the assumption that a model trained only on normal images cannot reconstruct anomalous images accurately. They reconstruct the image as a whole [14, 15, 16, 17, 18, 19, 20, 8, 21, 22, 12] or reconstruct in the feature space [23, 24, 22]. Anomaly detection and localization can then be performed by measuring the difference between the reconstructed images and the original ones.
This kind of method always needs cumbersome network training. OOD-based methods evaluate the degree of abnormality for a patch feature by measuring its deviation from a set of normal patch features, which is intrinsically a patch-wise OOD detection task. Some methods such as PatchSVDD [25] and CutPaste [26] learn feature representations by self-supervised learning. On the contrary, some other methods [27, 28, 29, 30] simply extract features by deep networks pre-trained on natural image datasets such as ImageNet [31], and achieve promising and even better performances. Since the number of training patches is much larger than that of training images, the inference time and storage increase remarkably. Different strategies have been proposed to tackle this problem. Napoletano et al. [27] used k-means to learn the dictionary/prototypes for normal patch features, but they evaluated each test patch independently, resulting in high inference time. SPADE [28] selects the k-nearest normal images for patch-wise evaluation based on global image features, limiting anomaly localization performance. PaDiM [29] models the normal patches at each position by a multidimensional Gaussian distribution and measures the anomaly by the Mahalanobis distance between a test patch feature and the Gaussian at the same position. However, both SPADE [28] and PaDiM [29] are reliant on image alignment. The current state-of-the-art method, PatchCore [30], uses greedy coreset subsampling to reduce the inference time and storage significantly. This paper proposes ProtoAD, a prototype-based neural network for image anomaly detection and localization, to improve the inference speed of OOD-based methods. We assume that all normal patch features can be grouped into some prototypes, and abnormal patch features cannot be properly assigned to any of them. Therefore, image anomaly localization can be performed by measuring the deviation of test patch features from the prototypes of normal patch features. First, the patch features of normal images are extracted by a deep network pre-trained on natural images and are \(L2\)-normalized. Then the prototypes of the normalized normal patch features are learned by a non-parametric clustering algorithm. The cosine similarity between two \(L2\)-normalized vectors is equivalent to the dot product between them. Therefore the cosine similarity between a normalized patch feature and a prototype can be implemented by a \(1\times 1\) convolution. Based on this equivalence, we construct an image anomaly localization network (ProtoAD) by appending the feature extraction network with the \(L2\) feature normalization, a \(1\times 1\) convolutional layer, a channel max-pooling, and a subtraction operation. We use the prototypes as the kernels of the \(1\times 1\) convolutional layer; therefore, our neural network does not need a training phase. Compared with previous OOD-based methods [27, 28, 29, 30], ProtoAD can perform anomaly detection and localization in an end-to-end manner, which is more elegant and efficient. Extensive experiments on two challenging industrial anomaly detection datasets, MVTec AD [8] and BTAD [32], demonstrate that ProtoAD achieves competitive performance compared to the state-of-the-art methods with a higher inference speed.
This advantage of ProtoAD makes it better match the needs of real-world industrial applications.

## 2 Related Works

### Image Anomaly Localization

Anomaly detection is an image-level task to determine whether an image contains any anomaly. On the other hand, anomaly localization is a more complex task that locates anomalies at the pixel level. Here, we only introduce the methods that can be directly applied to image anomaly localization and roughly categorize current methods into two types: reconstruction-based and OOD-based. Reconstruction-based methods are mainly based on the assumption that a model trained only on normal images can not reconstruct anomalous images accurately, and anomaly detection and localization can be performed by measuring the difference between the reconstructed and original images. Early reconstruction-based methods [8, 12, 14, 15, 17] reconstruct images with auto-encoders (AE), variational auto-encoders (VAE), or generative adversarial networks (GAN). However, these neural networks have high generalization capacities and can reconstruct anomalies well. Later, different strategies have been proposed to tackle this problem. Different memory-based auto-encoders [16, 18, 20] have been proposed to reconstruct images with features from a memory bank to limit the generalization ability. Student-teacher models [22, 23] have been used to reconstruct pre-trained deep features. RIAD [19] randomly removes partial image regions and reconstructs the image by image in-painting. Glance [24] trains a Global-Net to regress the deep features of cropped patches based on their context. DRAEM [21] combines a reconstructive sub-network and a discriminative network and trains them in an end-to-end manner on synthetically generated just-out-of-distribution images. OOD-based methods evaluate the degree of abnormality for a patch feature by measuring its deviation from a set of normal patch features, which is intrinsically a patch-wise OOD detection task. Some methods such as PatchSVDD [25] and CutPaste [26] learn feature representations by self-supervised learning. On the contrary, some other methods [27, 28, 29, 30] simply extract features by deep networks pre-trained on natural image datasets such as ImageNet [31], and achieve promising and even better performances. Since the number of training patches is much larger than that of training images, the inference time and storage increase remarkably. Different strategies such as clustering, density estimation, and sampling have been proposed to tackle this problem. Napoletano et al. [27] learned a dictionary of normal patches from the training set by k-means, and evaluated each patch of a test image by measuring its visual similarity with the k-nearest neighbors in the dictionary. SPADE [28] compares the patch features of a test image with the patch features at the same position of the k-nearest normal images selected based on global image features. However, this oversimplified pre-selection strategy will limit the localization performance. PaDiM [29] models the normal patches at each position by a multidimensional Gaussian distribution and detects anomalies by the Mahalanobis distance between a test patch feature and the Gaussian at the same position. Both SPADE [28] and PaDiM [29] are reliant on image alignment. Recently, PatchCore [30] constructs a memory bank of locally aware patch features by greedy coreset subsampling, and localizes anomalies by measuring the distances of test patch features to their nearest normal patch features in the bank.
As a result, PatchCore achieves a new state-of-the-art and significantly reduces the inference time and storage. Our method is also an OOD-based method with pre-trained deep features but has several differences from the previous works. Our method uses non-parametric clustering instead of the k-means in [27] to learn the prototypes for normal patch features. More importantly, our method can perform anomaly detection and localization by a network in an end-to-end manner, which is more elegant and efficient than the previous methods. Compared to reconstruction-based methods, our network does not need a cumbersome training phase.

### Clustering Algorithms

Clustering is a type of unsupervised learning task of dividing a set of unlabeled data points into a number of groups such that the data points in the same groups are more similar to each other than they are to the data points in other groups. Clustering provides an abstraction from data points to the clusters, and each cluster can be characterized by a cluster prototype, such as the centroid of a cluster, for further analysis. Clustering algorithms can be roughly divided into four categories: partition-based clustering, density-based clustering, spectral clustering, and hierarchical clustering. Partition-based clustering algorithms divide the data into k groups, where k is the predefined number of clusters. The classical algorithms are k-means [33] and its variations. Although these algorithms are very fast, they need the number of clusters as a parameter and are sensitive to the selection of the initial k centroids. Density-based clustering defines a cluster as the largest set of densely connected points and can find clusters of arbitrary shapes. DBSCAN [34] is the most representative algorithm of this class. It has two parameters, a radius length \(\epsilon\) and a parameter \(MinPts\): if there are \(MinPts\) points within the radius \(\epsilon\) of a point, it is regarded as a high-density point. Spectral clustering [35] has recently attracted much attention. Most spectral clustering algorithms need to compute the full similarity graph Laplacian matrix and have quadratic complexities, thus severely restricting their application to large data sets. Hierarchical clustering [36] is of two types: bottom-up and top-down approaches. In the bottom-up approach (aka agglomerative clustering), each data point starts as a cluster, and the most similar cluster pairs are iteratively merged according to the chosen similarity measure until some stopping criteria are met. In the top-down approach (aka divisive clustering), the clustering begins with a large cluster including all data and recursively breaks down into smaller clusters. Hierarchical clustering produces a clustering tree that provides meaningful ways to interpret data at different levels of granularity. Recently, Sarfraz et al. [37] proposed FINCH, a high-speed, scalable, and fully parameter-free hierarchical agglomerative clustering algorithm. In [27], k-means is used to learn the prototypes from normal patch features. To avoid choosing the number of clusters ahead of time, we adopt FINCH to learn the prototypes for normal patch features.

## 3 Method

Our method consists of three steps: patch feature extraction, prototype learning, and anomaly detection and localization. An overview of our method is given in Figure 2. We describe them sequentially in the following subsections.
### Patch Feature Extraction

Since the features extracted by pre-trained networks have shown their effectiveness for various visual applications including anomaly detection [22, 23, 27, 28, 29, 30], we also adopt deep networks pre-trained on the ImageNet dataset [31], choosing the backbone of Wide-ResNet [38] as the feature extractor following the previous works [28, 29, 30]. ResNet-like deep networks [38, 39] include several convolutional stages. The features become more abstract as the stage goes deeper, but their resolution gets lower. Thus, the feature maps from different stages form a feature hierarchy for an input image. Each spatial position of a feature map has a receptive field and corresponds to a patch/region in an input image; therefore, the feature vector at a spatial position of the feature maps can be considered as a feature representation for the corresponding image patch. If the feature maps of a stage have a resolution of \(H\times W\), they contain \(H\times W\) patch features. The deep and abstract features from the ImageNet pre-trained networks are biased towards the ImageNet classification task and are less relevant to the anomaly detection and localization task. Therefore, we adopt the low- and mid-level (stage 1-3) feature representations and combine them as the patch features. Concretely, the feature maps at the higher levels are bilinearly re-scaled to have the same resolution as the lowest level, and the feature maps at different levels are then concatenated together for handling multi-scale anomalies. The extracted features are then \(L2\)-normalized, where each feature vector is divided by its \(L2\) norm.

Figure 2: An overview of the proposed method. First, the patch features of normal images are extracted by a deep network pre-trained on natural images. Then, the prototypes of the normal patch features are learned by FINCH clustering. For inference, an image anomaly localization network (ProtoAD) is constructed by appending the feature extraction network with the \(L2\) feature normalization, a \(1\times 1\) convolutional layer, a channel max-pooling (CMP), and a subtraction operation, and anomaly localization is performed in an end-to-end manner.
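To make the extraction step concrete, here is a minimal PyTorch sketch of it (PyTorch is the framework our models are implemented in); the torchvision backbone loader and the stage attribute names are assumptions for illustration, not our released code:

```python
import torch
import torch.nn.functional as F
from torchvision.models import wide_resnet50_2

# ImageNet pre-trained Wide-ResNet-50 backbone; only stages 1-3 are used.
backbone = wide_resnet50_2(weights="IMAGENET1K_V1").eval()

@torch.no_grad()
def extract_patch_features(images):
    """Return L2-normalized multi-scale patch features of shape (B, C, H, W)."""
    x = backbone.maxpool(backbone.relu(backbone.bn1(backbone.conv1(images))))
    f1 = backbone.layer1(x)   # stage 1: highest resolution, lowest level
    f2 = backbone.layer2(f1)  # stage 2
    f3 = backbone.layer3(f2)  # stage 3
    # Bilinearly re-scale the deeper maps to the resolution of the lowest level.
    size = f1.shape[-2:]
    f2 = F.interpolate(f2, size=size, mode="bilinear", align_corners=False)
    f3 = F.interpolate(f3, size=size, mode="bilinear", align_corners=False)
    # Concatenate the levels to handle multi-scale anomalies, then L2-normalize
    # each patch feature vector along the channel dimension.
    return F.normalize(torch.cat([f1, f2, f3], dim=1), p=2, dim=1)
```

Each spatial position of the returned tensor is one normalized patch feature.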
### Prototype Learning

After feature extraction, the prototypes of the \(L2\)-normalized patch features are learned by a clustering algorithm. Then, the prototypes are used in anomaly detection and localization instead of all the normal patch features to reduce the inference time and storage. There are mainly two concerns in choosing a clustering algorithm. First, the number of patch features is much larger than that of training images. For example, each category of the MVTec AD dataset has several hundred images, while it has several hundred thousand patch features in our implementation. Therefore, the clustering algorithm should be efficient and scalable to large-scale data. Second, most clustering algorithms have some parameters, e.g., the number of clusters or distance thresholds, which can not be well set without a priori knowledge of the data distribution. Thus, these algorithms demand a tedious parameter tuning process to achieve good performance. To meet the requirements of real applications, we adopt FINCH [37], a high-speed, scalable, and fully parameter-free hierarchical agglomerative clustering algorithm. The core idea of FINCH is to use the nearest neighbor information of each data point for clustering, which does not need any parameters to be specified and has a low computational overhead. Given the integer indices of the first neighbor of each data point, an adjacency matrix is defined according to the following rules: \[A(i,j)=\begin{cases}1,&\text{if }j=\kappa_{i}^{1}\text{ or }\kappa_{j}^{1}=i \text{ or }\kappa_{i}^{1}=\kappa_{j}^{1}\\ 0,&\text{otherwise}\end{cases} \tag{1}\] where \(\kappa_{i}^{1}\) symbolizes the first neighbor of data point \(i\). This sparse adjacency matrix specifies a graph where connected data points form clusters. It directly provides clusters without solving a graph segmentation problem. After computing the first partition, FINCH merges the clusters recursively by using cluster means to compute the first neighbor of each cluster until all data points are included in a single cluster or until some stopping criterion is met. In this work, we define the stopping criterion as the number of clusters being less than a threshold and set the threshold to 10,000 to get good results in our experiments. We choose the last partition as the clustering result, and use the mean vectors of clusters as the prototypes of normal patch features. When the features are \(L2\)-normalized (making the length of a vector 1), cosine similarity and Euclidean distance between the normalized features are equivalent in the sense of nearest neighbor searching: \[\frac{1}{2}L_{2}(\mathbf{x}_{a},\mathbf{x}_{b})^{2}=\frac{1}{2}(\mathbf{x}_{a }-\mathbf{x}_{b})\cdot(\mathbf{x}_{a}-\mathbf{x}_{b})=1-\mathbf{x}_{a}\cdot \mathbf{x}_{b}=1-\cos{(\mathbf{x}_{a},\mathbf{x}_{b})} \tag{2}\] where \(L_{2}()\) is the Euclidean distance, \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\) are two \(L2\)-normalized feature vectors, and \(\cos\) is the cosine similarity. Therefore, we use cosine similarity for clustering and for measuring the deviation of test patch features from normal patch features in the next subsection.
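As an illustration of Eq. 1, a single FINCH partition over the normalized patch features can be sketched as follows; this dense-similarity version is only workable for a modest number of points and is not the official FINCH implementation (note that the \(\kappa_{i}^{1}=\kappa_{j}^{1}\) links are recovered transitively by the connected components through the shared neighbor):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def finch_partition(features):
    """One FINCH step: link every point to its first neighbor (Eq. 1) and
    take connected components of the resulting graph as clusters."""
    # For L2-normalized features, cosine similarity is the dot product (Eq. 2).
    sims = features @ features.T
    np.fill_diagonal(sims, -np.inf)          # exclude self-matches
    first_nn = sims.argmax(axis=1)           # kappa_i^1 for every point i
    n = features.shape[0]
    adjacency = csr_matrix((np.ones(n), (np.arange(n), first_nn)), shape=(n, n))
    adjacency = adjacency + adjacency.T      # covers j = kappa_i^1 and kappa_j^1 = i
    _, labels = connected_components(adjacency, directed=False)
    return labels

def cluster_prototypes(features, labels):
    """Mean vector of each cluster, re-normalized, used as a prototype."""
    protos = np.stack([features[labels == k].mean(axis=0)
                       for k in np.unique(labels)])
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)
```

The full algorithm applies this step recursively to the cluster means until the stopping criterion above is met.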
### Neural Network for Anomaly Detection and Localization

When a test image passes through the feature extraction network, \(H\times W\) patch features are extracted. The anomaly score of each patch feature can be computed by measuring its deviation from the prototypes of normal patch features. We compute the anomaly score of a test patch as one minus the cosine similarity between the normalized test patch feature and its nearest prototype. Formally, the anomaly score for the patch at position \((i,j)\) can be calculated as \[s_{ij}=1-\max_{1\leqslant k\leqslant K}\cos{(\mathbf{x}_{ij},\mathbf{m}_{k})} \tag{3}\] where \(\mathbf{x}_{ij}\) is the normalized patch feature at position \((i,j)\), \(\mathbf{m}_{k}\) is the \(k\)-th prototype, and \(\cos\) is the cosine similarity. In addition, the image-level anomaly score for a test image can be simply computed by maximizing the anomaly scores of all its patch features: \[S=\max_{1\leqslant i\leqslant H,1\leqslant j\leqslant W}s_{ij} \tag{4}\] The cosine similarities between a normalized patch feature and a prototype can be computed by a \(1\times 1\) convolution (dot product) between them. Based on this equivalence, we construct a neural network (ProtoAD) for anomaly detection and localization. First, the \(L2\) feature normalization and a \(1\times 1\) convolutional layer are appended to the feature extraction network, outputting feature maps of size \(H\times W\times K\) that contain the cosine similarities between the \(H\times W\) normalized patch features and all \(K\) prototypes. Then, channel max-pooling (CMP) is applied to the feature maps to get the normal score map of size \(H\times W\), containing the cosine similarities between the \(H\times W\) normalized patch features and their nearest prototypes. The anomaly score map can be further obtained by computing one minus the normal score map. This process is illustrated by Figure 3.

Figure 3: Anomaly detection and localization process of ProtoAD.

Since the spatial resolution of the feature maps is lower than that of an input image, we resize the anomaly score map to the resolution of the input image and use a Gaussian filter to smooth it. Finally, anomaly localization can be achieved by thresholding the anomaly score map, and the anomaly score for the test image can be obtained by maximizing the anomaly score map. We use the prototypes of normal patch features as the kernels of the \(1\times 1\) convolutional layer. Therefore the proposed neural network does not need a training phase. Compared to previous works [27, 28, 29, 30], our method can perform anomaly detection and localization in an end-to-end manner, which is more elegant and efficient.
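The scoring head itself is only a few lines; the following PyTorch sketch (class and argument names are illustrative, not our released code) implements Eqs. 3-4 with the \(1\times 1\) convolution and channel max-pooling, followed by the bilinear resizing used at inference:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoADHead(nn.Module):
    """L2 normalization, 1x1 convolution with the prototypes as kernels,
    channel max-pooling, and subtraction from one; no training is needed."""
    def __init__(self, prototypes):             # prototypes: (K, C) tensor
        super().__init__()
        k, c = prototypes.shape
        self.conv = nn.Conv2d(c, k, kernel_size=1, bias=False)
        self.conv.weight.data = prototypes.view(k, c, 1, 1)

    def forward(self, feats, image_size):        # feats: (B, C, H, W)
        feats = F.normalize(feats, p=2, dim=1)   # L2 feature normalization
        cos = self.conv(feats)                   # (B, K, H, W) cosine similarities
        normal_map, _ = cos.max(dim=1, keepdim=True)   # channel max-pooling
        score_map = 1.0 - normal_map                   # patch scores, Eq. 3
        score_map = F.interpolate(score_map, size=image_size,
                                  mode="bilinear", align_corners=False)
        image_score = score_map.amax(dim=(1, 2, 3))    # image score, Eq. 4
        return score_map, image_score
```

Gaussian smoothing of the resized score map, as described above, would follow as a final step.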
## 4 Experiments

### Datasets and Metrics

#### 4.1.1 Dataset

The MVTec AD dataset [8] is a real-world industrial defect detection dataset which has become a standard benchmark for evaluating image anomaly detection and localization methods. It has 5354 high-resolution images belonging to 10 object and 5 texture categories. The images of each category are split into a training and a testing set. In total, the training set has 3629 normal images, and the test set has 1725 normal and abnormal images with various defects. The ground truth of the test set contains anomaly labels for image-level evaluation and anomaly masks for pixel-level evaluation. BTAD (BeanTech Anomaly Detection dataset) is a real-world industrial dataset recently released by [32]. It contains a total of 2830 real-world images of 3 industrial products. The images of each category are split into a defect-free training set and a testing set, supporting the evaluation of both anomaly detection and localization. We follow the split of the two datasets for training and testing.

#### 4.1.2 Evaluation Metrics

AUROC (Area Under the Receiver Operating Characteristic curve) is the most commonly used metric for anomaly detection and is independent of the threshold. We use image-level AUROC for evaluating the performance of anomaly detection and pixel-level AUROC for anomaly localization. Since the pixel-level AUROC is biased in favor of large anomalies, we also use the PRO-score (per-region-overlap) [22] to evaluate anomaly localization, which weights ground-truth regions of different sizes equally.

### Experimental Setup

We normalize the size of images from all categories of the MVTec AD and BTAD datasets to \(256\times 256\), center-crop images to \(224\times 224\), and do not apply any data augmentation. The backbone of Wide-ResNet50 pre-trained on ImageNet is employed as the feature extractor in our method as in [28, 29, 30]. We define the stopping criterion for the FINCH clustering algorithm as the number of clusters being less than 10,000 and choose the last generated partition as the clustering result. For inference, we up-sample the anomaly score map to image size using bilinear interpolation and smooth it with a Gaussian filter with parameter \(\delta=4\) as in [29]. We implemented our models in Python 3.7 [40] and PyTorch [41], and ran experiments on an NVIDIA GeForce RTX 2080 Ti.

### Results on MVTec AD

#### 4.3.1 Comparison with the State-of-the-art

We compare ProtoAD with the state-of-the-art methods, including both reconstruction-based and OOD-based methods. The compared reconstruction-based methods include Uninformed Students (U-Student) [22], RIAD [19], MKD [23], Glance [24], DAAD [20], and DRAEM [21]. The compared OOD-based methods include SPADE [28], PatchSVDD (P-SVDD) [25], CutPaste [26], PaDiM [29], and PatchCore (P-Core) [30]. We directly use their evaluation results if they have been provided. We report the evaluation results (pixel-level AUROC and PRO-score) for pixel-level anomaly localization on the MVTec AD dataset in Table 1 and Table 2, respectively. From Table 1, we can see that the OOD-based methods generally achieve better pixel-level AUROC than the reconstruction-based methods. Among the OOD-based methods, the methods using pre-trained deep features achieve better pixel-level AUROC than the methods based on self-supervised learning. PatchCore achieves the best pixel-level AUROC, PaDiM the second, and the reconstruction-based method DRAEM the third. The pixel-level AUROC of our method is very close to those of PaDiM and DRAEM. We also notice that our method is more effective on the texture categories and achieves the second-best AUROC there. Table 2 gives the PRO-score results for the methods which have used this metric. Among them, Glance achieves the best result; our method is the second best and outperforms the other OOD-based methods. Overall, our method achieves anomaly localization performance competitive with the state-of-the-art methods. Figure 4 gives qualitative anomaly localization results of our method on the MVTec AD dataset. We can see that our method can give accurate pixel-level localization regardless of anomaly region size and type (see the supplementary material for more qualitative results). We also report the image-level AUROC results for anomaly detection in Table 3. PatchCore achieves the best AUROC again, DRAEM the second. Our method remains competitive and achieves the third-best AUROC, which is very close to that of DRAEM.

#### 4.3.2 Inference Efficiency

Anomaly detection and localization algorithms need high precision and inference speed to match the requirements of real-world applications.
Thus, we also report the inference speed of our method and the previous OOD-based methods using pre-trained deep features [28, 29, 30] in Table 4.

\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Category**} & \multicolumn{4}{c}{**Reconstruction-based**} & \multicolumn{6}{c}{**OOD-based**} \\ \cline{2-11} & **MKD** & **Glance** & **RIAD** & **DRAEM** & **P-SVDD** & **CutPaste** & **SPADE** & **PaDiM** & **P-Core** & **ProtoAD** \\ \hline Carpet & 95.6 & 96.0 & 96.3 & 95.5 & 92.6 & 98.3 & 97.5 & 99.1 & 99.0 & 99.2 \\ Grid & 91.8 & 78.0 & 98.8 & 99.7 & 96.2 & 97.5 & 93.7 & 97.3 & 98.7 & 98.0 \\ Leather & 98.1 & 90.0 & 99.4 & 98.6 & 97.4 & 99.5 & 97.6 & 99.2 & 99.3 & 99.4 \\ Tile & 82.8 & 80.0 & 89.1 & 99.2 & 91.4 & 90.5 & 87.4 & 94.1 & 95.6 & 95.2 \\ Wood & 84.8 & 81.0 & 85.8 & 96.4 & 90.8 & 96.5 & 88.5 & 94.9 & 95.0 & 95.6 \\ \hline **Texture** & 90.6 & 85.0 & 93.9 & 97.9 & 93.7 & 96.3 & 92.9 & 96.9 & 97.5 & 97.5 \\ \hline Bottle & 96.3 & 93.0 & 98.4 & 96.1 & 98.1 & 97.6 & 96.4 & 98.3 & 98.6 & 98.3 \\ Cable & 82.4 & 94.0 & 84.2 & 94.7 & 96.8 & 90.0 & 97.2 & 96.7 & 98.4 & 97.5 \\ Capsule & 95.9 & 90.0 & 92.8 & 94.3 & 96.8 & 97.4 & 99.0 & 98.5 & 98.8 & 98.2 \\ Hazelnut & 94.6 & 84.0 & 96.1 & 90.7 & 97.5 & 97.3 & 99.1 & 98.2 & 98.7 & 98.8 \\ Metal Nut & 86.4 & 91.0 & 92.5 & 99.5 & 98.0 & 93.1 & 98.1 & 97.2 & 98.4 & 98.8 \\ Pill & 88.6 & 93.0 & 95.7 & 97.6 & 96.1 & 95.7 & 96.5 & 95.7 & 97.4 & 94.2 \\ Screw & 96.0 & 96.0 & 96.8 & 97.6 & 96.7 & 96.7 & 98.9 & 98.5 & 99.4 & 98.9 \\ Toothbrush & 96.1 & 96.0 & 98.9 & 98.1 & 98.1 & 98.1 & 97.9 & 98.8 & 98.7 & 98.8 \\ Transistor & 76.5 & 100.7 & 87.0 & 99.7 & 93.0 & 94.1 & 97.5 & 96.3 & 92.5 \\ Zipper & 93.9 & 99.0 & 97.8 & 98.8 & 96.1 & 99.3 & 96.5 & 98.5 & 98.8 & 96.7 \\ \hline **Object** & 90.8 & 90.6 & 94.3 & 97.0 & 96.7 & 95.8 & 97.6 & 97.8 & 98.4 & 97.1 \\ \hline **All** & 90.7 & 90.7 & 94.2 & **97.3** & 96.7 & 96.0 & 96.0 & 97.5 & **98.1** & 97.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Anomaly localization performance on MVTec AD (Pixel-level AUROC). The best results of the two classes of methods are bold-faced respectively.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Category**} & \multicolumn{2}{c}{**Reconstruction-based**} & \multicolumn{4}{c}{**OOD-based**} \\ \cline{2-7} & **U-Student** & **Glance** & **SPADE** & **PaDiM** & **P-Core** & **ProtoAD** \\ \hline Carpet & 87.9 & 97.7 & 54.7 & 96.2 & 96.6 & 97.0 \\ Grid & 95.2 & 93.2 & 86.7 & 94.6 & 96.0 & 93.9 \\ Leather & 94.5 & 90.9 & 97.2 & 97.8 & 98.9 & 98.1 \\ Tile & 94.6 & 88.3 & 75.6 & 86.0 & 87.3 & 87.2 \\ Wood & 91.1 & 94.1 & 87.4 & 91.1 & 89.4 & 93.2 \\ \hline **Texture** & 92.7 & 92.8 & 88.3 & 93.1 & 93.6 & 93.9 \\ \hline Bottle & 93.1 & 96.8 & 95.5 & 94.8 & 96.2 & 95.8 \\ Cable & 81.8 & 98.0 & 90.9 & 88.8 & 92.5 & 93.8 \\ Capsule & 96.8 & 96.0 & 93.7 & 93.5 & 95.5 & 93.7 \\ Hazelnut & 96.5 & 96.2 & 95.4 & 92.6 & 93.8 & 95.3 \\ Metal Nut & 94.2 & 96.7 & 94.4 & 85.6 & 91.4 & 94.2 \\ Pill & 96.1 & 97.8 & 94.6 & 92.7 & 93.2 & 94.7 \\ Screw & 94.2 & 100 & 96.0 & 94.4 & 97.9 & 94.7 \\ Toothbrush & 93.3 & 96.1 & 93.5 & 93.0 & 91.5 & 91.2 \\ Transistor & 66.6 & 99.9 & 87.4 & 84.5 & 83.7 & 87.9 \\ Zipper & 95.1 & 99.2 & 92.6 & 95.9 & 97.1 & 93.3 \\ **Object** & 90.8 & 97.7 & 93.4 & 91.6 & 93.3 & 93.4 \\ \hline **All** & 91.4 & **96.1** & 91.7 & 92.1 & 93.4 & **93.6** \\ \hline \hline \end{tabular} \end{table} Table 2: Anomaly localization performance on MVTec AD (PRO-score). The best results of the two classes of methods are bold-faced respectively.
In the experiments, all the methods adopt Wide-ResNet50 pre-trained on ImageNet as the feature extractor, take the center-cropped \(224\times 224\) image as input, and run on the same machine with an NVIDIA GeForce RTX 2080 Ti. For PatchCore, we use the implementation provided by the authors, which downsamples the normal patch features via greedy coreset subsampling (PatchCore-\(x\%\) denotes that the percentage \(x\) of normal patch features is used in inference) and uses _faiss_ [42] for nearest neighbor retrieval and distance computations. For PaDiM, we make extensive optimization via GPU acceleration.

\begin{table} \begin{tabular}{c c c} \hline Method & Scores & Inference Speed (FPS) \\ \hline SPADE & (85.5, 96.0, 91.7) & 7.58 \\ PatchCore(25\%) & (**99.1**, **98.1**, 93.4) & 22.30 \\ PatchCore(10\%) & (99.0, **98.1**, 93.5) & 24.45 \\ PatchCore(1\%) & (99.0, 98.0, 93.1) & 26.17 \\ PaDiM & (95.3, 97.5, 92.1) & 60.32 \\ ProtoAD & (97.7, 97.2, **93.6**) & **72.45** \\ \hline \end{tabular} \end{table} Table 4: Comparison of inference speed. Scores include image-level AUROC, pixel-level AUROC, and PRO-score. The best results are bold-faced.

Figure 4: Qualitative anomaly localization results of our method. From top to bottom: abnormal images, ground-truth, and anomaly score maps produced by our method.

\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \multirow{2}{*}{**Category**} & \multicolumn{4}{c}{**Reconstruction-based**} & \multicolumn{6}{c}{**OOD-based**} \\ \cline{2-11} & **MKD** & **DAAD** & **RIAD** & **DRAEM** & **SPADE** & **PaDiM** & **P-SVDD** & **CutPaste** & **P-Core** & **ProtoAD** \\ \hline Carpet & 79.3 & 86.6 & 84.2 & 97.0 & - & - & 92.6 & 98.3 & 98.7 & 99.5 \\ Grid & 78.0 & 95.7 & 99.6 & 99.9 & - & - & 96.2 & 97.5 & 98.2 & 93.7 \\ Leather & 95.1 & 86.2 & 100 & 100 & - & - & 97.4 & 99.5 & 100 & 100 \\ Tile & 91.6 & 88.2 & 98.7 & 99.6 & - & - & 91.4 & 90.5 & 98.7 & 99.2 \\ Wood & 94.3 & 98.2 & 93.0 & 99.1 & - & - & 90.8 & 95.5 & 99.2 & 99.1 \\ \hline **Texture** & 87.6 & 91.0 & 95.1 & 99.1 & - & 98.8 & 93.7 & 96.3 & 90.0 & 98.3 \\ \hline Bottle & 93.4 & 97.6 & 99.9 & 99.2 & - & - & 98.1 & 97.6 & 100 & 99.9 \\ Cable & 89.2 & 84.4 & 81.9 & 91.8 & - & - & 96.8 & 90.0 & 99.5 & 98.3 \\ Capsule & 80.5 & 76.7 & 88.4 & 98.5 & - & - & 95.8 & 97.4 & 98.1 & 93.1 \\ Hazelnut & 98.4 & 92.1 & 83.3 & 100 & - & - & 97.5 & 97.3 & 100 & 100 \\ Metal Nut & 73.6 & 75.8 & 88.5 & 98.7 & - & - & 98.0 & 93.1 & 100 & 99.9 \\ Pill & 82.7 & 90.0 & 83.8 & 98.9 & - & - & 95.1 & 95.7 & 96.6 & 95.8 \\ Screw & 83.3 & 98.7 & 84.5 & 93.9 & - & - & 95.7 & 96.7 & 98.1 & 94.9 \\ Toothbrush & 92.2 & 99.2 & 100 & 100 & - & - & 98.1 & 98.1 & 100 & 99.7 \\ Transistor & 85.6 & 87.6 & 90.9 & 93.1 & - & - & 97.0 & 93.0 & 100 & 97.7 \\ Zipper & 93.2 & 85.9 & 98.1 & 100 & - & - & 95.1 & 99.3 & 90.4 & 94.6 \\ \hline **Object** & 87.8 & 88.8 & 89.9 & 97.4 & - & 93.6 & 96.7 & 95.8 & 99.2 & 97.4 \\ \hline **All** & 87.7 & 89.5 & 91.7 & **98.0** & 85.5 & 95.3 & 95.7 & 96.0 & **99.1** & 97.7 \\ \hline \end{tabular} \end{table} Table 3: Anomaly detection performance on MVTec AD (Image-level AUROC). The best results of the two classes of methods are bold-faced respectively.

Compared with the previous methods, our model achieves the highest speed, which is 1.2x, 2.7x, and 9.5x faster than PaDiM, PatchCore, and SPADE, respectively.
The high inference speed is mainly because our model performs inference in an end-to-end manner, and the main computation added to the feature extraction network is the \(1\times 1\) convolutional layer. Compared to the reconstruction-based methods, our method does not need a cumbersome network training process.

### Ablation Study

We report ablation studies on the MVTec AD dataset to evaluate the impact of different components of our method on the performance.

#### 4.4.1 Feature Layer Selection

ResNet-like deep networks [38, 39] include several convolutional stages. The feature maps from different stages can compose a feature hierarchy for an image. Since the deepest feature maps in the hierarchy are biased towards the ImageNet classification task, we only adopt the features at the low and middle hierarchy levels (stages 1-3) for anomaly detection and localization. Table 5 gives the performance achieved with the features from different levels and their combination. It can be observed that the features from hierarchy level 2 achieve the best performance among the first three levels, and a combination of the three levels can further improve the performance. Therefore, our method uses the combination of the first three feature levels as the patch feature.

\begin{table} \begin{tabular}{c c c c} \hline Feature Level & Texture & Object & All \\ \hline level 1 & (96.2, 96.6) & (85.5, 92.7) & (89.0, 94.0) \\ level 2 & (97.9, 97.3) & (97.2, 95.5) & (97.4, 96.1) \\ level 3 & (98.0, 96.7) & (95.7, 96.0) & (96.5, 96.2) \\ level 2+3 & (97.9, 97.3) & (96.8, 96.9) & (97.1, 96.9) \\ level 1+2+3 & (98.3, 97.5) & (97.4, 97.1) & (97.7, 97.2) \\ \hline \end{tabular} \end{table} Table 5: Anomaly detection and localization performance of ProtoAD with features at different levels. Each tuple shows image-level AUROC and pixel-level AUROC.

\begin{table} \begin{tabular}{c c c c} \hline Partition & Texture & Object & All \\ \hline P2 & (98.9, 97.7, 48132) & (98.2, 97.2, 92904) & (98.5, 97.3, 77980) \\ P3 & (98.5, 97.6, 7802) & (97.8, 97.3, 20616) & (98.0, 97.4, 16345) \\ P4 & (98.4, 97.4, 1166) & (97.4, 97.1, 3397) & (97.7, 97.2, 2653) \\ P5 & (98.1, 97.1, 234) & (95.2, 96.6, 902) & (96.1, 96.8, 679) \\ P6 & (95.8, 96.4, 56) & (92.2, 95.9, 190) & (93.4, 96.1, 146) \\ \hline Best & (99.0, 97.7, 17559) & (98.2, 97.2, 52844) & (98.5, 97.4, 41083) \\ \hline Ours & (98.3, 97.5, 1787) & (97.4, 97.1, 4626) & (97.7, 97.2, 3680) \\ \hline \end{tabular} \end{table} Table 6: Anomaly detection and localization performance of ProtoAD with different FINCH partitions. Each tuple shows image-level AUROC, pixel-level AUROC, and average cluster numbers.

#### 4.4.2 Partition Selection from Clustering Hierarchy

FINCH is a hierarchical agglomerative clustering algorithm. It recursively merges clusters from the bottom up and provides a set of partitions in a hierarchical structure. Each successive partition is a super-set of its preceding partitions, and the number of clusters in it is smaller than in the preceding partitions. Thus, we need to select a partition from the clustering hierarchy as the clustering result. We report the performance of our method with different partitions, from the second (P2) to the sixth (P6) partition of FINCH, in Table 6. We do not include the first partition because it has a huge number of clusters. The results in Table 6 indicate that the average performance decreases along with the merging process.
This may be because, when the number of clusters gets smaller, clusters are less compact and unsuitable for anomaly detection. On the other hand, if the number of clusters is too large, there are too many prototypes, and the inference time and storage would increase rapidly. We also give the "Best" performance, which FINCH can achieve by selecting the best partition for each category respectively. This best performance is the upper bound that our method can achieve. However, selecting a partition based on the average performance (from P2 to P6) or the performance for each category (Best) is time-consuming and not suitable for real applications. In our method, we stop FINCH when the number of clusters is less than 10,000 and use the final partition as the clustering result, and give its results in the last line of Table 6. Our partition selection rule achieves performance very close to the best one with only a tenth of the clusters. Therefore, our method reaches a good trade-off between effectiveness and efficiency.

#### 4.4.3 FINCH vs. K-Means

We compare the FINCH clustering algorithm with k-means for prototype-based anomaly detection. In our method, we choose the partition generated by FINCH which has fewer than 10,000 clusters as the clustering result. For a fair comparison, we set k to 10,000 for k-means. The results in Table 7 indicate that the method based on FINCH (the third column) achieves better performance than that based on k-means (the first column). Although k-means may achieve better performance by tuning k, doing so is time-consuming and not feasible for real applications.

\begin{table} \begin{tabular}{c c c c} \hline & K-Means & K-Means & FINCH \\ Category & (L2) & (Norm L2) & (Cosine) \\ \hline texture & (95.0, 95.3) & (97.2, 96.6) & (98.3, 97.5) \\ \hline object & (92.9, 95.9) & (95.4, 96.1) & (97.4, 97.1) \\ \hline all & (93.6, 95.7) & (96.0, 96.2) & (97.7, 97.2) \\ \hline \end{tabular} \end{table} Table 7: Anomaly detection and localization performance of ProtoAD with different clustering methods. Each tuple shows image-level AUROC and pixel-level AUROC.

#### 4.4.4 Feature Normalization and Cosine Similarity

We also explore the importance of feature normalization for prototype-based anomaly detection. As shown in Table 7, k-means with Euclidean distance on the \(L2\)-normalized features (Norm L2) outperforms k-means with Euclidean distance on the original features (L2) in both anomaly detection and anomaly localization, and achieves greater improvements in anomaly detection. When the features are \(L2\)-normalized, cosine similarity and Euclidean distance are equivalent in the sense of nearest neighbor searching. Therefore, we use cosine similarity for clustering and for measuring the deviation of test patch features from normal patch features. We further implement the cosine similarity with a \(1\times 1\) convolution and append it to the feature extraction network, so that inference can be performed in an end-to-end manner.

### Results on BTAD

In Table 8, we report the results of our method on the BTAD dataset and compare them with those of the state-of-the-art OOD-based methods (SPADE, PaDiM, and PatchCore) and the approaches evaluated in [32]. In [32], three reconstruction-based methods have been evaluated: an auto-encoder (AE) with MSE loss, an auto-encoder with MSE and SSIM loss, and Vision-Transformer-based image anomaly detection and localization (VT-ADL). We report the image-level and pixel-level AUROC for each category and their average over all categories.
For anomaly detection, ProtoAD achieved the best image-level AUROC. For anomaly localization, ProtoAD achieved the second-best pixel-level AUROC (97.0), very close to the best one (97.4) achieved by PaDiM. These results show our method's potential to generalize to new anomalous scenarios.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **Category** & AE(MSE) & AE(MSE+SSIM) & VT-ADL & SPADE & PaDiM & PatchCore & ProtoAD \\ \hline 01 & ( -, 49.0) & ( -, 53.0) & ( -, 99.0) & (93.2, 90.2) & (100.0, 97.2) & (95.4, 96.2) & (97.0, 95.5) \\ 02 & ( -, 92.0) & ( -, 96.0) & ( -, 94.0) & (74.8, 93.5) & (81.5, 95.4) & (85.1, 95.2) & (85.2, 96.5) \\ 03 & ( -, 55.0) & ( -, 89.0) & ( -, 77.0) & (99.4, 96.3) & (98.6, 99.6) & (99.7, 99.5) & (98.9, 99.0) \\ \hline **All** & ( -, 78.7) & ( -, 79.3) & ( -, 90.0) & (89.1, 93.3) & (93.4, **97.4**) & (93.4, 97.0) & (**94.0**, 97.0) \\ \hline \hline \end{tabular} \end{table} Table 8: Anomaly detection and localization performance on BTAD (Image-level and Pixel-level AUROC). The best results are bold-faced.

## 5 Conclusion

We propose ProtoAD, a new OOD-based image anomaly detection and localization method. First, a pre-trained neural network is used to extract features for image patches. Then, a non-parametric clustering algorithm learns the prototypes of the normal patch features. Finally, an image anomaly detection and localization network is constructed by appending the feature extraction network with the \(L2\) feature normalization, a \(1\times 1\) convolutional layer, a channel max-pooling, and a subtraction operation. As a result, ProtoAD does not need a network training process and can conduct anomaly detection and localization in an end-to-end manner. Experimental results on the MVTec AD dataset and the BTAD dataset show that ProtoAD can achieve competitive performance compared to state-of-the-art methods. Furthermore, compared to other OOD-based methods, ProtoAD is more elegant and efficient. And compared to the reconstruction-based methods, ProtoAD does not need a cumbersome network training process. Therefore, it can better meet the requirements of real applications.

## 6 Funding and Competing interests

This research was supported by the National Defense Basic Scientific Research Program of China under Grant JCKY2020903B002. The authors have no relevant financial or non-financial interests to disclose.
2310.19630
Convolutional Neural Networks for Automatic Detection of Intact Adenovirus from TEM Imaging with Debris, Broken and Artefacts Particles
Regular monitoring of the primary particles and purity profiles of a drug product during development and manufacturing processes is essential for manufacturers to avoid product variability and contamination. Transmission electron microscopy (TEM) imaging helps manufacturers predict how changes affect particle characteristics and purity for virus-based gene therapy vector products and intermediates. Since intact particles can characterize efficacious products, it is beneficial to automate the detection of intact adenovirus against a non-intact-viral background mixed with debris, broken, and artefact particles. In the presence of such particles, detecting intact adenoviruses becomes more challenging. To overcome the challenge, due to such a presence, we developed a software tool for semi-automatic annotation and segmentation of adenoviruses and a software tool for automatic segmentation and detection of intact adenoviruses in TEM imaging systems. The developed semi-automatic tool exploited conventional image analysis techniques while the automatic tool was built based on convolutional neural networks and image analysis techniques. Our quantitative and qualitative evaluations showed outstanding true positive detection rates compared to false positive and negative rates where adenoviruses were nicely detected without mistaking them for real debris, broken adenoviruses, and/or staining artefacts.
Olivier Rukundo, Andrea Behanova, Riccardo De Feo, Seppo Ronkko, Joni Oja, Jussi Tohka
2023-10-30T15:23:25Z
http://arxiv.org/abs/2310.19630v3
Convolutional Neural Networks for Automatic Detection of Intact Adenovirus from TEM Imaging with Debris, Broken and Artefacts Particles ###### Abstract Regular monitoring of the primary particles and purity profiles of a drug product during development and manufacturing processes is essential for manufacturers to avoid product variability and contamination. Transmission electron microscopy (TEM) imaging helps manufacturers predict how changes affect particle characteristics and purity for virus-based gene therapy vector products and intermediates. Since intact particles can characterize efficacious products, it is beneficial to automate the detection of intact adenovirus against a non-intact-viral background mixed with debris, broken, and artefact particles. In the presence of such particles, detecting intact adenoviruses becomes more challenging. To overcome the challenge due to such a presence, we developed a software tool for semi-automatic annotation and segmentation of adenoviruses and a software tool for automatic segmentation and detection of intact adenoviruses in TEM imaging systems. The developed semi-automatic tool exploited conventional image analysis techniques while the automatic tool was built based on convolutional neural networks and image analysis techniques. Our quantitative and qualitative evaluations showed outstanding true positive detection rates compared to false positive and negative rates where adenoviruses were nicely detected without mistaking them for real debris, broken adenoviruses, and/or staining artefacts. **Keywords:** Adenovirus; Debris; Artefact; Convolutional Neural Networks; Segmentation; TEM

## 1 Introduction

Large-scale production of viral vectors for gene therapy requires tools to characterize the virus particles [2]. Transmission electron microscopy (TEM) is the only imaging technique allowing the direct visualization of viruses, due to its nanometer-scale resolution [21], [4]. Consequently, with TEM, it becomes possible to understand what occurs with viral particles when parameters or process operations change or when formulations are modified. Different biomanufacturing process conditions have different effects on particle characteristics, and images that reveal particle morphology together with quantitative analysis can provide a good understanding of and insights into the impact of such process changes via assessing overall morphology (stability, purity, integrity, and clustering), which might affect vector performance [1], [3]. However, due to the need for considerable operator skills, special laboratory facilities, and the limitations in providing quantitative data, TEM is not routinely used in process development [25]. It is important to note that TEM image analysis is typically performed in specialized TEM facilities, and the time to get results is often long [25]. Also, the process to annotate, segment, and detect intact adenoviruses in TEM images remains challenging due to the presence of broken adenoviruses, debris, and various kinds of staining artefacts, as illustrated in Figure 1. Consequently, intact adenovirus segmentation in TEM images using traditional image analysis methods is not reliable [5], which makes intact adenovirus characterization challenging. Deep convolutional neural networks (CNNs) have shown excellent performance in many biomedical imaging tasks which were thought to be unsolvable before the deep learning era [22], [23], [24].
Here, the CNN of interest was U-Net, which is widely used and known for its excellent segmentation precision on medical images [7], [16], [18]. U-Net is a modified and extended version of a fully convolutional network that works with very few training images to yield more precise segmentations [7], [16], [18]. Although many works have since been proposed for the segmentation of biomedical images using U-Net, its variants, or closely related versions [7], [8], [9], [10], [34], U-Net outperformed the earlier best methods and still provides fast and accurate image segmentation. However, research in the automatic segmentation of intact adenoviruses in TEM images remains in its infancy. There exist a few works that proposed both CNN-based and non-CNN-based solutions to image analysis of TEM images of virus particles [11], [12], [13], [14], [15]. References [11] and [15] propose methods for segmentation of different types of viruses, including adenoviruses, from TEM images using a morphological image analysis pipeline [15] and U-Net [11]. Reference [12] proposes a method for classification between different types of viruses and makes available an open TEM dataset to study virus-type classification. Reference [13] proposes a fully connected neural network to detect feline calicivirus particles from TEM images. Finally, reference [14] focuses on the reduction of the number of trainable U-Net weights for segmentation of various virus particles from TEM images. However, among these works, there was no clear focus or dedicated work on intact adenovirus segmentation and detection with the aim of improving the characterization of adenoviruses in images captured by high-throughput TEM systems for the production of viral vectors. Therefore, we introduce a U-Net-based approach, together with software tools for fast and easy training, for the segmentation of intact adenoviruses from high-throughput TEM images. Our purpose is not only to test the automation of intact adenovirus detection from TEM imaging with debris, broken, and artefact particles, but also to demonstrate that detecting intact adenoviruses with high accuracy, even in highly challenging imaging conditions, is possible with U-Net.

## 2 Material and Methods

Figure 1: Intact adenovirus particles (top-left-side – blue arrow), broken adenovirus particles (top-right-side – blue arrows), debris particles (bottom-left-side: inside red circles – large debris and blue arrows – small debris), artefact particles (bottom-right-side: inside blue circles – examples of uranyl acetate staining artefacts).

### Image data

The imaging was performed by using the MiniTEM microscope by Vironova AB, Stockholm, Sweden [6], with an operating voltage of 25 kV and with a field of view (FOV) of 3 \(\upmu\)m for the adenovirus samples [27]. We first acquired a training and validation set of 50 images of size 2048-by-2048. The intact adenoviruses of this set were annotated using a semi-automatic software tool developed by us specifically for this purpose. We used this image set to train the CNN and validate its performance using cross-validation. Second, we acquired a test set of 20 MiniTEM images that were completely independent of the training and validation set and were used to test the final CNN model for adenovirus detection. This test set contained very challenging images with varying levels of debris and staining artefacts that would be too challenging for the traditional image analysis methods.
### Software tool for semi-automatic annotation and segmentation of intact adenovirus

#### 2.2.1 Semi-automatic annotation

The image annotation process is one of the most challenging steps that affect the training outcome for the automatic segmentation of microscopy images [11]. Also, annotating large enough training sets for supervised learning is a bottleneck, and dedicated tools to speed up the annotation process are still needed [28], [29], [30]. In this regard, a GUI-based software tool for semi-automated segmentation of MiniTEM images was developed and later used to create the annotated MiniTEM images used for training the U-Net model. The software tool is available at [https://github.com/AndreaBehan/miniTEM-Image-Segmentation](https://github.com/AndreaBehan/miniTEM-Image-Segmentation). A video showcasing the annotation process is available in the supplement. The tool can be used for rapid manual and semi-automatic annotation and semi-automatic segmentation of intact adenoviruses and other types of debris.

#### 2.2.2 Semi-automatic segmentation

Using the developed semi-automatic tool requires first creating a set of candidate adenoviruses through automatic image analysis operations. It is important to note that the entire procedure is based on the assumption that an intact adenovirus is a circular, bright object surrounded by a darker area.

Figure 2: The developed software tool for semi-automatic segmentation of adenoviruses in MiniTEM images. Top row: (A) Close-up of the original MiniTEM image, (B) contrast-enhanced image, (C) median filtered contrast-enhanced image, (D) image with large bright areas masked out, (E) adenoviruses detected by Hough transform. Bottom row: left, GUI of the annotation tool with an image with the overlaid automatic segmentation; right, image with overlaid segmentation after manual corrections.

The key steps are as follows (see panels A, B, C, D, and E of Figure 2):

1. Enhance the contrast of the image by saturating the top and the bottom 1% of intensity values in the image, and perform median filtering, with a 15-by-15 window, on the enhanced image (Figure 2, panels B and C).
2. Segment out large bright areas of the median filtered image by thresholding followed by morphological operations. This operation is necessary to allow the Hough transform in the next step to concentrate on adenoviruses. Note that this step does not remove intact adenoviruses, as they are surrounded by a darker area (Figure 2, panel D).
3. Find adenovirus boundaries by using the circular Hough transform [31] (Figure 2, panel E).
4. Remove candidate adenoviruses that do not have a dark area surrounding them by detecting the mode of the histogram of the rectangular patch around the adenovirus.

After that, the user can interactively add and remove adenoviruses, as shown in the supplementary video.
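Our tool implements these steps in MATLAB; purely as an illustration, a rough Python/OpenCV analogue of steps 1-3 could look as follows (all kernel sizes, radii, and Hough thresholds are placeholder values that would need tuning for real MiniTEM data, and the dark-surround check of step 4 is omitted):

```python
import cv2
import numpy as np

def candidate_adenoviruses(image_u8):
    """Candidate detection: contrast stretch, median filter, mask bright
    areas, then a circular Hough transform. Returns (x, y, radius) rows."""
    # Step 1: saturate the top and bottom 1% of intensities, then median filter.
    img = image_u8.astype(np.float32)
    lo, hi = np.percentile(img, (1, 99))
    stretched = np.clip((img - lo) * 255.0 / (hi - lo + 1e-6), 0, 255).astype(np.uint8)
    filtered = cv2.medianBlur(stretched, 15)       # 15-by-15 window
    # Step 2: mask out large bright areas (thresholding + morphology) so the
    # Hough transform can concentrate on the viruses.
    _, bright = cv2.threshold(filtered, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    bright = cv2.morphologyEx(bright, cv2.MORPH_OPEN, np.ones((31, 31), np.uint8))
    masked = filtered.copy()
    masked[bright > 0] = 0
    # Step 3: circular Hough transform to find candidate virus boundaries.
    circles = cv2.HoughCircles(masked, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                               param1=100, param2=20, minRadius=20, maxRadius=60)
    return [] if circles is None else circles[0]
```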
### Software tool for automatic segmentation and detection of intact adenovirus

#### 2.3.1 U-Net architecture

Our CNN for automatic segmentation was based on the U-Net architecture [16] as implemented in MATLAB. U-Net features a U-shaped design, comprising contracting and expansive paths. Figure 3 shows the input and output layers, as well as the intermediate layers and connections, of the network as visualized by the analyzeNetwork function in MATLAB. The contracting path consists of repeating blocks of convolution, ReLU activation, and max pooling. The expansive path involves transposed convolution, ReLU activation, concatenation with the downsampled feature map, and additional convolution.

#### 2.3.2 Training

To avoid high computational demands during the U-Net training process, each 2048-by-2048 image was split into 64 non-overlapping image patches of size 256-by-256. Each original 16-bit MiniTEM image was converted to an 8-bit image to minimize memory usage during training and evaluation. The execution environment was a single GPU with an Nvidia GeForce RTX 3070 graphics card and an 11th Gen Intel(R) Core(TM) i7-11700F @ 2.50GHz, 2496 Mhz, 8 Core(s), 16 Logical Processor(s).

#### 2.3.3 Hyperparameter settings

Figure 3: The U-Net architecture used in this work. Conv means convolution. ReLU is a rectified linear unit. DepthConv is depth concatenation. UpConv means up-convolution or transposed convolution. MaxPool is max pooling.

Hyperparameter settings were manually adjusted, with no further adjustments if 90% training accuracy was reached during the first 10% of all epochs [18]. Training hyperparameters not listed below remained set to their defaults, including the number of first encoder filters and the encoder depth. The number of epochs = 30; the minimum batch size = 4; the initial learning rate = 0.0001; L2 regularization = 0.00005; optimizer = Adam (adaptive moment estimation algorithm). The loss function used was the default cross-entropy function provided by MATLAB's U-Net Layers function for image segmentation using the U-Net architecture [33]. In other words, the pixel classification layer was not replaced with a weighted pixel classification layer.

#### 2.3.4 Data augmentation

The data augmentation options used consisted of random reflection in the left-right direction as well as a range of vertical and horizontal translations, applied with 50% probability, on the pixel interval ranging from -10 to 10.

### Post-processing

A systematic combination of image filtering, dilating, and burning functions [35], [36], [37] was applied to improve the quality of the outlines of U-Net's segmentation masks. In this way, we could emphasize or highlight the most precise outlines of intact adenoviruses.

### Performance evaluation metrics

We evaluated the segmentation both in terms of detection and segmentation performance. For detection, we counted the number of true positives (TP: intact adenovirus correctly detected by U-Net), false positives (FP: adenovirus incorrectly detected by U-Net), and false negatives (FN: intact adenovirus not detected by U-Net) [19]. Based on the TP, FP, and FN counts, we computed _precision_, _recall_, and _F-value_ as \[precision=\frac{TP}{TP+FP} \tag{1}\] \[recall=\frac{TP}{TP+FN} \tag{2}\] \[F\text{-}value=\frac{(1+\beta^{2})\cdot recall\cdot precision}{\beta^{2}\cdot precision+recall} \tag{3}\] In Eq. 3, \(\beta\) corresponds to the relative importance of precision versus recall, which we set to \(\beta=1\) [19], [38]. We defined correct (and incorrect) detections based on the overlap of the segmentation masks and ground-truth masks, which required setting a threshold value on the overlap. To demonstrate that the detection results were not dependent on a single threshold value, we set our main or primary threshold at 75%. We also examined the secondary thresholds of 50% and 25%, as illustrated in Figure 4.
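As a minimal sketch, the detection metrics of Eqs. 1-3, together with the Dice and IoU scores used for the segmentation evaluation below, can be computed as follows (illustrative Python helpers, not our MATLAB evaluation code):

```python
import numpy as np

def detection_metrics(tp, fp, fn, beta=1.0):
    """Precision, recall, and F-value from detection counts (Eqs. 1-3)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_value = (1 + beta**2) * recall * precision / (beta**2 * precision + recall)
    return precision, recall, f_value

def dice_and_iou(pred_mask, gt_mask):
    """Dice score and intersection over union for binary segmentation masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou
```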
For the external test set, for which manually created ground-truth segmentations did not exist, we used the developed software tool to detect and count the number of detected and missed adenoviruses as well as those incorrectly highlighted as detected or not detected. Again, we set the thresholds at 75%, 50%, and 25%, and defining a match was subjective (see Figure 4-b). For the evaluation of the semantic segmentation, we used the Dice score and the intersection over union (IoU), also known as the Jaccard coefficient, as the performance measures [19].

## 3 Results

### Quantitative evaluation with K-fold cross-validation on the training and validation sets

We used 5-fold cross-validation on the training and validation set to quantitatively evaluate the segmentation results. Figure 5 presents the quantitative evaluation results on detection. As the figure illustrates, the detection rates were high on average, with both the average precision and recall exceeding 90% at all the studied thresholds. On some folds, where the total number of intact adenoviruses was low, the precision and/or recall dropped below the 90% limit. However, even in these folds, the precision and recall exceeded 80% in most cases, indicating sufficient precision and recall for practical applications to monitor the manufacturing process of a drug product. For the segmentation evaluation, the average Dice score exceeded 0.80, which indicates that the segmentation quality corresponded well to the ground truth. In fact, perfect segmentation results were not expected here due to the semi-automated nature of the annotation of the training data and the potential difficulty in setting the boundary of the intact adenovirus. The segmentation quality was more than sufficient to assess the morphology of intact adenoviruses in monitoring the manufacturing process. Figure 6 presents the average Dice and Intersection over Union (IoU) scores for the detected intact adenoviruses in each of the 5 folds.

Figure 4: Illustration showing the ideal (a) and real (b) cases of TP, FP, and FN: In our experiments, we divided the TP situations into categories based on subjectively defined thresholds. The primary threshold was set at 75% of the area of the full circle, while secondary thresholds were set at 50% and 25% of the area of the full circle to represent the extent of differences between the ground truth and output masks. FP and FN were defined as cases where there were complete and noticeable differences between the areas of ground truth and output masks, as shown in green and purple colors, respectively.

Figure 5: (a) Average number, (b) Recall, (c) Precision, and (d) F-value. Blue dots represent the results corresponding to the individual cross-validation folds, and the red dot is their average. TP stands for true positive. TP75 represents our main threshold set at 75%, TP50 represents the secondary TP threshold set at 50%, and TP25 is another secondary TP threshold set at 25%. TP75 + TP50 refers to the case where we count the detections with more than 50% overlap with the ground-truth segmentations as correct. TP75 + TP50 + TP25 refers to the case where we count the detections with more than 25% overlap with the ground-truth segmentations as correct.

Figure 6: Average Dice and IoU score. Blue dots represent the results corresponding to the individual cross-validation folds, and the red dot is their average.
### Results on the external test set The external test set of 20 MiniTEM images was selected to test the accuracy of automatic detection of intact adenoviruses. These were mixed-quality images containing intact adenoviruses, debris, artefacts, and broken particles. The quantitative detection results are shown in Figure 7, and all 20 segmentations are shown in Figure 8. Note that we scored the detections manually, as no ground-truth segmentation was available. Here, a high recall was achieved in all the cases, but the precision, albeit higher than 90% on average, dropped lower in some of the cases. Figure 8 shows the segmentations overlaid on the images, suggesting good segmentation performance even for the images containing non-intact adenoviruses, debris, and various staining artefacts. ## 4 Discussion In this work, we introduced a U-Net-based system for segmentation of intact adenoviruses from high-throughput TEM images for the characterization of virus particles required in the production of virus vectors. Our experimental results demonstrated a great potential for precise automated detection of intact adenovirus in TEM system images of varying quality. More interestingly, the developed software tool for automatic detection did not mistake structures, debris, or artefacts resembling internally stained particles or doublet and triplet conformations of adenoviruses for intact adenovirus. Nor did it mistake adenoviruses with gradually degenerated integrity or black spots for intact adenovirus. However, due to the presence of debris and artefacts in MiniTEM images, there were a few cases of false negative and false positive detections, as shown in Section 3. Figure 7: (a) Number, (b) Recall, (c) Precision, and (d) F-value. TP75 is our main threshold set at 75%, and TP50 and TP25 are secondary TP thresholds set at 50% and 25%, respectively. TP75 + TP50 refers to the case where we count the detections with more than 50% overlap with the ground-truth segmentations as correct. TP75 + TP50 + TP25 refers to the case where we count the detections with more than 25% overlap with the ground-truth segmentations as correct. We successfully demonstrated that it is possible to develop an automated segmentation tool for high-throughput experiments with relatively little operator effort by combining a semi-automated custom-made software tool for training and U-Net. While neural networks have been increasingly used for segmentation, detection, and classification of viruses and other particles from TEM images, there have been no previous works specifically focusing on the high-throughput imaging required in the production of virus vectors. Figure 8: The automatic segmentations on the external test set images. Most of the previous work has concentrated on the detection of particles (such as human cytomegalovirus [39, 40] or caveolae [41]) or the classification of different types of viruses in TEM images [12]. In the field of materials science, there is great interest in using neural network-based approaches for characterizing nanoparticles based on TEM images [42, 43, 44]. Furthermore, [23] proposed the use of a fully residual U-Net for the segmentation of small extracellular vesicles from TEM images. Apart from the application itself, these image analysis problems differ considerably from the one we were facing.
In our case, the main challenge lies in the variable quality of the images and the variable appearance of adenoviruses in images acquired under different biomanufacturing process conditions, rather than in the shape or form of the adenoviruses, which are well-defined. ## 5 Conclusions To support improved adenovirus characterization, we developed a software tool for automatic segmentation and detection of intact adenovirus in TEM imaging systems, particularly MiniTEM. Despite the presence of debris, artefacts, and broken particles in MiniTEM images, the developed software tool demonstrated that intact adenovirus particles can be accurately and automatically segmented and detected. Future research may cover definitions of small, large, and rod-shaped debris for automatic segmentation and quantification purposes. ## Funding This work was funded via the project titled "Enhancing the Innovation Potential by advancing the know-how on biomedical image analysis" by the European Social Fund (S21770). ## Author contributions Olivier Rukundo: Designed the convolutional neural network for automatic segmentation of adenoviruses, developed the software tool for automatic detection of intact adenoviruses, and wrote the paper. Andrea Behanova and Riccardo De Feo: Designed and developed the software tool for semi-automatic segmentation of adenoviruses and debris. Seppo Ronkko and Joni Oja: Provided the training and testing images, reviewed the paper, and confirmed the validity of the study. Jussi Tohka: Read the paper and suggested modifications, conceptualized and supervised the research, and acquired the funding for the project that supported this work. ## Conflict of interest The authors declare no conflict of interest. ## Supplementary material Software for semi-automatic annotation: [https://blogs.uef.fi/kubiac/software/](https://blogs.uef.fi/kubiac/software/) Software for automatic segmentation: [https://blogs.uef.fi/kubiac/software/](https://blogs.uef.fi/kubiac/software/) Software overview: [https://www.youtube.com/watch?v=4UZJHDPKI-g](https://www.youtube.com/watch?v=4UZJHDPKI-g)
2304.00598
Stochastic Reachability of Uncontrolled Systems via Probability Measures: Approximation via Deep Neural Networks
This paper poses a theoretical characterization of the stochastic reachability problem in terms of probability measures, capturing the probability measure of the state of the system that satisfies the reachability specification for all probabilities over a finite horizon. We achieve this by constructing the level sets of the probability measure for all probability values and, since our approach is only for autonomous systems, we can determine the level sets via forward simulations of the system from a point in the state space at some time step in the finite horizon to estimate the reach probability. We devise a training procedure which exploits this forward simulation and employ it to design a deep neural network (DNN) to predict the reach probability provided the current state and time step. We validate the effectiveness of our approach through three examples.
Karthik Sivaramakrishnan, Vignesh Sivaramakrishnan, Rosalyn Alex Devonport, Meeko M. K. Oishi
2023-04-02T18:57:55Z
http://arxiv.org/abs/2304.00598v2
# Stochastic Reachability of Discrete-Time Stochastic Systems via Probability Measures ###### Abstract We develop a framework for stochastic reachability for discrete-time systems in terms of probability measures. We reframe the stochastic reachability problem in terms of reaching and avoiding known probability measures, and seek to characterize a) the initial set of distributions which satisfy the reachability specifications, and b) a Markov based feedback control that ensures these properties for a given initial distribution. That is, we seek to find a Markov feedback policy which ensures that the state probability measure reaches a target probability measure while avoiding an undesirable probability measure over a finite time horizon. We establish a backward recursion for propagation of state probability measures, as well as sufficient conditions for the existence of a Markov feedback policy via the 1-Wasserstein distance. Lastly, we characterize the set of initial distributions which satisfy the reachability specifications. We demonstrate our approach on an analytical example. ## I Introduction Stochastic reachability analysis is an established method for probabilistic safety that provides an assurance that states that start within some initial set can reach a desired target set while avoiding a "bad" set, with at least some known likelihood. Such an assurance is important for stochastic dynamical systems in which satisfaction of state and input constraints is paramount, including problems in space vehicle rendezvous [1] and robotics [2, 3]. A theoretical foundation based in dynamic programming has been developed for stochastic reachability of discrete-time stochastic hybrid systems [4, 5]. However, the computational hurdles associated with dynamic programming are significant, as it requires gridding the state space and evaluating a value function at all points on the grid. Alternative approaches have been developed, based in abstraction [6, 7], sampling [8], and underapproximative methods [9] as well as approximative methods [10] in convex optimization. We propose an alternative technique for stochastic reachability that is based in probability measures. Probability measures can be used to represent not only the distributions of the state, but also whether a state lies within some region of interest. Probability measures have been employed in covariance steering [11, 12, 13] and distribution steering [14, 15] problems, in which the optimization problems minimize the distance between probability measures, as well as in data-driven reachability via Christoffel functions [16]. One advantage of employing probability measures is that we can exploit tools from measure theory to enforce guarantees that hold almost surely, meaning that properties over a probability measure hold with probability one. Further, recent work in optimal transport has yielded significant gains in computing with probability measures [17]. In this paper, we develop a theoretical foundation for propagation of probability measures that represent the distribution of the state of a discrete-time stochastic system over a finite horizon, and synthesize a Markov feedback policy that satisfies the reachability specifications. The Markov feedback policy minimizes the Wasserstein distance between distributions at each time step. In contrast to standard methods (Figure 1), which seek to synthesize the largest set of initial conditions which satisfy reachability specifications with at least a minimum likelihood, our approach assures reachability specifications almost surely. We obtain the set of initial probability measures, as well as a corresponding Markov feedback policy, in lieu of an initial set. Fig. 1: Top: The standard stochastic reach-avoid problem finds the set of initial states for which there exists a Markov feedback policy that ensures that the state avoids an unsafe set (red) and reaches a target set (purple). Bottom: We re-frame the stochastic reach-avoid problem in terms of probability measures, to find the set of initial probability measures (orange) for which a Markov feedback policy ensures that the state probability measure avoids an unsafe probability measure (red) and reaches a desired target probability measure (purple) almost surely. Our approach is
In contrast to standard methods (Figure 1), which seek to synthesize the largest set of initial conditions which satisfy reachability specifications with at least a minimum likelihood, our approach assures reachability specifications almost surely. We obtain the set of initial probability measures, as well as a corresponding Markov feedback policy, in lieu of an initial set. Our approach is Fig. 1: Top: The standard stochastic reach-avoid problem finds the set of initial states for which there exists a Markov feedback policy that ensures that the state avoids an unsafe set (red) and reaches a target set (purple). Bottom: We re-frame the stochastic reach-avoid problem in terms of probability measures, to find the set of initial probability measures (orange) for which a Markov feedback policy ensures that the state probability measure avoids an unsafe probability measure (red) and reaches a desired target probability measure (purple) almost surely. enabled by the representation of standard target and avoid sets as probability measures. We describe mathematical preliminaries from probability and measure theory in Section II. In Section III, we describe the recursive evolution of probability measures over a finite-time horizon, presuming a known Markov feedback policy, and in Section IV we synthesize a controller which is assured to satisfy the reachability specifications almost surely. We demonstrate our approach on a small, analytic example in Section V, and conclude in Section VI. ## II Preliminaries and Problem Formulation Let \(\mathbb{N}\) denote the natural numbers and \(\mathbb{N}_{[a,b]}\) denote the set of natural numbers from \(a\) to \(b\) inclusively, for \(a,b\in\mathbb{N}\) and \(a\leq b\)[18]. Let \(\mathbb{R}^{p}\) describe the extended real numbers \(\mathbb{R}^{p}\cup\{-\infty,\infty\}^{p}\) of dimension \(p\in\mathbb{N}\). Matrices and vectors are denoted with uppercase \(D\in\mathbb{R}^{l\times p}\) and lowercase \(d\in\mathbb{R}^{l}\), respectively. Random vectors are bolded, \(\mathbf{y}\in\mathbb{R}^{p}\). A probability space is a tuple \((\Omega,\mathscr{F},\mu)\), where \(\Omega\) is the sample space and \(\mathscr{F}\) is the \(\sigma\)-algebra containing measurable subsets of \(\Omega\), i.e. events in the sample space, and a probability measure \(\mu:\mathscr{F}\to[0,1]\). We consider the set of reals \(\mathbb{R}^{p}\) that is separable and metrizable, that is, a Polish space [19, Conventions]. A continuous random vector \(\mathbf{y}=[\mathbf{y}_{1}\cdots\mathbf{y}_{p}]^{\intercal}\) is a measurable function \(\mathbf{y}:\Omega\to\mathbb{R}^{p}\) which maps from the probability space to a Borel measurable space \((\mathbb{R}^{p},\mathscr{B}(\mathbb{R}^{p}))\), where \(\mathscr{B}(\mathbb{R}^{p})\) denotes the Borel \(\sigma\)-algebra and the inverse of the random vector is a subset of the \(\sigma\)-algebra, i.e., \(\mathbf{y}^{-1}(\mathscr{B}(\mathbb{R}^{p}))=\{\omega:\omega\in\Omega, \mathbf{y}(\omega)\in\mathscr{B}(\mathbb{R}^{p})\}\). Since we assume Borel measurable probability spaces on the set of reals, we utilize the cumulative distribution function (cdf) as the probability measure, \(\mu_{\mathbf{y}}\), to define a probability space, \((\mathbb{R}^{p},\mathscr{B}(\mathbb{R}^{p}),\mu_{\mathbf{y}})\). The following Definition and Theorem establish this relationship. **Definition 1** (Cumulative Distribution Function [20, Sec. 6.3]).: _A function \(\Phi:\mathbb{R}^{p}\to[0,1]\subset\mathbb{R}\) is a cdf when it satisfies the following conditions:_ 1. 
_The left and right limits of_ \(\Phi\) _are zero and one respectively, that is,_ \(\lim_{y\to-\infty}\Phi(y_{1},\ldots,y_{p})=0\) _and_ \(\lim_{y\to\infty}\Phi(y_{1},\ldots,y_{p})=1\)_,_ 2. _For a product of intervals,_ \(\times_{i=1}^{p}(a_{i},b_{i}]\subset\mathbb{R}^{p}\)_, the mixed monotonicity of the function is non-negative, i.e._ \[\Delta_{p}^{(a,b)}\Phi=\Phi(b_{1},\ldots,b_{p})-\sum_{j=1}^{p}\Phi(b_{1},\ldots,b_{j-1},a_{j},b_{j+1},\ldots,b_{p})+\sum_{1\leq j<k\leq p}\Phi(b_{1},\ldots,b_{j-1},a_{j},b_{j+1},\ldots,b_{k-1},a_{k},b_{k+1},\ldots,b_{p})+\cdots+(-1)^{p}\Phi(a_{1},\ldots,a_{p})\geq 0 \tag{1}\] _where \(y=[y_{1}\cdots y_{p}]^{\intercal}\in\mathbb{R}^{p}\). Further, we denote for ease of notation \(\Phi(y_{1},\ldots,y_{p})=\Phi(y)\)._ **Theorem 1** (Defining a cdf for a real random vector on a probability space).: _For any random vector \(\mathbf{y}\) on the probability space \((\mathbb{R}^{p},\mathscr{B}(\mathbb{R}^{p}),\mu_{\mathbf{y}})\), there exists a unique cdf, \(\Phi_{\mathbf{y}}\), that satisfies Definition 1, where_ \[\mu_{\mathbf{y}}((-\infty,y_{1}]\times\cdots\times(-\infty,y_{p}]) \tag{2a}\] \[=\mathbb{P}\{\mathbf{y}_{1}(\omega)\leq y_{1}\cap\cdots\cap\mathbf{y}_{p}(\omega)\leq y_{p}\} \tag{2b}\] \[=\lim_{a\to-\infty}\Delta_{p}^{(a,b)}\Phi_{\mathbf{y}}=\Phi_{\mathbf{y}}(y) \tag{2c}\] Proof.: From [20, Sec. 6.1 and 6.3]. Since cdfs are unique, the converse of Theorem 1 allows one to define a random vector and its probability space, \((\mathbb{R}^{p},\mathscr{B}(\mathbb{R}^{p}),\mu_{\mathbf{y}})\), given a cdf. **Corollary 1** (Defining a probability space via cdf).: _If a function \(\Phi\) satisfies Definition 1, then there always exists a random vector \(\mathbf{y}\) on a probability space \((\mathbb{R}^{p},\mathscr{B}(\mathbb{R}^{p}),\mu_{\mathbf{y}})\) such that \(\Phi\) is its cdf, that is, \(\Phi=\Phi_{\mathbf{y}}\)._ Proof.: From [20, Sec. 6.1 and 6.3]. Thus, we can either define a random vector \(\mathbf{y}\) such that there exists a corresponding cdf, \(\Phi_{\mathbf{y}}(y)\), or define a cdf, \(\Phi(y)\), such that it is the cdf of the random vector \(\mathbf{y}\), that is, \(\Phi_{\mathbf{y}}(y)=\Phi(y)\). Finally, we say a function \(g:\mathbb{R}^{p}\to\mathbb{R}^{l}\) is a Borel measurable function as long as it is continuous almost everywhere, meaning that continuity holds everywhere except on a measurable subset of measure zero [21, Sec. 2.2]. ### _Discrete-time Stochastic Dynamical System_ We define a discrete-time, stochastic system that evolves over a time horizon of \(N\) steps, for time steps \(k\in\mathbb{N}_{[0,N-1]}\). **Definition 2**.: _A discrete-time, stochastic system is a time-varying, Borel measurable mapping, \(f_{k}:\mathbb{R}^{n}\times\mathcal{U}\times\mathcal{W}_{k}\to\mathbb{R}^{n}\),_ \[x_{k+1}=f_{k}(x_{k},u_{k},w_{k})=[f_{1,k}(x_{k},u_{k},w_{k})\cdots f_{n,k}(x_{k},u_{k},w_{k})]^{\intercal}, \tag{3}\] _where:_ 1. \(\mathcal{U}\subseteq\mathbb{R}^{m}\) _is the set of admissible inputs that is compact, where_ \(u_{k}\in\mathcal{U}\) _is the input vector._ 2. \(\mathcal{W}=\{(\mathcal{W}_{k},\mathscr{B}(\mathcal{W}_{k}))\}_{k=0}^{N-1}\)_, where_ \(\mathcal{W}_{k}\subseteq\mathbb{R}^{p}\)_, is the set of Borel measurable spaces denoting the disturbance space, with random vector_ \(\mathbf{w}_{k}\) _on a probability space_ \((\mathcal{W}_{k},\mathscr{B}(\mathcal{W}_{k}),\mu_{\mathbf{w}_{k}})\) _where_ \(\Phi_{\mathbf{w}_{k}}(w_{k})\) _is its cdf._ 3.
\(\Phi_{\mathbf{x}_{k+1}}:\mathscr{B}(\mathbb{R}^{n})\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathcal{W}_{k}\times\mathcal{U}\to[0,1]\) _is a cdf representing the time-varying transition kernel for a random vector_ \(\mathbf{x}_{k+1}\) _on the probability space_ \((\mathbb{R}^{n},\mathscr{B}(\mathbb{R}^{n}),\mu_{\mathbf{x}_{k+1}})\)_, given the current state_ \(x_{k}\in\mathbb{R}^{n}\)_, disturbance_ \(\mathbf{w}_{k}\)_, and input_ \(u_{k}\in\mathcal{U}\)_._ _We define the transition kernel in terms of the probability measure of the random vector \(\mathbf{w}_{k}\) [22, Ch. 8],_ \[\Phi_{\mathbf{x}_{k+1}}(x_{k+1},x_{k},u_{k})=\mu_{\mathbf{w}_{k}}(\{x_{k+1}\in\mathbb{R}^{n}:f_{1,k}(x_{k},u_{k},\mathbf{w}_{k})\leq x_{1,k+1}\times\ldots\times f_{n,k}(x_{k},u_{k},\mathbf{w}_{k})\leq x_{n,k+1}\}|x_{k},u_{k}) \tag{4a}\] \[=\Phi_{f_{k}(x_{k},u_{k},\mathbf{w}_{k})}(x_{k+1},x_{k},u_{k}). \tag{4b}\] We also presume a Markov feedback policy \(\pi_{k}:\mathbb{R}^{n}\to\mathcal{U}\) that depends on the current state, \(x_{k}\). **Definition 3** ([22, Sec. 8.1, Def. 8.2]).: _A Markov feedback policy is a sequence \(\pi=\{\pi_{0},\ldots,\pi_{k},\ldots,\pi_{N-1}\}\) such that each Markov feedback map, \(\pi_{k}:\mathbb{R}^{n}\to\mathcal{U}\) for \(k\in\mathbb{N}_{[0,N-1]}\), is a time-varying, non-random, Borel measurable function, that is, \(u_{k}=\pi_{k}(x_{k})\)._ ### _Set Representations via Measure Theory_ We use tools from measure theory to represent a change of measure and the probability of a random vector \(\mathbf{y}\) residing in a Borel measurable set via probability measures. **Theorem 2** (Radon-Nikodym Theorem, [20]).: _If \(\nu\) and \(\mu\) are \(\sigma\)-finite measures (e.g., probability measures) on the Borel measurable space \((\mathbb{R}^{p},\mathscr{B}(\mathbb{R}^{p}))\), where \(\mu\) is absolutely continuous with respect to \(\nu\), that is, \(\mu<<\nu\), then there exists a measurable function \(g(y)\geq 0\) such that_ \[\mu(G)=\int_{G}g(y)\mathrm{d}\nu(y),\ \forall G\in\mathscr{B}(\mathbb{R}^{p}). \tag{5}\] _The function \(g(y)=\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\) is the Radon-Nikodym derivative of \(\mu\) with respect to \(\nu\)._ Theorem 2 imposes a change of measure via a non-negative measurable function. We make use of the Dirac measure, a probability measure over a set, as such a non-negative measurable function. **Definition 4** (Dirac Measures [21, Sec. 1.3]).: _Given a probability space \((\mathbb{R}^{n},\mathscr{B}(\mathbb{R}^{n}))\), a Dirac measure is \(\delta_{y}:G\to\{0,1\}\) where_ \[\delta_{y}(G)=\begin{cases}1,&y\in G\\ 0,&y\notin G\end{cases}. \tag{6}\] Note that the Dirac measure functions similarly to an indicator function (\(1_{G}(y)=1\) if \(y\in G\), and is 0 otherwise). **Corollary 2** (Random Vector Residing in a Set via Dirac Measure).: _Given the probability measure of a random vector residing within a set, \(\mu(G)=\mathbb{P}\{\mathbf{y}\in G\}\), and the cdf of \(\mathbf{y}\), \(\Phi_{\mathbf{y}}\), such that \(\mu(G)<<\Phi_{\mathbf{y}}\), then the Dirac measure of a point \(y\) residing in a set, \(\delta_{y}(G)\), is a non-negative measurable function such that_ \[\mu(G)=\int_{\mathbb{R}^{p}}\delta_{y}(G)\mathrm{d}\Phi_{\mathbf{y}}(y). \tag{7}\] Proof.: This follows directly from Theorem 2.
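To make Corollary 2 concrete: the integral \(\int\delta_{y}(G)\mathrm{d}\Phi_{\mathbf{y}}(y)\) is exactly the expectation of the indicator of \(G\) under \(\Phi_{\mathbf{y}}\), which is trivial to estimate by sampling. A minimal NumPy sketch (our illustration, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Probability that a standard Gaussian random vector y lands in the box
# G = [-1, 1]^2, i.e., mu(G) = integral of delta_y(G) dPhi_y(y) (Eq. 7).
samples = rng.standard_normal((100_000, 2))      # draws from Phi_y
in_G = np.all(np.abs(samples) <= 1.0, axis=1)    # Dirac measure delta_y(G)
print(in_G.mean())  # Monte Carlo estimate, ~0.466 = (2*Phi(1) - 1)^2
```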
### _Problem Formulation_ We seek to find the set of initial probability measures for which the state of the stochastic system in Definition 2, with some Markov feedback policy \(\pi\), will satisfy \[\mathbb{P}\left\{\mathbf{x}_{N}\in\mathcal{T}\cap\left(\cap_{k=0}^{N-1}\mathbf{x}_{k}\not\in\mathcal{A}\right)\right\} \tag{8}\] almost surely. In words, we wish for the system to reside in a target region, \(\mathcal{T}\), at final time \(N\), while avoiding an unsafe region, \(\mathcal{A}\), for \(k\in\mathbb{N}_{[0,N-1]}\) with probability one. Equation (8) imposes reaching a target region, \(\mathcal{T}\), while avoiding an undesired region, \(\mathcal{A}\), by staying within a target tube that is its complement, \(\mathcal{A}^{\complement}\) [9]. We first transform these set representations to probability measures of the target set and the target tube via Definition 4: \[\mu_{\mathcal{T}}(x_{N})=\delta_{x_{N}}(\mathcal{T}) \tag{9a}\] \[\mu_{\mathcal{A}^{\complement}}(x_{k})=\delta_{x_{k}}(\mathcal{A}^{\complement}) \tag{9b}\] with \(\mu_{\mathcal{A}}(x_{k})=1-\mu_{\mathcal{A}^{\complement}}(x_{k})\). Note that this transformation is readily extendable to random sets [23]. **Problem 1** (Stochastic Reach-Avoid via Probability Measures).: _Given a target probability measure, \(\mu_{\mathcal{T}}\), and an avoid probability measure, \(\mu_{\mathcal{A}}\), we seek to find \(P_{\mathbf{x}_{0}}\), the set of initial probability measures, for which there exists a Markov feedback policy such that the probability measure of the state of the system in Definition 2 reaches \(\mu_{\mathcal{T}}\) at the final time \(N\) and avoids \(\mu_{\mathcal{A}}\) for all time steps \(k\in\mathbb{N}_{[0,N-1]}\), almost surely._ We solve Problem 1 in two steps. First, in Section III, we presume that a Markov feedback policy exists and we recursively construct the state probability measures which satisfy the reach-avoid specifications almost surely backwards-in-time. Second, in Section IV, we construct Markov feedback policies for every initial probability measure that satisfies the reach-avoid specification almost surely, then construct the set of initial probability measures, \(P_{\mathbf{x}_{0}}\). ## III Backward Recursion of the State Probability Measure We seek to characterize the backward recursion that enables propagation of probability measures (Figure 2) that satisfy the target and avoid probability measures. To do so, we first describe the probability measure of \(\mathbf{x}_{k+1}\) in terms of the cdf of the prior state, \(\mathbf{x}_{k}\), using the relationship in (4).
\[\Phi_{\mathbf{x}_{k+1}}(x_{k+1})=\int_{\mathbb{R}^{n}}\Phi_{f_{k}(x_{k},u_{k},\mathbf{w}_{k})}(x_{k+1},x_{k},\pi_{k}(x_{k}))\,\mathrm{d}\Phi_{\mathbf{x}_{k}}(x_{k}) \tag{10}\] **Lemma 1** (Uniqueness of State Probability Measure at Time \(k\)).: _For a probability measure \(\Phi_{\mathbf{x}_{k+1}}\) and a known Markov feedback map \(\pi_{k}(x_{k})\), there exists a unique probability measure \(\Phi_{\mathbf{x}_{k}}\) such that (10) holds._ Proof.: From [21, Lemma 10.4.3]. Satisfaction of the avoid and target probability measures requires additional constraints to be placed on this propagation. To constrain the state probability measure \(\Phi_{\mathbf{x}_{k}}\) so that it avoids the avoid probability measure \(\mu_{\mathcal{A}}\), we seek to satisfy \[\int_{\mathbb{R}^{n}}\mu_{\mathcal{A}^{\complement}}(x_{k})\,\mathrm{d}\Phi_{\mathbf{x}_{k}}(x_{k})=1. \tag{11}\] For the state probability measure to meet the target set requirement, we must constrain \(\Phi_{\mathbf{x}_{N-1}}\) as follows, \[\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\mu_{\mathcal{T}}(x_{N})\,\mathrm{d}\Phi_{\hat{\mathbf{x}}_{N}}(x_{N},x_{N-1},\pi_{N-1}(x_{N-1}))\,\mathrm{d}\Phi_{\mathbf{x}_{N-1}}(x_{N-1})=1, \tag{12}\] for \(\hat{\mathbf{x}}_{N}=f_{N-1}(x_{N-1},u_{N-1},\mathbf{w}_{N-1})\). We formalize these notions in the following Theorem. **Theorem 3**.: _Given a target probability measure \(\mu_{\mathcal{T}}\), a target tube probability measure \(\mu_{\mathcal{A}^{\complement}}\), and a Markov feedback policy \(\pi\), the state probability measure must satisfy (10) for \(k\in\mathbb{N}_{[0,N-1]}\), as well as_ 1. _(11) for_ \(k\in\mathbb{N}_{[0,N-2]}\)_,_ 2. _(12) for_ \(k=N-1\)_,_ _in order to satisfy the reachability constraints almost surely._ Proof.: Consider the case in which \(k=N-1\): Note that (12) holds as long as \(\Phi_{\mathbf{x}_{N-1}}(x_{N-1})\) satisfies (11). Now consider \(k\in\mathbb{N}_{[0,N-2]}\).
We employ backwards induction, beginning with the base case \(k=N-2\), where \[\Phi_{\mathbf{x}_{N-1}}(x_{N-1})=\int_{\mathbb{R}^{n}}\Phi_{f_{N-2}(x_{N-2},u_{N-2},\mathbf{w}_{N-2})}(x_{N-1},x_{N-2},\pi_{N-2}(x_{N-2}))\,\mathrm{d}\Phi_{\mathbf{x}_{N-2}}(x_{N-2}) \tag{13}\] holds almost everywhere as long as \(\Phi_{\mathbf{x}_{N-2}}(x_{N-2})\) satisfies (11). If the equality holds for the case \(k=j\) where \(j<N-2\), then it must also hold for the case \(k=j-1\). Observe that for \(k=j\) and \(k=j-1\) the recursions, respectively, \[\Phi_{\mathbf{x}_{j+1}}(x_{j+1})=\int_{\mathbb{R}^{n}}\Phi_{f_{j}(x_{j},u_{j},\mathbf{w}_{j})}(x_{j+1},x_{j},\pi_{j}(x_{j}))\,\mathrm{d}\Phi_{\mathbf{x}_{j}}(x_{j}) \tag{14a}\] \[\Phi_{\mathbf{x}_{j}}(x_{j})=\int_{\mathbb{R}^{n}}\Phi_{f_{j-1}(x_{j-1},u_{j-1},\mathbf{w}_{j-1})}(x_{j},x_{j-1},\pi_{j-1}(x_{j-1}))\,\mathrm{d}\Phi_{\mathbf{x}_{j-1}}(x_{j-1}), \tag{14b}\] hold almost everywhere as long as \(\Phi_{\mathbf{x}_{j}}(x_{j})\) and \(\Phi_{\mathbf{x}_{j-1}}(x_{j-1})\) satisfy (11). Thus, the equality holds for all \(k\leq N-2\). Having established the conditions under which the backward recursion exists, we can also form a _certificate_ if the Markov feedback policy does not ensure that the state probability measure satisfies the reachability specifications almost surely. **Definition 5** (Certificate of Infeasibility).: _A backward recursion with a given Markov feedback policy provides a certificate of infeasibility at time step \(k\in\mathbb{N}_{[0,N]}\) when the recursion in Theorem 3 fails. That is, it cannot ensure a state probability measure, \(\Phi_{\mathbf{x}_{k}}\), that satisfies (11) for \(k\in\mathbb{N}_{[0,N-2]}\) or (11) and (12) for \(k=N-1\)._ As a result, when a given Markov feedback policy does not satisfy Definition 5, it may be an indication that the feedback policy should be modified, for example to allow more control effort, to ensure satisfaction. ## IV Feedback controller synthesis via probability measures With a complete description of the machinery of propagating probability measures via a backward recursion, we can now consider synthesis of Markov feedback maps in this same framework. First, we note that one of the main theoretical challenges associated with this framework is the need to enforce equality of probability measures almost everywhere. In particular, the formulation of an optimal Markov feedback policy within this framework requires a description of distances between probability measures. For this reason, we employ the Wasserstein distance from optimal transport [19], as it allows us to match two probability measures by measuring the distance between them. **Theorem 4** (Kantorovich-Rubinstein Theorem [19], Ch. 6).: _Let \((\mathcal{M},\mathscr{B}(\mathcal{M}))\) denote a Polish space. Let \(\mu_{1}\) and \(\mu_{2}\) represent probability measures on \((\mathcal{M},\mathscr{B}(\mathcal{M}))\), that is, \(\mu_{1},\mu_{2}\in\mathscr{B}(\mathcal{M})\). Thus, \(\forall\mu_{1},\mu_{2}\in\mathscr{B}(\mathcal{M})\), the dual of the 1-Wasserstein distance can be written as_ \[W_{1}(\mu_{1},\mu_{2})=\sup_{\|g\|_{\mathrm{Lip}}\leq 1}\left\{\int_{\mathcal{M}}g(x)\mathrm{d}\mu_{1}(x)-\int_{\mathcal{M}}g(x)\mathrm{d}\mu_{2}(x)\right\}, \tag{15}\] _where the supremum is taken over functions \(g:\mathcal{M}\to\mathbb{R}_{\geq 0}\) that satisfy the Lipschitz condition \(|g(x_{1})-g(x_{2})|\leq D(x_{1},x_{2})\) for all \(x_{1},x_{2}\in\mathcal{M}\)._ Theorem 4 allows us to measure the largest point of deviation between two probability measures via this distance metric.
Fig. 2: The backward recursion (10) and constraints (11), (12) together ensure that the target probability measure can be reached and the avoid probability measure avoided, subject to stochastic system dynamics and a known controller. In addition, the 1-Wasserstein distance exhibits the property of lower semicontinuity, which is an essential property that allows us to optimize over functions. **Lemma 2** (Lower semicontinuity of Wasserstein [19], Ch. 6).: _Let \((\mathcal{M},\mathscr{B}(\mathcal{M}))\) be a Polish space, which is a complete, separable metric space with Borel \(\sigma\)-algebra \(\mathscr{B}(\mathcal{M})\), and \(W_{1}:\mathcal{M}\to\mathbb{R}_{\geq 0}\). The 1-Wasserstein distance in Theorem 4 is lower semicontinuous on \(\mathscr{B}(\mathcal{M})\) if, for any sequence of probability measures \(\mu_{i}\) and \(\nu_{i}\) on \(\mathscr{B}(\mathcal{M})\) that converges weakly to probability measures \(\mu\) and \(\nu\) on \(\mathscr{B}(\mathcal{M})\), we have that \(W_{1}(\mu,\nu)\leq\liminf_{i\to\infty}W_{1}(\mu_{i},\nu_{i})\)._ Proof.: Given by Theorem 4.1 and Remark 6.12 in [19]. Lemma 2 essentially states that, if a sequence of probability measures \((\mu_{i},\nu_{i})\) weakly converges to another pair of probability measures \((\mu,\nu)\), then \(W_{1}(\mu_{i},\nu_{i})\) provides an infimum for \(W_{1}(\mu,\nu)\) as \(i\in\mathbb{N}\) goes to \(\infty\). Therefore, the 1-Wasserstein distance is lower semicontinuous, meaning that it is possible to optimize over this distance metric. ### _Controller Synthesis_ We now apply Theorem 4 to the left and right sides of the equalities in equations (10) and (12), to obtain (16) and (17), respectively. Equations (16) and (17) can be used to construct a Markov feedback policy such that (10) and (12), respectively, are satisfied. Feedback policies that solve (16) and (17) are optimal, in that they minimize the 1-Wasserstein distance. (Note that this is a different notion of optimality of the control than in standard stochastic reachability, in which an optimal control minimizes a reachability-specific cost function.) **Theorem 5** (Existence of an Optimal Markov Feedback Policy).: _Consider a recursion of \(\Phi_{\mathbf{x}_{k}}\) that satisfies (16) for \(k\in\mathbb{N}_{[0,N-2]}\) and (17) for \(k=N-1\). If \(\pi_{k}^{*}(x_{k}):\mathbb{R}^{n}\to\mathcal{U}\) satisfies_ \[\pi_{k}^{*}(x_{k})=\operatorname*{arg\,inf}_{\pi_{k}\in\mathcal{U}}\,W_{1}\left(\Phi_{f_{k}(x_{k},u_{k},\mathbf{w}_{k})},\Phi_{\mathbf{x}_{k+1}}\right) \tag{18}\] _for every \(\Phi_{\mathbf{x}_{k}}\) that satisfies (11) for \(k\in\mathbb{N}_{[0,N-2]}\), and if \(\pi_{N-1}^{*}(x_{N-1}):\mathbb{R}^{n}\to\mathcal{U}\) satisfies_ \[\pi_{N-1}^{*}(x_{N-1})=\operatorname*{arg\,inf}_{\pi_{N-1}\in\mathcal{U}}\,W_{1}\left(\mu_{\mathcal{T}}(x_{N})\Phi_{\hat{\mathbf{x}}_{N}},\mu_{\mathcal{T}}(x_{N})\right), \tag{19}\] _where \(\hat{\mathbf{x}}_{N}=f_{N-1}(x_{N-1},u_{N-1},\mathbf{w}_{N-1})\), for every \(\Phi_{\mathbf{x}_{N-1}}\) that satisfies (11) and (12) for \(k=N-1\), then \(\pi^{*}=\{\pi_{0}^{*}(x_{0}),\pi_{1}^{*}(x_{1}),\ldots,\pi_{N-1}^{*}(x_{N-1})\}\) is an optimal Markov feedback policy, where each \(\pi_{k}^{*}(x_{k})\) is an optimal Markov feedback map._ Proof.: We first consider the \(k=N-1\) time step.
Through lower semicontinuity of the Wasserstein distance, provided via Lemma 2, we know that, since the optimization in (19) is sound, there must exist an optimal Markov feedback map \(\pi_{N-1}^{*}(x_{N-1})\in\mathcal{U}\) as in (19) for every \(\Phi_{\mathbf{x}_{N-1}}\) that satisfies (11). Therefore, \(\Phi_{\mathbf{x}_{N}}\) and \(\Phi_{\mathbf{x}_{N-1}}\) will satisfy (12) for the \(\pi_{N-1}^{*}(x_{N-1})\) that minimizes (17). For \(k\in\mathbb{N}_{[0,N-2]}\), we again exploit lower semicontinuity of the Wasserstein distance via Lemma 2 to note that there must exist an optimal Markov feedback map \(\pi_{k}^{*}(x_{k})\in\mathcal{U}\) that optimizes (16). Hence, for every \(\Phi_{\mathbf{x}_{k}}\) that satisfies (11), \(\Phi_{\mathbf{x}_{k+1}}\) and \(\Phi_{\mathbf{x}_{k}}\) will satisfy (10). With the existence of an optimal Markov feedback policy assured, we now turn to synthesis of the policy. **Corollary 3**.: _Given an optimal Markov feedback policy \(\pi^{*}\) as in Theorem 5, if the Wasserstein distance in (16) and (17) is zero, then \(\pi^{*}\) ensures that the state probability measure reaches \(\mu_{\mathcal{T}}\) at the final time \(N\), while staying within \(\mu_{\mathcal{A}^{\complement}}\) for \(k\in\mathbb{N}_{[0,N-1]}\), almost surely._ Proof.: Using a similar logic as in Lemma 2, note that for a sequence of state probability measures, we assume an optimal Markov feedback policy that guarantees the Wasserstein distance is zero. Thus, the state probability measures on the left and right sides of (10) and (12), respectively, are equivalent. With this equivalence, the target probability measure, \(\mu_{\mathcal{T}}\), and the target tube measure, \(\mu_{\mathcal{A}^{\complement}}\), are satisfied almost surely. Corollary 3 ensures that the state probability measure reaches the target probability measure at time step \(k=N\) and avoids the avoid probability measure for \(k\in\mathbb{N}_{[0,N-1]}\) almost surely. ### _Characterizing the Set of Initial State Probability Measures_ Recall that Problem 1 seeks to find the set of initial state probability measures, \(P_{\mathbf{x}_{0}}\), for which there exists a Markov feedback policy such that the state probability measure satisfies the reach-avoid specification almost surely. Corollary 3 provides a means to generate the set \(P_{\mathbf{x}_{0}}\), which we capture via the following definition. **Definition 6**.: _The set of initial state probability measures, \(P_{\mathbf{x}_{0}}\), consists of the probability measures \(\Phi_{\mathbf{x}_{0}}\) that satisfy Corollary 3._ Synthesis of the set \(P_{\mathbf{x}_{0}}\) is the focus of future work. ### _Relationship to Discrete-Time Stochastic Reachability_ Canonical stochastic reachability frameworks [4, 5, 23] construct a set of initial states for which there exists a Markov feedback policy that maximizes (8). The approach described here provides a stricter interpretation, by insisting that (8) be satisfied almost surely. Additionally, in contrast to a point-based or set-based evaluation, the approach proposed here is based on entire distributions. Lastly, the controller synthesis that we propose varies considerably from that addressed via existing approaches, in that instead of maximizing a likelihood of safety, our proposed controller minimizes violation of equality constraints that describe propagation of distributions in a manner consistent with the reachability specifications. We plan to investigate these relationships more precisely in future work.
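Since the policies of Theorem 5 are defined through 1-Wasserstein minimizations, it is worth noting that in one dimension W1 is cheap to evaluate from samples (it equals the L1 distance between the two cdfs). A short illustrative sketch using SciPy (ours, not part of the paper):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)

# Empirical W1 between two Gaussian sample clouds. For 1-D Gaussians with
# equal standard deviation, W1 equals the difference of the means (0.5 here),
# since the optimal transport map is a pure shift.
mu1 = rng.normal(0.0, 1.0, 50_000)
mu2 = rng.normal(0.5, 1.0, 50_000)
print(wasserstein_distance(mu1, mu2))  # ~0.5
```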
## V Examples Consider the time-discretized single integrator with additive disturbance over a horizon \(N=1\), \[\mathbf{x}_{k+1}=\mathbf{x}_{k}+\Delta T\mathbf{u}_{k}+\mathbf{w}_{k}, \tag{20}\] with sampling rate \(\Delta T=1\) and \(\mathbf{w}_{k}\) that follows a normal distribution \(\mathbf{w}_{k}\sim\mathcal{N}(m,\sigma)\), with mean \(m=0\) and standard deviation \(\sigma=1\). We assume the state probability measure is described by Gaussian probability measures, which are amenable to analytical solutions. For ease of calculation, we consider an affine feedback controller (as opposed to a general Markov controller) that preserves Gaussianity of the state probability measure, \[\mathbf{u}_{k}=h_{k}\mathbf{x}_{k}+v_{k}, \tag{21}\] where \(h_{k}\in\mathbb{R}\) is a scaling term and \(v_{k}\in\mathbb{R}\) is a feedforward term. We presume the input lies within \(u_{\min}=-0.1\) and \(u_{\max}=0.1\) via the probabilistic constraint, \[\mathbb{P}\left\{\cap_{k=0}^{N-1}(u_{\min}\leq\mathbf{u}_{k}\leq u_{\max})\right\}=1. \tag{22}\] We combine (21) with (20) to form the closed-loop dynamics, \[\mathbf{x}_{k+1}=\mathbf{x}_{k}+\Delta T(h_{k}\mathbf{x}_{k}+v_{k})+\mathbf{w}_{k} \tag{23a}\] \[=(1+\Delta Th_{k})\mathbf{x}_{k}+\Delta Tv_{k}+\mathbf{w}_{k}, \tag{23b}\] with mean and variance \[m_{\mathbf{x}_{k+1}}=(1+\Delta Th_{k})m_{\mathbf{x}_{k}}+\Delta Tv_{k}+m_{\mathbf{w}_{k}}, \tag{24a}\] \[\sigma_{\mathbf{x}_{k+1}}^{2}=(1+\Delta Th_{k})^{2}\sigma_{\mathbf{x}_{k}}^{2}+\sigma_{\mathbf{w}_{k}}^{2}. \tag{24b}\] ### _Non-Random Target and Avoid Sets_ For the target set \(\mathcal{T}=\{x\in\mathbb{R}:x\leq 1\}\), we construct the target probability measure \(\mu_{\mathcal{T}}(x)=\delta_{x}(\mathcal{T})\). For the avoid set \(\mathcal{A}=(-0.5,0.5)\subset\mathbb{R}\), we construct the target tube probability measure \(\mu_{\mathcal{A}^{\complement}}(x)=\delta_{x}(\mathcal{A}^{\complement})\). Because we only consider one time step, we must assure that \(\Phi_{\mathbf{x}_{0}}\) satisfies (11) and \(\Phi_{\mathbf{x}_{1}}\) satisfies (12), almost surely. Since \(\int_{\mathbb{R}}\mu_{\mathcal{A}^{\complement}}(x_{0})\mathrm{d}\Phi_{\mathbf{x}_{0}}(x_{0})=1\) for \(\mathbf{x}_{0}\sim\mathcal{N}(m_{\mathbf{x}_{0}},\sigma_{\mathbf{x}_{0}})\), we have two solutions, \(\Phi_{\mathbf{x}_{0}}^{(1)}\) and \(\Phi_{\mathbf{x}_{0}}^{(2)}\), where \(\bar{\Phi}_{\mathbf{x}_{0}}^{(2)}=1-\Phi_{\mathbf{x}_{0}}^{(2)}\) is the complement cdf, as shown in Figure 4. Similarly, with the affine control, we must ensure \(\int_{\mathbb{R}}\int_{\mathbb{R}}\mu_{\mathcal{T}}(x_{1})\mathrm{d}\Phi_{(1+\Delta Th_{0})x_{0}+\Delta Tv_{0}+\mathbf{w}_{0}}(x_{1})\mathrm{d}\Phi_{\mathbf{x}_{0}}(x_{0})=1\). This is possible by the use of the normal approximation of Dirac measures, since the normal distribution is a Schwartz function [24]. That is, for some mean \(m_{\mathbf{x}_{0}}\), \(\lim_{\sigma_{\mathbf{x}_{0}}\to 0}\Phi\left(\frac{x_{0}-m_{\mathbf{x}_{0}}}{\sigma_{\mathbf{x}_{0}}}\right)=\delta_{x}(\{x\in\mathbb{R}:m_{\mathbf{x}_{0}}\leq x\})\), where \(\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,\mathrm{d}t\) is a standard Gaussian cdf. Hence we note that the two cases which satisfy (11) are \(m_{x_{0}}^{(1)}=0.5\) and \(m_{x_{0}}^{(2)}=-0.5\) with \(\sigma_{\mathbf{x}_{0}}\to 0\). Controller synthesis is accomplished by satisfying (12), to ensure the transition from \(\Phi_{\mathbf{x}_{0}}\) to \(\Phi_{\mathbf{x}_{1}}\).
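As a quick numerical illustration of the closed-loop moment recursion (23)-(24), the following sketch (ours, not from the paper) propagates the Gaussian mean and variance for one step; since \(h_{0}=0\) here, clipping the feedforward term \(v_{0}\) to \([-0.1,0.1]\) imposes the input bound (22) exactly.

```python
import numpy as np

def propagate(m_x, var_x, h, v, dT=1.0, m_w=0.0, var_w=1.0):
    """One step of the closed-loop single integrator (23)-(24).

    Mean:     m_{k+1}   = (1 + dT*h) * m_k + dT*v + m_w
    Variance: var_{k+1} = (1 + dT*h)^2 * var_k + var_w
    """
    a = 1.0 + dT * h
    return a * m_x + dT * v + m_w, a**2 * var_x + var_w

# Saturated feedforward (h0 = 0, v0 clipped to [-0.1, 0.1]):
v0 = np.clip(5.0, -0.1, 0.1)
print(propagate(m_x=0.5, var_x=1e-6, h=0.0, v=v0))
# -> mean 0.6, variance ~1: the disturbance variance alone keeps the
#    state measure far from any Dirac-like target measure.
```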
However, because control authority is bounded, it is not possible to satisfy (12). With the maximal value of control, the resulting state probability measure is still far from the desired probability measure. As shown in Figure 5 (left column), to achieve the desired measure (red), which is represented by a Dirac measure, we would need a probability measure whose standard deviation approaches 0 in the limit (right column). This would require \(v_{0}\) to be infinite. Note that \(h_{0}=0\), since both \(\Phi_{\mathbf{x}_{0}}^{(1)}\) and \(\Phi_{\mathbf{x}_{0}}^{(2)}\) utilize the normal approximation of a Dirac measure. The infeasibility shown in Figure 5 is due to the fact that this analysis seeks to satisfy (12) _almost surely_, which is quite strict. We anticipate that for this analysis to have utility beyond certificates of infeasibility for control, it will be important to examine relaxations that allow for probability measures within some allowable distance of the desired probability measure. ### _Random Target and Avoid Sets_ We now consider target and avoid sets that are random, meaning that the set takes a probabilistic representation. For a Bernoulli random variable \(\mathbf{y}\) with \(p=0.2\) and \(q=0.8\), the target set \(\mathcal{T}\) is random, with the set \(\mathcal{T}_{1}\) evaluated with 20% likelihood or \(\mathcal{T}_{2}\) evaluated with 80% likelihood, where \(\mathcal{T}_{1}=\{x\in\mathbb{R}:x\leq-0.5\}\) and \(\mathcal{T}_{2}=\{x\in\mathbb{R}:x\leq-1\}\). The resulting target probability measure is \(\mu_{\mathcal{T}}(x)=0.2\cdot\delta_{x}(\mathcal{T}_{1})+0.8\cdot\delta_{x}(\mathcal{T}_{2})\). For the target tube probability measure, consider the same \(\mathbf{y}\) such that \(\mu_{\mathcal{A}^{\complement}}(x)=0.2\cdot\delta_{x}(\mathcal{A}^{\complement}_{1})+0.8\cdot\delta_{x}(\mathcal{A}^{\complement}_{2})\), for \(\mathcal{A}^{\complement}_{1}=\{x\in\mathbb{R}:\ x\leq-0.5\ \text{or}\ x\geq 0.5\}\) and \(\mathcal{A}^{\complement}_{2}=\{x\in\mathbb{R}:\ x\leq-1\ \text{or}\ x\geq 1\}\). This case follows a similar argument to the non-random sets, in which \(\Phi_{\mathbf{x}_{0}}\) satisfies (11) and \(\Phi_{\mathbf{x}_{1}}\) satisfies (12), almost surely. Thus, we have two possible state probability measures of the form \(\Phi_{\mathbf{x}_{0}}(x_{0})=p\cdot\Phi_{\mathbf{x}_{0,1}}(x_{0})+q\cdot\Phi_{\mathbf{x}_{0,2}}(x_{0})\), a mixture of Gaussian probability measures with weight \(p=0.2\) on the component with mean \(m_{\mathbf{x}_{0,1}}\) and standard deviation \(\sigma_{\mathbf{x}_{0,1}}\), and weight \(q=0.8\) on the component with mean \(m_{\mathbf{x}_{0,2}}\) and standard deviation \(\sigma_{\mathbf{x}_{0,2}}\). We use a similar normal approximation argument as in the non-random case, resulting in \(m_{\mathbf{x}_{0,1}}^{(1)}=0.5\), \(m_{\mathbf{x}_{0,2}}^{(1)}=1\) and \(m_{\mathbf{x}_{0,1}}^{(2)}=-0.5\), \(m_{\mathbf{x}_{0,2}}^{(2)}=-1\), with all \(\sigma_{\mathbf{x}_{0,1}},\sigma_{\mathbf{x}_{0,2}}\to 0\), as shown in Figure 6. Similarly to the previous analysis, it is not possible to satisfy (12) with bounded control authority, as shown in Figure 7 (left column). Satisfaction of (12) would require the standard deviation of the mixture of Gaussian probability measures, \(\Phi_{\mathbf{x}_{1}}\), to approach zero, which would require \(v_{0}\) to be infinite and \(h_{0}=0\) (right column). Fig. 4: For a one-step time horizon for a Gaussian system with Dirac measure target and avoid sets and an affine control, satisfying (11) results in a Gaussian with standard deviation \(\sigma_{\mathbf{x}_{0}}\to 0\) (right, blue solid). For small values of \(\sigma_{\mathbf{x}_{0}}\) (left and middle), discrepancies between the Gaussian (cyan dashed) and the Dirac measure (red) that captures the target tube probability measure are evident. Fig. 5: With bounded control authority, for each of the two initial probability measures, \(\Phi_{\mathbf{x}_{0}}^{(1)}\) and \(\Phi_{\mathbf{x}_{0}}^{(2)}\), it is not possible to construct an affine controller that satisfies (12). This is seen in the fact that the state probability measure \(\Phi_{\mathbf{x}_{1}}\) (left, cyan dashed) does not attain the desired value of \(1\) before the target probability measure (red) does (as \(x_{1}\) increases). In contrast, if we were to allow infinite control authority (right), the feedforward gain \(v_{0}\rightarrow\infty\) (respectively \(v_{0}\rightarrow-\infty\)) would satisfy (12).
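For the random-set case, the initial measure is a two-component Gaussian mixture, whose cdf is easy to evaluate and inspect numerically. A short illustrative sketch (our assumption: the case-(2) means above with a small but nonzero standard deviation standing in for the Dirac limit):

```python
import numpy as np
from scipy.stats import norm

# Mixture cdf Phi_x0(x) = 0.2*Phi_1(x) + 0.8*Phi_2(x) for case (2).
p, q = 0.2, 0.8
m1, m2, sigma = -0.5, -1.0, 1e-3   # sigma -> 0 approximates the Dirac limit

def cdf_x0(x):
    return p * norm.cdf(x, m1, sigma) + q * norm.cdf(x, m2, sigma)

# The cdf steps by 0.8 at x = -1 (heavy component) and by 0.2 at x = -0.5.
print(cdf_x0(-1.1), cdf_x0(-0.75), cdf_x0(0.0))  # ~0.0, ~0.8, ~1.0
```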
As in the non-random set case, we cannot assure (12) due to the affine controller and Gaussian probability measure, as shown in Figure 7 (left) for the value of \(v_{k}\) which saturates the control (with \(h_{0}=0\)). For (12) to be satisfied, we require \(v_{0}\rightarrow-\infty\) (right), which, as in the previous case, requires infinite control authority. Fig. 4: For a one-step time horizon for a Gaussian system with Dirac measure target and avoid sets and an affine control, satisfying (11) results in a Gaussian with standard deviation \(\sigma_{\mathbf{x}_{0}}\to 0\) (right, blue solid). For small values of \(\sigma_{\mathbf{x}_{0}}\) (left and middle) discrepancies between the Gaussian (cyan dashed) and Dirac measure (red) that captures the target tube probability measure are evident. Fig. 5: With bounded control authority, for each of the two initial probability measures, \(\Phi_{\mathbf{x}_{1}}^{(1)}\) and \(\Phi_{\mathbf{x}_{1}}^{(2)}\), it is not possible to construct an affine controller that satisfies (12). This is seen in the fact that the state probability measure \(\Phi_{\mathbf{x}_{1}}\) (left, cyan dashed) does not attain the desired value of \(1\) before the target probability measure (red) does (as \(x_{1}\) increases). In contrast, if we were to allow infinite control authority (right), the feedforward gain of \(v_{0}\rightarrow\infty\) (respectively \(v_{0}\rightarrow-\infty\)) would satisfy (12). ## VI Conclusion We present a framework for stochastic reachability via probability measures. Given a Markov feedback policy, we establish the existence of the backward recursion of state probability measures. Then, we establish the conditions under which there exists a Markov feedback policy that minimizes a Wasserstein distance, ensuring satisfaction of propagation of the probability measures in a manner that respects reachability specifications. Future work will focus on relationships to the standard stochastic reachability problem and relaxations that exploit computational advantages of the Wasserstein distance. ## Acknowledgements This work has been supported in part by the NSF under awards CNS-1836900 and CMMI-2105631 as well as by NASA under the University Leadership Initiative award #80NSSC20M0163. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF or any NASA entity.
2302.05034
The Neural Networks Based Needle Detection for Medical Retinal Surgery
In recent years, deep learning technology has developed rapidly, and the application of deep neural networks in the medical image processing field has become the focus of the spotlight. This paper aims to achieve needle position detection in medical retinal surgery by adopting the target detection algorithm based on YOLOv5 as the basic deep neural network model. The state-of-the-art needle detection approaches for medical surgery mainly focus on needle structure segmentation. Instead of the needle segmentation, the proposed method in this paper contains the angle examination during the needle detection process. This approach also adopts a novel classification method based on the different positions of the needle to improve the model. The experiments demonstrate that the proposed network can accurately detect the needle position and measure the needle angle. The performance test of the proposed method achieves 4.80 for the average Euclidean distance between the detected tip position and the actual tip position. It also obtains an average error of 0.85 degrees for the tip angle across all test sets.
Jidong Xu, Jinglun Yu, Jianing Yao, Rendong Zhang
2023-02-10T03:12:09Z
http://arxiv.org/abs/2302.05034v2
# The Neural Networks Based Needle Detection for Medical Retinal Surgery ###### Abstract In recent years, deep learning technology has developed rapidly, and the application of deep neural networks in the medical image processing field has become the focus of the spotlight. This paper aims to achieve needle position detection in medical retinal surgery by adopting the target detection algorithm based on YOLOv5 as the basic deep neural network model. The state-of-the-art needle detection approaches for medical surgery mainly focus on needle structure segmentation. Instead of the needle segmentation, the proposed method in this paper contains the angle examination during the needle detection process. This approach also adopts a novel classification method based on the different positions of the needle to improve the model. The experiments demonstrate that the proposed network can accurately detect the needle position and measure the needle angle. The performance test of the proposed method achieves 4.80 for the average Euclidean distance between the detected tip position and the actual tip position. It also obtains an average error of 0.85 degrees for the tip angle across all test sets. YOLOv5, deep neural networks, classification method, needle detection, retinal surgery ## 1 Introduction Nowadays, needles are common surgical devices in retinal surgery. The insertion angle and position of the needle are critical to the surgical procedure. In actual practice, it is often difficult to obtain a quick and accurate observation of the angle and tip of the tiny needle. Meanwhile, owing to the variable angles and tip directions that needles can adopt, their detection is a relatively challenging task. Therefore, the application of deep neural networks is necessary. The convolutional neural network (CNN) is widely used in image processing due to its powerful feature extraction capability, particularly in object detection [1]. Many classical algorithms achieve fast and accurate object detection [2], such as R-CNN, a classical deep learning method for object detection based on SVMs [3][4][5], Fast R-CNN [6] based on VGG-16 [7], and Faster R-CNN [8] based on the Region Proposal Network (RPN). However, compared with Faster R-CNN, the most advanced of these state-of-the-art models, the proposed model is based on YOLOv5, which introduces a Focus layer in the backbone and achieves higher speed and superb detection accuracy [9]. For needle position detection during surgery, the first need is a robust classification method that allows the needle configuration to be discriminated. The proposed approach builds angle and tip classes for needle detection, showing that an intuitive, task-specific classification combined with a refined deep neural network provides exemplary performance in the detection process [10][11]. The proposed classification method separates all angle and tip conditions simply and intuitively into four classes. This reduces the time consumed by training and simplifies the tedious training process while ensuring detection accuracy. Application to the pig retina background data set revealed that the refined YOLOv5 model and the classification method used in the experiments give superior detection accuracy. ## 2 Approach In this part, we discuss the specific method for determining the needle position in medical retinal surgery. Our model is based on the YOLOv5 object detection model, a state-of-the-art neural network framework.
Furthermore, we propose a novel classification approach achieving accurate detection of the needle tips' positions and needle angles. ### YOLO Framework You Only Look Once (YOLOv5) is a CNN-based neural network [9][12][13] that achieves fast and accurate object detection. The input of the YOLOv5 model is a whole image, and the outputs are the predicted bounding boxes' coordinates, object classes, and the corresponding confidence of each prediction. In the YOLOv5 model, the input image is divided into S*S grid cells. Next, the model applies object localization and classification to each grid cell and finds the grid cells that include the centre point of an object bounding box. After that, applying Intersection over Union [14] and Non-Maximum Suppression [15], the YOLO framework predicts the final output: bounding boxes containing the predicted labels, probabilities, and, most importantly, the coordinates of the bounding boxes. ### Detection of Needle Tips and Angles After making a prediction on the input image, the YOLOv5 model outputs a bounding box, which contains the box vertex coordinates, object class, and corresponding probability. Since we would like to determine the needle tips' positions and needle angles, we place the needle tip on one of the vertices of the bounding box when labelling the object before training. Besides, the midpoint of the needle is placed on the corresponding diagonal vertex. Therefore, if we can determine the coordinates of the vertices of the bounding box and at which vertex the needle tip is located, we can detect the needle tip position. As for the needle angle, we calculate the angle between the needle and the horizontal edge of the bounding box as the needle angle. Therefore, we can calculate the needle angle with this equation: \[\tan\theta=\frac{\left|y_{1}-y_{2}\right|}{\left|x_{1}-x_{2}\right|} \tag{1}\] where \((x_{1},y_{1})\) is the coordinate of the predicted needle tip, and \((x_{2},y_{2})\) is the predicted coordinate of the midpoint of the needle. With these two coordinates, we are able to determine the needle angle. As we know, the needle tip is located at one of the vertices of the predicted bounding box, and there are four possible situations for the needle tip's position: the needle tip may be located at the left top vertex of the bounding box (LT), at the left bottom vertex (LB), at the right top vertex (RT), or at the right bottom vertex (RB), as shown in figure 1. In order to determine at which vertex the needle tip is located and the needle angle, we propose a method that divides the needle images in the train set into four classes according to their needle tip locations. Needles with tips located at the left top vertex of the bounding box are classified as one class, labelled LT. The second class is the needles with tips located at the left bottom vertex of the bounding box, labelled LB. Similarly, we define the other two needle classes, RT (right-top) and RB (right-bottom). In this way, the output of the YOLOv5 model includes the needle classes and the bounding boxes' coordinates. With the classification of needle tips, we are able to determine the needle tip's position and finally calculate the needle angle. Figure 1: Four classes of needles: LT, LB, RT, RB.
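A compact sketch of this decoding step: given a predicted box (x_min, y_min, x_max, y_max) in image coordinates (where the y-axis grows downward, so the "top" of the box is y_min) and the predicted class, pick the tip vertex and evaluate Eq. 1. This is our illustration of the described logic, not the authors' released code:

```python
import math

def decode_needle(box, cls):
    """Return (tip, midpoint, angle_deg) from a YOLO box and class label.

    box = (x_min, y_min, x_max, y_max); the needle midpoint sits at the
    vertex diagonally opposite the tip, as described in the text.
    """
    x_min, y_min, x_max, y_max = box
    corners = {"LT": (x_min, y_min), "RT": (x_max, y_min),
               "LB": (x_min, y_max), "RB": (x_max, y_max)}
    opposite = {"LT": "RB", "RB": "LT", "RT": "LB", "LB": "RT"}
    tip, mid = corners[cls], corners[opposite[cls]]
    # Eq. 1: tan(theta) = |y1 - y2| / |x1 - x2|
    angle = math.degrees(math.atan2(abs(tip[1] - mid[1]),
                                    abs(tip[0] - mid[0])))
    return tip, mid, angle

print(decode_needle((100, 50, 220, 140), "LT"))
# -> tip (100, 50), midpoint (220, 140), angle ~ 36.87 degrees
```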
## 3 Experiment results In the experiment, we collected 120 needle images for the train and test sets, all shot under a microscope. The diameter of the needle is 0.4 millimetres. Since there is a high degree of similarity between pig and human retinal images, we used pig retinas as the background and collected the original needle images at random positions over the background. The size of each original data image is 692x516. The original needle images were divided into two parts, the train set and the test set. The train set contains 96 needle images, and the test set contains 24 images. For the train set, we applied data augmentation to the original 96 images and enlarged the number of training images to 576. The data augmentation included image flipping, rotation, and corruption. In the train set, we labelled each needle image in YOLOv5 format. Each needle image was assigned a class label (LT, LB, RT, or RB) and a bounding box indicating the needle position. For training, we set the batch size to 4, set the number of training epochs to 200, and used pre-trained weights to start the training. ### Training results The training results are shown in figure 2. As the training approaches 200 epochs, the box loss, object loss, and classification loss decrease toward 0. The box loss measures the position error between the predicted bounding boxes and the ground-truth bounding boxes. The object loss measures the difference between the predicted probability and the ground truth of whether a needle object exists in the image. The class loss measures the difference between the predicted class probability and the ground-truth class. The precision and recall are close to 1 at 200 epochs, which indicates the object detector performs well. Furthermore, the mAP@.5:.95 is more than 0.9. These results show that the model training satisfies the experiment requirements. ### Testing results After training the model, we applied it to the test set, which contains 24 needle images with different needle positions and needle angles. The detection results are shown in figure 3. In every test image, there is a detected object, which includes the bounding box, predicted class label, and the corresponding confidence. From the test results, we compared all the predicted class labels with the ground-truth labels, and the results show that all the detected needles have correctly predicted labels. Besides, by observation, the predicted bounding boxes are accurate, since the needle positions and directions are close to the ground truth. Figure 2: Training results. The detected needle tips' positions and the detected needle angles are shown in Table 1. Det Tip is the detected needle tip's coordinate in pixels, and Real Tip is the ground-truth coordinate of the needle tip's position; Det Ang is the detected needle angle, and Real Ang is the ground-truth needle angle; Tip Dist shows the Euclidean distance between the detected needle tip's position and the ground-truth needle tip's position; Ang Err shows the error between the detected needle angle and the ground-truth needle angle. In order to evaluate the average detection performance of the model on all test images, we calculate the average Euclidean distance between all the predicted needle tips and ground-truth needle tips, which is 4.8 pixels. Besides, we also calculate the average angle error over all test images, which is 0.85 degrees.
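These two aggregate figures (mean tip distance in pixels and mean absolute angle error in degrees) follow directly from the per-image entries of Table 1; a minimal sketch, assuming the detected and ground-truth tips and angles are stored as NumPy arrays:

```python
import numpy as np

def aggregate_errors(det_tips, gt_tips, det_angles, gt_angles):
    """Mean Euclidean tip distance (pixels) and mean absolute angle error."""
    tip_dist = np.linalg.norm(det_tips - gt_tips, axis=1)
    ang_err = np.abs(det_angles - gt_angles)
    return tip_dist.mean(), ang_err.mean()

# Toy example with two test images (values are illustrative only).
det = np.array([[101.0, 52.0], [210.0, 143.0]])
gt = np.array([[100.0, 50.0], [214.0, 140.0]])
print(aggregate_errors(det, gt,
                       np.array([36.0, 71.5]),
                       np.array([36.9, 70.9])))
```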
Furthermore, the detection time for each test image is about 0.015 s. These test results indicate that our model achieves accurate and fast needle detection. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Image** & **Det Tip** & **Real Tip** & **Det Ang** & **Real Ang** & **Tip Dist** & **Ang Err** \\ \hline [MISSING_PAGE_POST] \end{tabular} \end{table} Table 1: Needle tips' positions and needle angles. Figure 3: Detection results. ## 4 Conclusions In this paper, we present a new model for needle position detection in retinal microsurgery. The proposed approach, based on the YOLOv5 object detection algorithm, provides a novel classification method for detecting different positions and angles of needle tips. The experimental results demonstrate that this model can quickly and accurately extract the needle tip positions and the corresponding angles with a small error. Furthermore, applying the model to ex-vivo pig retina datasets yields strong performance on the needle detection task. In future development, we will proceed along two directions. First, we will use data from real surgery, or human retinas as background, for more realistic scenarios. Second, to improve the applicability of the proposed method, we will enlarge the dataset with multiple needle conditions and apply needle detection in real-time retinal surgery based on video instead of single images [16]. ## Acknowledgments Here, we would like to express our thanks to all the people who provided us with assistance and encouragement during this work. Firstly, we want to thank every member of our team: with the efforts and persistence of every group member, we overcame difficulties and challenges and completed this paper. Furthermore, we would like to express our gratitude to our families and friends, especially our parents, who provide us with endless love and unwavering support.
2308.03378
Friedrichs' systems discretized with the Discontinuous Galerkin method: domain decomposable model order reduction and Graph Neural Networks approximating vanishing viscosity solutions
Friedrichs' systems (FS) are symmetric positive linear systems of first-order partial differential equations (PDEs), which provide a unified framework for describing various elliptic, parabolic and hyperbolic semi-linear PDEs such as the linearized Euler equations of gas dynamics, the equations of compressible linear elasticity and the Dirac-Klein-Gordon system. FS were studied to approximate PDEs of mixed elliptic and hyperbolic type in the same domain. For this and other reasons, the versatile discontinuous Galerkin method (DGM) provides the best approximation space for FS. We implement a distributed memory solver for stationary FS in deal.II. Our focus is model order reduction. Since FS model hyperbolic PDEs, they often suffer from a slow Kolmogorov n-width decay. We develop two approaches to tackle this problem. The first is domain decomposable reduced-order models (DD-ROMs). We will show that the DGM offers a natural formulation of DD-ROMs, in particular regarding interface penalties, compared to the continuous finite element method. We also develop new repartitioning strategies to obtain more efficient local approximations of the solution manifold. The second approach involves graph neural networks used to infer the limit of a succession of projection-based linear ROMs corresponding to lower viscosity constants: the heuristic behind it is to develop a multi-fidelity super-resolution paradigm that mimics the mathematical convergence to vanishing viscosity solutions while exploiting interpretable and certified projection-based ROMs as much as possible.
Francesco Romor, Davide Torlo, Gianluigi Rozza
2023-08-07T07:57:31Z
http://arxiv.org/abs/2308.03378v1
Friedrichs' systems discretized with the Discontinuous Galerkin method: domain decomposable model order reduction and Graph Neural Networks approximating vanishing viscosity solutions ###### Abstract Friedrichs' systems (FS) are symmetric positive linear systems of first-order partial differential equations (PDEs), which provide a unified framework for describing various elliptic, parabolic and hyperbolic semi-linear PDEs such as the linearized Euler equations of gas dynamics, the equations of compressible linear elasticity and the Dirac-Klein-Gordon system. FS were studied to approximate PDEs of mixed elliptic and hyperbolic type in the same domain. For this and other reasons, the versatile discontinuous Galerkin method (DGM) provides the best approximation space for FS. We implement a distributed memory solver for stationary FS in deal.II. Our focus is model order reduction. Since FS model hyperbolic PDEs, they often suffer from a slow Kolmogorov n-width decay. We develop two approaches to tackle this problem. The first is domain decomposable reduced-order models (DD-ROMs). We will show that the DGM offers a natural formulation of DD-ROMs, in particular regarding interface penalties, compared to the continuous finite element method. We also develop new repartitioning strategies to obtain more efficient local approximations of the solution manifold. The second approach involves graph neural networks used to infer the limit of a succession of projection-based linear ROMs corresponding to lower viscosity constants: the heuristic behind it is to develop a multi-fidelity super-resolution paradigm that mimics the mathematical convergence to vanishing viscosity solutions while exploiting interpretable and certified projection-based ROMs as much as possible. ## 1 Introduction Friedrichs' systems (FS) are a class of symmetric positive linear systems of first-order partial differential equations (PDEs). They were introduced by Friedrichs [34] as a tool to study hyperbolic and elliptic phenomena in different parts of the domain within a unifying framework. The main ideas that allow recasting many models into the FS framework are the introduction of extra variables to lower the order of the higher derivatives and the linearization of nonlinear problems. FS are characterized by linear and positive operators and (non-uniquely defined) boundary operators that allow them to impose classical boundary conditions (BCs). Various works proved uniqueness, existence and well-posedness of the FS in their strong, weak and ultraweak formulations, together with the necessary conditions to properly define the boundary operators [34, 73, 74, 28, 31, 3, 24]. In the last decades, different numerical discretizations of the FS have been proposed to approximate the analytical solutions. The strategies vary among finite volume [79] and discontinuous Galerkin (DG) formulations [45, 52, 28, 29, 31, 30, 12, 17]. Along with the DG discretization, error estimates, optimal or sub-optimal according to the type of edge penalization, have also been derived [28, 29, 24]. We focus on the DG method since it is more versatile in approximating both elliptic and hyperbolic PDEs and it fits naturally in the framework of domain decomposable ROMs (DD-ROMs). In the context of parametric PDEs, for multi-query tasks or real-time simulations, fast and reliable simulations of the same problem for different parameters are often needed.
This is especially true when the full-order models (FOMs) are based on expensive and high-order DG discretizations. Reduced order models (ROMs) decrease the computational costs by seeking the solutions of unseen parametric instances on low-dimensional discretization spaces, called reduced basis spaces. This is possible because the new solutions to be predicted are expected to be highly correlated with the database of training DG solutions used to build the reduced spaces. ROMs have been proven to be a powerful tool for many applications [69, 62, 44, 77]. In particular for linear problems, classical Galerkin and Petrov-Galerkin projection methods are very easy to set up and extremely convenient in terms of computational costs. FS are perfectly suited for such algorithms due to their linearity. This is a preliminary step needed to reduce parametric nonlinear PDEs whose linearization results in FS. In the simplest formulation, we will apply the singular value decomposition (SVD) to compress a database of snapshots and provide a reliable reduced order model (ROM) with standard _a posteriori_ error estimators. In the context of model order reduction, FS are particularly beneficial as a theoretical framework for many reasons. They represent a new form of structure-preserving ROMs: the symmetric positive properties of FS are in fact easily inherited by the reduced numerical formulations. This advocates for the employment of FS for reduced order modelling whenever a PDE can be reformulated in the FS framework. This is the case for the Euler equations of gas dynamics when they are written in terms of entropy variables [79, 66]. The same rationale is behind structure-preserving symplectic or Hamiltonian ROMs [43] and port-Hamiltonian ROMs [84, 10]. Moreover, since FS are often studied in their ultraweak formulation, they are good candidates for optimally stable error estimates [13] at the full-order level [12], also in a hybridized DG implementation in [17], and at the reduced order level, similarly to what has been achieved in the works [11, 39, 42]. Finally, from the point of view of software design, the possibility to implement the realization of ROMs for PDEs ascribed to the class of FS in a unique, maintainable and generic manner is a convenient feature to look for. Despite being linear, FS are hyperbolic systems and often show an advection dominated character, which is not easily approximable through a simple proper orthogonal decomposition (POD). This leads to a slow Kolmogorov \(n\)-width (KnW) decay that results in very inefficient reduced models. Several approaches have been studied to overcome this difficulty [80, 49, 67, 75, 16, 15, 2, 59, 20, 82, 51]. A strategy that has been developed to reduce PDEs solved numerically with domain decomposition approaches, like fluid-structure interaction systems, are domain decomposable ROMs (DD-ROMs). The initial formulations [62, 63, 48, 27] involved continuous finite element discretizations, for which new ways to couple the solutions restricted to different subdomains needed to be devised, especially to enforce continuity at the interfaces. We show that the DGM naturally imposes flux interface penalties inherited from the full-order discretization and is, thus, amenable to straightforward implementations of DD-ROMs. From the point of view of solution manifold approximability, and so of the KnW decay, DD-ROMs are based on local linear approximants that are employed to reach a higher accuracy for unseen solutions.
This is useful when the computational domain is divided into subdomains that are independently affected by the parametric instances. The typical case in which this may happen is that of parametric models for which discontinuous values of the parameters over fixed subdomains cause uncorrelated responses on their respective subdomains. Similar cases will be studied in sections 5.3.1 and 5.3.2. Another example is represented by parametric fluid-structure interaction systems in which the parameters cause complex interdependencies between the structure and fluid components, favoring partitioned linear solution manifold approximations (SVD is performed separately for the fluid, for the structure and for the interface) rather than monolithic ones. In our implementation of DD-ROMs, we exploit the partitions obtained from the distributed memory solver in deal.II. Since these domain decompositions typically satisfy constraints related to computational efficiency, we devise some strategies to repartition the domain in response to solution manifold approximability concerns instead. Another work that implements this is [87], where the Reynolds stress tensor is employed, among others, as an indicator for partitioning the computational domain. Similarly, we develop new indicators. Another way to approach the problem of a slow KnW decay is to exploit the mathematical proofs of existence of vanishing viscosity solutions [65, 57, 25, 38]. In fact, solutions of hyperbolic problems can be obtained as a limit process of solutions associated with viscosity terms approaching zero. The crucial point is that ROMs associated with larger viscosity values may not suffer from a slow Kolmogorov \(n\)-width decay. Hence, we can set up classical projection-based ROMs for the high viscosity solutions, and use graph neural networks (GNNs) [81] only to infer the vanishing viscosity solution in a very efficient manner. This procedure can be applied also to more general hyperbolic problems, not necessarily FS. The key features of this new methodology are the following: the employment of computationally heavy graph neural networks is reduced to a minimum and, at the same time, interpretable certified projection ROMs are exploited as much as possible in their regime of accurate linear approximability. In fact, GNNs, generally used to perform non-intrusive MOR, have high training computational costs and, up to now, they have been employed mainly for small academic benchmarks in terms of number of degrees of freedom. We avoid these high computational efforts with our multi-fidelity formulation: the GNNs are employed only to infer the vanishing viscosity solutions from the previous higher viscosity level, not to approximate and perform dimension reduction of the entire solution manifold. The overhead is the collection of additional full-order snapshots corresponding to high viscosity values, but this can be performed on coarser meshes, as will be done in section 6. Moreover, the support of our GNNs is the DG discretization space, so we can enrich the typical machine learning framework of GNNs with data structures and operators from numerical analysis. We validate the use of data augmentation with numerical filters (discretized Laplacian, gradients), as proposed in [81]. In brief, we summarize our contributions with the present work: * structure-preserving model order reduction for Friedrichs' systems. We synthetically describe the realization of ROMs for FS and the definition of standard _a posteriori_ error estimators.
Hints towards the implementation of optimally stable ROMs are highlighted. * domain decomposable reduced-order models for full-order models discretized with the discontinuous Galerkin method. We introduce DD-ROMs for DG discretizations and propose novel indicators to repartition the computational domain with the aim of obtaining more efficient local solution manifold approximants. * surrogate modelling of vanishing viscosity solutions with graph neural networks. We propose a new framework for the MOR of parametric hyperbolic PDEs with a slow Kolmogorov n-width decay. The topics addressed in this work are presented as follows. In Section 2, we introduce the definition of FS and well-posedness results, and we provide several examples of models that fall into this framework: the Maxwell equations in stationary regime, the equations of linear compressible elasticity and the advection-diffusion-reaction equations. Then, we provide a DG discretization of the FS following [24] with related error estimates in Section 3. In Section 4, we introduce the projection-based MOR technique and some error bounds that can be effectively used. In Section 5, we discuss a new implementation of domain decomposable ROMs for FOMs discretized with the DGM and we test the approach on three parametric models. In Section 6, we introduce the concept of vanishing viscosity solutions and show how graph neural networks are exploited to overcome the problem of a slow Kolmogorov \(n\)-width decay. We provide some numerical tests to show the effectiveness of the proposed approach. Finally, in Section 7 we summarize our results and we suggest further directions of research. ## 2 Friedrichs' systems In this section, we provide a summary of FS theory: their definition, existence, uniqueness and well-posedness results, their weak and ultraweak forms and many PDEs which can be rewritten as FS. The following discussion collects many results from [34, 73, 47, 74, 46, 52, 79, 29, 31, 30, 3, 12, 24], but we will follow the notation in [24]. Let us represent with \(d\) the ambient space dimension and with \(m\geq 1\) the number of equations of the FS. We consider a connected Lipschitz domain \(\Omega\subset\mathbb{R}^{d}\), with boundary \(\partial\Omega\) and outward unit normal \(\mathbf{n}:\partial\Omega\to\mathbb{R}^{d}\). An FS is defined through \((d+1)\) matrix-valued fields \(A^{0},A^{1}\ldots,A^{d}\in[L^{\infty}(\Omega)]^{m\times m}\) and the following differential operators \(\mathcal{X},A,\tilde{A}\). We suppose that \(\mathcal{X}\in[L^{\infty}(\Omega)]^{m\times m}\) and define \[\mathcal{X}=\sum_{k=1}^{d}\partial_{k}A^{k}\,\qquad A=A^{0}+\sum_{i=1}^{d}A^{i}\partial_{i}\,\qquad\tilde{A}=\left(A^{0}\right)^{t}-\mathcal{X}-\sum_{i=1}^{d}A^{i}\partial_{i}\, \tag{1}\] assuming that \[A^{k}=(A^{k})^{T}\,\text{a.e. in }\Omega, \text{for }k=1,\ldots,d, \text{(symmetry property)} \tag{2a}\] \[A^{0}+(A^{0})^{T}-\mathcal{X}\text{ is u.p.d. a.e. in }\Omega, \text{(positivity property)} \tag{2b}\] thus the name **symmetric positive operators** or **Friedrichs operators**, which is used to refer to (\(A\), \(\tilde{A}\)). We recall that the operator in (2b) is uniformly positive definite (u.p.d.) if and only if \[\exists\mu_{0}>0:A^{0}+(A^{0})^{T}-\mathcal{X}>2\mu_{0}\mathbb{I}\quad\text{a.e. in }\Omega. \tag{3}\] If this property is not satisfied, it can sometimes be recovered as shown in Appendix A. A weaker condition can be required for two-field systems [30].
The boundary conditions are expressed through two boundary operators: \(\mathcal{D}:\partial\Omega\to\mathbb{R}^{m\times m}\) with \[\mathcal{D}=\sum_{k=1}^{d}n_{k}A^{k},\qquad\text{a.e. in }\partial\Omega \tag{4}\] and \(\mathcal{M}:\partial\Omega\to\mathbb{R}^{m\times m}\) satisfying the following **admissible boundary conditions** \[\mathcal{M}\quad\text{is nonnegative \ a.e. on }\quad\partial\Omega, \text{(monotonicity property)} \tag{5a}\] \[\ker(\mathcal{D}-\mathcal{M})+\ker(\mathcal{D}+\mathcal{M})=\mathbb{R}^{m}\quad\text{a.e. on }\quad\partial\Omega. \text{(strict adjointness property)} \tag{5b}\] _Remark 1_ (Strict adjointness).: The term strict adjointness property comes from Jensen [52, Theorem 31]. The strict adjointness property is needed for the solution of the ultraweak formulation of the FS to uniquely satisfy the boundary conditions: in a slightly different framework from the one presented here, see [52, Theorem 29] and [12, proof of Lemma 2.4]. **Theorem 1** (Friedrichs' system strong solution [34]).: _Let \(f\in[L^{2}(\Omega)]^{m}\); the strong solution \(z\in[C^{1}(\overline{\Omega})]^{m}\) to the Friedrichs' system_ \[\begin{cases}Az=f,&\text{in }\Omega,\\ (\mathcal{D}-\mathcal{M})z=0,&\text{on }\partial\Omega,\end{cases} \tag{6}\] _is unique. Moreover, there exists a solution of the ultraweak formulation_ \[(z,\tilde{A}y)_{L^{2}}=(f,y)_{L^{2}},\qquad\forall y\in[C^{1}(\overline{\Omega})]^{m}\ s.t.\ (\mathcal{D}+\mathcal{M}^{t})y=0. \tag{7}\] Let \(L=[L^{2}(\Omega)]^{m}\). We define the weak formulation on the graph space \(V=\{z\in L:Az\in L\}\), which amounts to differentiability in the characteristic directions: \(A\in\mathcal{L}(V,L)\) and \(\tilde{A}\in\mathcal{L}(V,L)\). The boundary operator \(\mathcal{D}\) is translated into the abstract operator \(D\in\mathcal{L}(V,V^{\prime})\): \[\left\langle Dz,y\right\rangle_{V^{\prime},V}=(Az,y)_{L}-(z,\tilde{A}y)_{L},\quad\forall z,y\in V. \tag{8}\] When \(z\) is smooth, it can be seen as the integration by parts formula [52, 12]: \[\left\langle Dz,y\right\rangle_{V^{\prime},V}=\left\langle\mathcal{D}z,y\right\rangle_{H^{\frac{1}{2}}(\partial\Omega),H^{-\frac{1}{2}}(\partial\Omega)},\quad\forall z\in H^{1}(\Omega),\ y\in H^{1}(\Omega). \tag{9}\] A sufficient condition for well-posedness of the weak formulation is provided by the cone formalism [3, 24], which posits the existence of two linear subspaces \((V_{0},V_{0}^{*})\) of \(V\): \[V_{0}\text{ maximal in }C^{+},\quad V_{0}^{*}\text{ maximal in }C^{-} \tag{10a}\] \[V_{0}=D(V_{0}^{*})^{\perp},\quad V_{0}^{*}=D(V_{0})^{\perp}, \tag{10b}\] such that \(A:V_{0}\to L\) and \(\tilde{A}:V_{0}^{*}\to L\) are isomorphisms, where \(C^{\pm}=\{w\in V|\pm\left\langle Dw,w\right\rangle_{V^{\prime},V}\geq 0\}\). Provided that \(V_{0}+V_{0}^{*}\subset V\) is closed [3], the conditions in (10) are equivalent to the existence of a boundary operator \(M\in\mathcal{L}(V,V^{\prime})\) that satisfies **admissible boundary conditions** analogous to the ones in (5): \[M\quad\text{is monotone}, \text{(monotonicity property)} \tag{11a}\] \[\ker(D-M)+\ker(D+M)=V, \text{(strict adjointness property)} \tag{11b}\] identifying \(V_{0}=\ker(D-M)\) and \(V_{0}^{*}=\ker(D+M^{*})\). **Theorem 2** (Friedrichs' system weak form [28, 29, 24, 12]).: _Let us assume that the boundary operator \(M\in\mathcal{L}(V,V^{\prime})\) satisfies the monotonicity and strict adjointness properties (11).
Let us define for \(z,z^{*}\in V\) the bilinear forms_ \[a(z,y) =(Az,y)_{L}+\tfrac{1}{2}\langle(D-M)z,y\rangle_{V^{\prime},V},\quad\forall y\in V, \tag{12a}\] \[a^{*}(z^{*},y) =(\tilde{A}z^{*},y)_{L}+\tfrac{1}{2}\langle(D+M^{*})z^{*},y\rangle_{V^{\prime},V},\quad\forall y\in V. \tag{12b}\] _Then, Friedrichs' operators \(A:V_{0}\to L\) and \(\tilde{A}:V_{0}^{*}\to L\) are isomorphisms: for all \(f\in L\) and \(g\in V\) there exist unique \(z,z^{*}\in V\) s.t._ \[a(z,y) =(f,y)_{L}+\langle(D-M)g,y\rangle_{V^{\prime},V}\quad\forall y\in V, \tag{13a}\] \[a^{*}(z^{*},y) =(f,y)_{L}+\langle(D+M^{*})g,y\rangle_{V^{\prime},V}\quad\forall y\in V, \tag{13b}\] _that is_ \[\begin{cases}Az=f,&\text{in }L,\\ (M-D)(z-g)=0&\text{in }V^{\prime},\end{cases}\quad\begin{cases}\tilde{A}z^{*}=f,&\text{in }L,\\ (M^{*}+D)(z^{*}-g)=0&\text{in }V^{\prime}.\end{cases} \tag{14}\] ### A unifying framework The theory of Friedrichs' systems provides a unified framework to study different classes of PDEs [52]: first-order uniformly hyperbolic, second-order uniformly hyperbolic, elliptic and parabolic partial differential equations. Originally, Friedrichs' aim was to study equations of mixed type (hyperbolic, parabolic, elliptic) inside the same domain, such as the Tricomi equation [34] (or, more generally, the Frankl equation [52]), inspired by models from compressible gas dynamics for which the domain is subdivided into a hyperbolic supersonic and an elliptic subsonic part. Some examples of FS can be found in the literature: \[(x_{2}\partial_{1}^{2}+\partial_{2}^{2})u=0, \text{(Tricomi)} \tag{15a}\] \[\begin{bmatrix}-\partial_{1}\bullet&\partial_{2}\bullet\\ -\partial_{2}\bullet&\partial_{1}\bullet\end{bmatrix}\begin{pmatrix}u_{1}\\ u_{2}\end{pmatrix}=0, \text{(Cauchy-Riemann)} \tag{15b}\] \[(A(x_{2})\partial_{1}^{2}+\partial_{2}^{2})u=0, \text{(Frankl)} \tag{15c}\] \[\begin{bmatrix}\mathbb{I}_{3}&-\lambda^{-1}\mathbb{I}_{3}(\nabla\cdot\bullet)-\frac{(\nabla\bullet+(\nabla\bullet)^{t})}{2}\\ -\frac{1}{2}\nabla\cdot(\bullet+\bullet^{t})&\alpha\mathbb{I}_{3}\end{bmatrix}\begin{pmatrix}\boldsymbol{\sigma}\\ \mathbf{u}\end{pmatrix}=0, \text{(Compressible linear elasticity)} \tag{15d}\] \[\begin{bmatrix}\mu\mathbb{I}_{3}&\nabla\times\bullet\\ -\nabla\times\bullet&\sigma\mathbb{I}_{3}\end{bmatrix}\begin{pmatrix}\mathbf{H}\\ \mathbf{E}\end{pmatrix}=0, \text{(Maxwell equations in stationary regime)} \tag{15e}\] \[(-\nabla\cdot(\kappa\nabla\bullet)+\boldsymbol{\beta}\cdot\nabla\bullet+\mu\bullet)u=0, \text{(Diffusion-advection-reaction)} \tag{15f}\] \[\Big{(}A_{0}\partial_{t}\bullet+\sum_{i=1}^{3}\tilde{A}_{i}\partial_{i}\bullet\Big{)}\mathbf{V}=0, \text{(Linearized symmetric Euler)} \tag{15g}\] \[(a\gamma^{0}\partial_{t}\bullet+\gamma^{1}\partial_{1}\bullet+\gamma^{2}\partial_{2}\bullet+\gamma^{3}\partial_{3}\bullet+B)\boldsymbol{\psi}=0. \text{(Dirac system)} \tag{15h}\] #### 2.1.1 Maxwell equations in stationary regime The Maxwell equations in stationary regime (15e) fit the FS framework with \(m=6\), \(z=(\mathbf{H},\mathbf{E})\) and \[A^{0}=\begin{bmatrix}\mu\mathbb{I}_{3}&0_{3,3}\\ 0_{3,3}&\sigma\mathbb{I}_{3}\end{bmatrix},\qquad A^{k}=\begin{bmatrix}0_{3,3}&\mathcal{R}^{k}\\ (\mathcal{R}^{k})^{T}&0_{3,3}\end{bmatrix},\] with \(\mathcal{R}^{k}_{ij}=\epsilon_{ikj}\) being the Levi-Civita tensor. The graph space is \(V=H(\mathrm{curl},\Omega)\times H(\mathrm{curl},\Omega)\). The boundary operator is \[\mathcal{D}=\sum_{k=1}^{d}n_{k}A^{k} =\begin{bmatrix}0_{d,d}&\mathcal{T}\\ \mathcal{T}^{T}&0_{d,d}\end{bmatrix},\qquad\text{with }\mathcal{T}\boldsymbol{\xi}:=\mathbf{n}\times\boldsymbol{\xi}, \tag{16}\] \[\left\langle D(\mathbf{H},\mathbf{E}),(\mathbf{h},\mathbf{e})\right\rangle_{V^{\prime},V}=(\mathbf{n}\times\mathbf{E},\mathbf{e})_{L^{2}(\partial\Omega)}-(\mathbf{n}\times\mathbf{H},\mathbf{h})_{L^{2}(\partial\Omega)}. \tag{17}\] We impose homogeneous Dirichlet boundary conditions on the tangential component of the electric field, \((\mathbf{n}\times\mathbf{E})_{|\partial\Omega}=0\), through \[\mathcal{M}=\begin{bmatrix}0_{d,d}&-\mathcal{T}\\ \mathcal{T}^{T}&0_{d,d}\end{bmatrix},\qquad\left\langle M(\mathbf{H},\mathbf{E}),(\mathbf{h},\mathbf{e})\right\rangle_{V^{\prime},V}=-(\mathbf{n}\times\mathbf{E},\mathbf{e})_{L^{2}(\partial\Omega)}-(\mathbf{n}\times\mathbf{H},\mathbf{h})_{L^{2}(\partial\Omega)}. \tag{18}\] #### 2.1.2 Compressible linear elasticity We consider the parametric compressible linear elasticity system in \(\mathbb{R}^{d}=\mathbb{R}^{3}\), where \(\boldsymbol{\sigma}\in\mathbb{R}^{d\times d}\) is the stress tensor and \(\mathbf{u}\in\mathbb{R}^{d}\) is the displacement vector. The system can be written as \[\left(\begin{array}{c}\boldsymbol{\sigma}-\mu_{1}(\nabla\cdot\mathbf{u})\mathbb{I}_{3,3}-2\mu_{2}\frac{(\nabla\mathbf{u}+(\nabla\mathbf{u})^{t})}{2}\\ -\frac{1}{2}\nabla\cdot(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{t})+\mu_{3}\mathbf{u}\end{array}\right)=\left(\begin{array}{c}0\\ \mathbf{r}\end{array}\right),\quad\forall x\in\Omega, \tag{19}\] where \(\mathbf{r}\in\mathbb{R}^{3}\) and \(\mu_{1},\ \mu_{2}>0\) are the Lamé constants. Rescaling the displacement \(\mathbf{u}\) by \(2\mu_{2}\), we obtain \[\left(\begin{array}{c}\boldsymbol{\sigma}-\frac{\mu_{1}}{2\mu_{2}+3\mu_{1}}\mathrm{tr}(\boldsymbol{\sigma})\mathbb{I}_{3,3}-\frac{(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T})}{2}\\ -\frac{1}{2}\nabla\cdot\left(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{T}\right)+\frac{\mu_{3}}{2\mu_{2}}\mathbf{u}\end{array}\right)=\left(\begin{array}{c}\mathbf{0}\\ \mathbf{r}\end{array}\right),\quad\forall x\in\Omega. \tag{20}\] In this case, we consider the graph space \[V=H_{\boldsymbol{\sigma}}\times[H^{1}(\Omega)]^{d},\quad H_{\boldsymbol{\sigma}}=\{\boldsymbol{\sigma}\in[L^{2}(\Omega)]^{d\times d}\,|\,\nabla\cdot(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{t})\in[L^{2}(\Omega)]^{d}\}. \tag{21}\]
If we reorder the coefficients of \(\boldsymbol{\sigma}\) into a vector, we can define \(\mathbf{z}=\begin{pmatrix}\boldsymbol{\sigma}\\ \mathbf{u}\end{pmatrix}\) and have \[A^{0}=\begin{bmatrix}\mathbb{I}_{d^{2},d^{2}}-\frac{\mu_{1}}{2\mu_{2}+3\mu_{1}}\mathcal{Z}&0_{d^{2},d}\\ 0_{d,d^{2}}&\frac{\mu_{3}}{2\mu_{2}}\mathbb{I}_{d,d}\end{bmatrix},\qquad A^{k}=\begin{bmatrix}0_{d^{2},d^{2}}&\mathcal{E}^{k}\\ (\mathcal{E}^{k})^{T}&0_{d,d}\end{bmatrix},\qquad f=\begin{bmatrix}\mathbf{0}_{d^{2}}\\ \mathbf{r}\end{bmatrix}, \tag{22}\] with \(\mathcal{Z}_{[ij],[kl]}=\delta_{ij}\delta_{kl}\) and \(\mathcal{E}^{k}_{[ij],l}=-\frac{1}{2}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\). This leads to the definition of the boundary operator \[\mathcal{D}=\sum_{k=1}^{d}n_{k}A^{k}=\begin{bmatrix}0_{d^{2},d^{2}}&\mathcal{N}\\ \mathcal{N}^{T}&0_{d,d}\end{bmatrix}\text{ with }\mathcal{N}\boldsymbol{\xi}:=-\frac{1}{2}(\mathbf{n}\otimes\boldsymbol{\xi}+\boldsymbol{\xi}\otimes\mathbf{n}), \tag{23a}\] \[\left\langle D(\boldsymbol{\sigma},\mathbf{u}),(\boldsymbol{\tau},\mathbf{v})\right\rangle_{V^{\prime},V}=-\langle\tfrac{1}{2}(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{t})\cdot\mathbf{n},\mathbf{v}\rangle_{-\frac{1}{2},\frac{1}{2}}-\langle\tfrac{1}{2}(\boldsymbol{\tau}+\boldsymbol{\tau}^{t})\cdot\mathbf{n},\mathbf{u}\rangle_{-\frac{1}{2},\frac{1}{2}}. \tag{23b}\] Mixed boundary conditions \(\mathbf{u}_{|\Gamma_{D}}=0\) and \((\boldsymbol{\sigma}\cdot\mathbf{n})_{|\Gamma_{N}}=0\) can be applied through the following boundary operator on the Dirichlet boundary \(\Gamma_{D}\) and on the Neumann boundary \(\Gamma_{N}\): \[\begin{split}\left\langle M(\boldsymbol{\sigma},\mathbf{u}),(\boldsymbol{\tau},\mathbf{v})\right\rangle_{V^{\prime},V}=&-\langle\tfrac{1}{2}(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{t})\cdot\mathbf{n},\mathbf{v}\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{D}}+\langle\tfrac{1}{2}(\boldsymbol{\tau}+\boldsymbol{\tau}^{t})\cdot\mathbf{n},\mathbf{u}\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{D}}\\ &+\langle\tfrac{1}{2}(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{t})\cdot\mathbf{n},\mathbf{v}\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{N}}-\langle\tfrac{1}{2}(\boldsymbol{\tau}+\boldsymbol{\tau}^{t})\cdot\mathbf{n},\mathbf{u}\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{N}}.\end{split} \tag{24}\] The constructive procedure employed to define the boundary operator \(M\in\mathcal{L}(V,V^{\prime})\) is reported in Appendix B. #### 2.1.3 Grad-div problem: advection-diffusion-reaction equations Another example is the advection-diffusion-reaction equation \[-\nabla\cdot(\kappa\nabla u)+\boldsymbol{\beta}\cdot\nabla u+\mu u=\mathbf{r}, \tag{25}\] with \(\kappa\in[L^{\infty}(\Omega)]^{d\times d}\) and \(\boldsymbol{\beta}\in[W^{1,\infty}(\Omega)]^{d}\), under the hypothesis that \(\kappa\) and \(\mu-\nabla\cdot\boldsymbol{\beta}\in L^{\infty}(\Omega)\) are uniformly bounded from below, so that the positivity property (2b) is satisfied. Let us write the equation in mixed form with \(\boldsymbol{\sigma}=-\kappa\nabla u\) and \(\mathbf{z}=\begin{pmatrix}\boldsymbol{\sigma}\\ u\end{pmatrix}\). Then, (25) can be rewritten as (6) with \[A^{0}=\begin{bmatrix}\kappa^{-1}&0_{d,1}\\ 0_{1,d}&\mu\end{bmatrix},\qquad A^{k}=\begin{bmatrix}0_{d,d}&\mathbf{e}_{k}\\ (\mathbf{e}_{k})^{T}&\beta_{k}\end{bmatrix},\qquad\mathbf{f}=\begin{pmatrix}0\\ \mathbf{r}\end{pmatrix}. \tag{26}\] Here, \(0_{m,\ell}\in\mathbb{R}^{m\times\ell}\) is a matrix of zeros and \(\mathbf{e}_{k}\) is the unit vector with the \(k\)-th entry equal to \(1\); a small sketch of these matrices is given below.
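As a quick illustration, the following numpy sketch (an illustrative toy with placeholder coefficient values, not part of the solver) assembles the matrices of (26) at a single point for \(d=2\), checks the symmetry property (2a) and evaluates the boundary operator of (27) below for a given normal.

```python
# Sketch: Friedrichs matrices (26) of the mixed advection-diffusion-reaction
# form at a single point, d = 2; coefficient values are placeholders.
import numpy as np

d = 2
kappa = np.eye(d)            # diffusion tensor (here isotropic, unit)
mu = 1.0                     # reaction coefficient
beta = np.array([1.0, 0.5])  # advection field evaluated at the point

# A^0 = [[kappa^{-1}, 0], [0, mu]]
A0 = np.zeros((d + 1, d + 1))
A0[:d, :d] = np.linalg.inv(kappa)
A0[d, d] = mu

# A^k = [[0, e_k], [e_k^T, beta_k]], k = 1, ..., d
Ak = []
for k in range(d):
    A = np.zeros((d + 1, d + 1))
    A[k, d] = A[d, k] = 1.0
    A[d, d] = beta[k]
    Ak.append(A)

assert all(np.allclose(A, A.T) for A in Ak)  # symmetry property (2a)

# Boundary operator D = sum_k n_k A^k for a unit normal n, cf. (27) below.
n = np.array([1.0, 0.0])
D = sum(nk * A for nk, A in zip(n, Ak))
print(D)  # [[0, 0, 1], [0, 0, 0], [1, 0, beta.n]]
```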
The graph space is \(V=H(\mathrm{div},\Omega)\times H^{1}(\Omega)\). The boundary operator \(D\) becomes \[D=\sum_{k=1}^{d}n_{k}A^{k}=\begin{bmatrix}0_{d,d}&\mathbf{n}\\ \mathbf{n}^{t}&\boldsymbol{\beta}\cdot\mathbf{n}\end{bmatrix},\qquad\langle D(\boldsymbol{\sigma},u),(\boldsymbol{\tau},v)\rangle_{V^{\prime},V}=\langle\boldsymbol{\sigma}\cdot\mathbf{n},v\rangle_{-\frac{1}{2},\frac{1}{2}}-\langle\boldsymbol{\tau}\cdot\mathbf{n},u\rangle_{-\frac{1}{2},\frac{1}{2}}. \tag{27}\] Homogeneous Dirichlet boundary conditions \(u_{|\partial\Omega}=0\) can be imposed with \[\mathcal{M}=\begin{bmatrix}0_{d,d}&-\mathbf{n}\\ \mathbf{n}^{t}&0\end{bmatrix}, \tag{28}\] while Robin/Neumann boundary conditions of the type \(\boldsymbol{\sigma}\cdot\mathbf{n}=\gamma u\) are imposed with \[\mathcal{M}=\begin{bmatrix}0_{d,d}&\mathbf{n}\\ -\mathbf{n}^{t}&2\gamma+\boldsymbol{\beta}\cdot\mathbf{n}\end{bmatrix}. \tag{29}\] For our test case in Section 6, we will consider as advection field \(\boldsymbol{\beta}:\Omega\to\mathbb{R}^{d}\) an incompressible velocity field obtained from the solution of the 2d incompressible Navier-Stokes equations, as described later. Similarly to the mixed boundary conditions of the linear compressible elasticity problem in Section 2.1.2, we want to impose \(u_{|\Gamma_{D}}=g\in H^{\frac{1}{2}}(\Gamma_{D})\) and \(\left(\boldsymbol{\sigma}\cdot\mathbf{n}\right)_{|\Gamma_{N}}=0\). This is possible with \[\langle M(\boldsymbol{\sigma},u),(\boldsymbol{\tau},v)\rangle_{V^{\prime},V}=\langle\boldsymbol{\sigma}\cdot\mathbf{n},v\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{D}}+\langle\boldsymbol{\tau}\cdot\mathbf{n},u\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{D}}-\langle\boldsymbol{\sigma}\cdot\mathbf{n},v\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{N}}-\langle\boldsymbol{\tau}\cdot\mathbf{n},u\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{N}}. \tag{30}\] The proof is similar to the one reported in Appendix B. ## 3 Discontinuous Galerkin discretization In the literature, a few discretization approaches for FS are presented, e.g. the finite volume method [79] or the discontinuous Galerkin (DG) method [28, 24, 12]. More recently, an hp-adaptive hybridizable DG formulation was introduced in [17]. In this work, we perform a DG discretization following the notation reported in [28, 24]. Consider a shape-regular tessellation \(\mathcal{T}_{h}\) of the domain \(\Omega\) and take a piecewise polynomial space \(V_{h}\) over \(\mathcal{T}_{h}\), defined by \(V_{h}=\{z\in L:z|_{T}\in[\mathbb{P}_{k}(T)]^{m},\forall T\in\mathcal{T}_{h}\}\), where \(k\) is the polynomial degree. We assume that there is a partition \(P_{\Omega}=\{\Omega_{i}\}_{1\leq i\leq N_{\Omega}}\) of \(\Omega\) into disjoint polyhedra such that the exact solution \(z\) belongs to \(V^{*}=V\cap[H^{1}(P_{\Omega})]^{m}\).
We define the discrete bilinear form, \(\forall y_{h}\in V_{h},\ z\in V^{*}\), \[a_{h}^{cf}(z,y_{h}) =\sum_{T\in\mathcal{T}_{h}}(Az,y_{h})_{L^{2}(T)}+\tfrac{1}{2}\sum_{F\in\mathcal{F}_{h}^{b}}\left((\mathcal{M}-\mathcal{D})z,y_{h}\right)_{L^{2}(F)}-\sum_{F\in\mathcal{F}_{h}^{i}}\left(\mathcal{D}_{F}[\![z]\!],\{\!\{y_{h}\}\!\}\right)_{L^{2}(F)} \tag{31a}\] \[=\sum_{T\in\mathcal{T}_{h}}(z,\tilde{A}y_{h})_{L^{2}(T)}+\tfrac{1}{2}\sum_{F\in\mathcal{F}_{h}^{b}}\left((\mathcal{M}+\mathcal{D})z,y_{h}\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{i}}\left(\mathcal{D}_{F}\{\!\{z\}\!\},[\![y_{h}]\!]\right)_{L^{2}(F)}, \tag{31b}\] where the first two terms are the piecewise discontinuous discretization of the bilinear form (12a) and the last term penalizes the jump across neighboring cells and stabilizes the method. Here, \(\mathcal{F}_{h}^{b}\) is the collection of the faces of the triangulation \(\mathcal{T}_{h}\) belonging to the boundary of \(\Omega_{h}\), while \(\mathcal{F}_{h}^{i}\) is the collection of internal faces. The jump and the average of a function on a face \(F\) shared by two elements \(T_{1}\) and \(T_{2}\) are defined as \([\![u]\!]=u|_{T_{1}}-u|_{T_{2}}\) and \(\{\!\{u\}\!\}=\tfrac{1}{2}(u|_{T_{1}}+u|_{T_{2}})\), respectively. The boundary operator \(\mathcal{D}:\partial\Omega\to\mathbb{R}^{m\times m}\) can be extended also to the internal faces \(F\in\mathcal{F}_{h}^{i}\) as \(\mathcal{D}_{F}=\sum_{k=1}^{d}n_{k}^{F}A^{k}\), where \(n^{F}\) is a fixed normal to the face \(F\), so that \(\mathcal{D}_{F}\) is well-defined. In order to obtain quasi-optimal error estimates, extra stabilization terms are needed. We additionally impose that \(A^{i}\in[C^{0,\frac{1}{2}}(\overline{\Omega}_{j})]^{m\times m},\ i=1,\ldots,d,\ \forall\Omega_{j}\in P_{\Omega}\). A possibility is given by the following stabilization term \[s_{h}(z,y_{h})=\sum_{F\in\mathcal{F}_{h}^{b}}(S_{F}^{b}z,y_{h})_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{i}}(S_{F}^{i}[\![z]\!],[\![y_{h}]\!])_{L^{2}(F)}, \tag{32}\] where the operators \(S_{F}^{i}\) and \(S_{F}^{b}\) have to satisfy the following constraints for some \(\alpha_{j}>0\), \(j=1,\ldots,5\): \[S_{F}^{b}z=0\quad\forall F\in\mathcal{F}_{h}^{b},\qquad S_{F}^{i}[\![z]\!]=0\quad\forall F\in\mathcal{F}_{h}^{i},\qquad\text{with }z\text{ the exact solution,} \tag{33a}\] \[S_{F}^{b}\text{ and }S_{F}^{i}\text{ are symmetric and nonnegative,} \tag{33b}\] \[S_{F}^{b}\leq\alpha_{1}\mathbb{I}_{m,m},\qquad\alpha_{2}|\mathcal{D}_{F}|\leq S_{F}^{i}\leq\alpha_{3}\mathbb{I}_{m,m}, \tag{33c}\] \[|((\mathcal{M}-\mathcal{D})y,z)_{L^{2}(F)}|\leq\alpha_{4}((S_{F}^{b}+\mathcal{M})y,y)_{L^{2}(F)}^{1/2}\|z\|_{L^{2}(F)}, \tag{33d}\] \[|((\mathcal{M}+\mathcal{D})y,z)_{L^{2}(F)}|\leq\alpha_{5}((S_{F}^{b}+\mathcal{M})z,z)_{L^{2}(F)}^{1/2}\|y\|_{L^{2}(F)}. \tag{33e}\] Specific definitions of these operators for our test cases are presented in [28, 24], properly adapted to our mixed boundary conditions in the compressible linear elasticity and advection-diffusion-reaction test cases, see Sections 2.1.2 and 2.1.3, respectively. Finally, we can define the bilinear form and the right-hand side \[a_{h}(z,y_{h})=a_{h}^{cf}(z,y_{h})+s_{h}(z,y_{h}),\quad l_{h}(y_{h})=\sum_{T\in\mathcal{T}_{h}}(f,y_{h})_{L^{2}(T)}+\tfrac{1}{2}\sum_{F\in\mathcal{F}_{h}^{b}}\left((\mathcal{M}-\mathcal{D})g,y_{h}\right)_{L^{2}(F)} \tag{34}\] that lead to the definition of the discrete problem. **Definition 1** (DG Friedrichs' system).: _Given \(f\in L\) and \(g\in V_{h}\), the DG approximation of the FS consists in finding \(z_{h}\in V_{h}\) such that_ \[a_{h}(z_{h},y_{h})=l_{h}(y_{h}),\qquad\forall y_{h}\in V_{h}.
\tag{35}\] To prove the accuracy of the discrete problem, the following conditions are necessary: * Consistency, i.e., \(a_{h}(z,y_{h})=a(z,y_{h})\) for \(z\in V^{*}\); * \(L^{2}\)-coercivity, i.e., \(a_{h}(y_{h},y_{h})\geq\mu_{0}\|y_{h}\|_{L}^{2}+\tfrac{1}{2}|y_{h}|_{M}^{2}\), with \(|y_{h}|_{M}^{2}=\int_{\partial\Omega}y_{h}^{t}\mathcal{M}y_{h}\); * Inf-sup stability \[|||z_{h}|||\lesssim\sup_{y_{h}\neq 0}\frac{a_{h}(z_{h},y_{h})}{|||y_{h}|||}\] (36) with \(|||y|||^{2}=\|y\|_{L}^{2}+|y|_{M}^{2}+|y|_{S}^{2}+\sum_{T\in\mathcal{T}_{h}}h_{T}\big{\|}\sum_{k=1}^{d}A^{k}\partial_{k}y\big{\|}_{L^{2}(T)}^{2}\) and \(|y|_{S}^{2}=s_{h}(y,y)\); * Boundedness \(a_{h}(w,y_{h})\lesssim|||w|||_{*}|||y_{h}|||\) with \[|||y|||_{*}^{2}=|||y|||^{2}+\sum_{T\in\mathcal{T}_{h}}\left(h_{T}^{-1}\|y\|_{L^{2}(T)}^{2}+\|y\|_{L^{2}(\partial T)}^{2}\right).\] (37) **Theorem 3** (Error estimate from [28, 24]).: _Let \(z\in V^{*}\) be the solution of the weak problem (13a) and \(z_{h}\in V_{h}\) be the solution of the discrete DG problem (35). Then, the consistency and inf-sup stability of the discrete system (35) imply_ \[|||z-z_{h}|||\lesssim\inf_{y_{h}\in V_{h}}|||z-y_{h}|||_{*}, \tag{38}\] _in particular, if \(z\in[H^{k+1}(\Omega)]^{m}\), the following convergence rate holds_ \[|||z-z_{h}|||\lesssim h^{k+\tfrac{1}{2}}\|z\|_{[H^{k+1}(\Omega)]^{m}}. \tag{39}\] ## 4 Projection-based model order reduction The computation of discrete solutions of parametrized PDEs can require a non-negligible computational time. In particular, in a multi-query context, when many evaluations for different parameters are required, the computations may become unbearable. In this section, we introduce a reduced order model (ROM) for the FS in the case of parameter dependent problems [44, 77], in order to drastically reduce the computational costs. To do so, we exploit two aspects of the FS presented above: the linearity of the problems and the affine dependence of the operators on the physical parameters. As we have seen in Section 2.1, all the problems depend on some parameters \(\boldsymbol{\rho}\in\mathcal{P}\subset\mathbb{R}^{N_{\text{par}}}\) and the dependence is affine. This means that it is possible to find, for each form, \(N_{\text{aff}}\) terms independent of the parameters that can be affinely combined with some parameter dependent functions to recover the original operator, i.e., \[a_{h}(z,y_{h};\boldsymbol{\rho})=\sum_{\ell=1}^{N_{\text{aff}}}\theta_{\ell}^{a}(\boldsymbol{\rho})a_{\ell,h}(z,y_{h}),\qquad l_{h}(y_{h};\boldsymbol{\rho})=\sum_{\ell=1}^{N_{\text{aff}}}\theta_{\ell}^{f}(\boldsymbol{\rho})l_{\ell,h}(y_{h}). \tag{40}\] Then, we select a reduced space \(V_{r}\subset V_{h}\) provided by a compression algorithm, e.g. SVD/POD/PCA [53, 58, 83] or the Greedy algorithm [70, 69, 44, 20]. We suppose that the reduced dimension \(r\) is much smaller than the dimension \(N_{h}\) of the full order model space \(V_{h}\). As ansatz for the reduced solution \(z_{\text{RB}}\in V_{r}\), we take a linear combination of the basis functions \(\{\psi_{j}^{\text{RB}}\}_{j=1}^{r}\) of \(V_{r}\), i.e., \[z_{\text{RB}}=\sum_{j=1}^{r}z_{\text{RB}}^{j}\psi_{j}^{\text{RB}}, \tag{41}\] then, performing a standard Galerkin projection, we obtain the following RB problem.
**Definition 2** (Reduced Basis Problem).: _Find \(z_{RB}\in V_{r}\), given by the coefficients \(z_{RB}^{j}\), such that_ \[\sum_{j=1}^{r}z_{RB}^{j}(\boldsymbol{\rho})\sum_{\ell=1}^{N_{\text{aff}}}\theta_{\ell}^{a}(\boldsymbol{\rho})a_{\ell,h}(\psi_{j}^{RB},\psi_{i}^{RB})=\sum_{\ell=1}^{N_{\text{aff}}}\theta_{\ell}^{f}(\boldsymbol{\rho})(f_{\ell},\psi_{i}^{RB}),\qquad\text{for all }i=1,\ldots,r. \tag{42}\] The obtained problem scales with the dimensions \(r\) and \(N_{\text{aff}}\) in its assembly and only with \(r\) in its solution, and it is completely independent of \(N_{h}\). To obtain computational advantages for the parametric problem, we split the tasks into an expensive _offline phase_ and a cheap _online phase_. In the _offline phase_, we find the reduced space \(V_{r}\) and we assemble the reduced matrices and right-hand sides \[A_{\ell}:=\{a_{\ell,h}(\psi_{j}^{\text{RB}},\psi_{i}^{\text{RB}})\}_{i,j},\qquad b_{\ell}:=\{(f_{\ell},\psi_{i}^{\text{RB}})\}_{i}. \tag{43}\] In the _online phase_, we can simply evaluate the coefficients \(\theta_{\ell}^{a}(\boldsymbol{\rho})\) and \(\theta_{\ell}^{f}(\boldsymbol{\rho})\) and obtain the reduced linear system \[A(\boldsymbol{\rho})z_{\text{RB}}=b(\boldsymbol{\rho}),\qquad\text{with }A(\boldsymbol{\rho}):=\sum_{\ell}\theta_{\ell}^{a}(\boldsymbol{\rho})A_{\ell}\text{ and }b(\boldsymbol{\rho}):=\sum_{\ell}\theta_{\ell}^{f}(\boldsymbol{\rho})b_{\ell}. \tag{44}\] This gives a great speed-up in computational times; the sketch below illustrates this offline/online split.
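The following self-contained numpy sketch is a schematic illustration with randomly generated placeholder operators \(A_\ell\), \(b_\ell\) and basis \(\Psi\), not our deal.II/petsc4py implementation: the offline projection (43) is performed once, while the online system (44) is assembled and solved at a cost independent of \(N_h\).

```python
# Sketch of the offline/online split of (43)-(44) with placeholder operators.
import numpy as np

rng = np.random.default_rng(0)
N_h, r, N_aff = 200, 5, 3
Psi = np.linalg.qr(rng.standard_normal((N_h, r)))[0]      # RB matrix, N_h x r
A_full = [np.eye(N_h) + 0.01 * rng.standard_normal((N_h, N_h))
          for _ in range(N_aff)]                          # affine terms of a_h
b_full = [rng.standard_normal(N_h) for _ in range(N_aff)] # affine terms of l_h

# Offline phase: project each parameter-independent term once, eq. (43).
A_rb = [Psi.T @ A @ Psi for A in A_full]   # r x r matrices
b_rb = [Psi.T @ b for b in b_full]         # r-vectors

# Online phase: assemble and solve the reduced system (44), cost O(r^3).
def solve_rb(theta_a, theta_f):
    A = sum(t * Al for t, Al in zip(theta_a, A_rb))
    b = sum(t * bl for t, bl in zip(theta_f, b_rb))
    return np.linalg.solve(A, b)           # reduced coefficients z_RB^j

z_rb = solve_rb([1.0, 0.3, 0.2], [1.0, 0.0, 0.5])
z_h_approx = Psi @ z_rb                    # lift back to the DG space V_h
```

Only the last two lines are executed for each new parameter instance.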
### Reduced basis a posteriori error estimate We derive two error estimators, for the energy norm and the \(L^{2}\) norm of the reduced basis error \(e_{h}=z_{h}-z_{RB}\in V_{h}\), following the procedure in [44]. Exploiting the equality in (31), we obtain the following lower bound \[a_{h}(e_{h},e_{h})= a_{h}^{cf}(e_{h},e_{h})+s_{h}(e_{h},e_{h}) \tag{45a}\] \[= \sum_{T\in\mathcal{T}_{h}}(Ae_{h},e_{h})_{L^{2}(T)}+\tfrac{1}{2}\sum_{F\in\mathcal{F}_{h}^{b}}\left((\mathcal{M}-\mathcal{D})e_{h},e_{h}\right)_{L^{2}(F)}-\sum_{F\in\mathcal{F}_{h}^{i}}(\mathcal{D}_{F}[\![e_{h}]\!],\{\!\{e_{h}\}\!\})_{L^{2}(F)}+ \tag{45b}\] \[\sum_{F\in\mathcal{F}_{h}^{b}}(S_{F}^{b}e_{h},e_{h})_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{i}}(S_{F}^{i}[\![e_{h}]\!],[\![e_{h}]\!])_{L^{2}(F)} \tag{45c}\] \[= \sum_{T\in\mathcal{T}_{h}}\left((A^{0}-\tfrac{1}{2}\mathcal{X})e_{h},e_{h}\right)_{L^{2}(T)}+\tfrac{1}{2}\sum_{F\in\mathcal{F}_{h}^{b}}\left(\mathcal{M}e_{h},e_{h}\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{b}}(S_{F}^{b}e_{h},e_{h})_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{i}}(S_{F}^{i}[\![e_{h}]\!],[\![e_{h}]\!])_{L^{2}(F)} \tag{45d}\] \[\geq \mu_{0}\|e_{h}\|_{L}^{2}+\tfrac{1}{2}|e_{h}|_{M}^{2}+|e_{h}|_{S}^{2}, \tag{45e}\] where we have defined \(|\cdot|_{M}^{2}=\sum_{F\in\mathcal{F}_{h}^{b}}\left(\mathcal{M}\cdot,\cdot\right)_{L^{2}(F)}\) and \(|\cdot|_{S}^{2}=s_{h}(\cdot,\cdot)\). We define the \(R\)-norm \[\|y_{h}\|_{R}^{2}=\mu_{0}\|y_{h}\|_{L}^{2}+\tfrac{1}{2}|y_{h}|_{M}^{2}+|y_{h}|_{S}^{2},\quad\forall y_{h}\in V_{h}, \tag{46}\] which may depend on \(\boldsymbol{\rho}\) only through \(\mu_{0}\) and is generated by the scalar product \[\langle u_{h},v_{h}\rangle_{R}=\mu_{0}\sum_{T\in\mathcal{T}_{h}}(u_{h},v_{h})_{L^{2}(T)}+\tfrac{1}{2}\sum_{F\in\mathcal{F}_{h}^{b}}\left(\mathcal{M}^{\text{sym}}u_{h},v_{h}\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{b}}(S_{F}^{b}u_{h},v_{h})_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{i}}(S_{F}^{i}[\![u_{h}]\!],[\![v_{h}]\!])_{L^{2}(F)}. \tag{47}\] The boundary operators we will employ in our benchmarks are all skew-symmetric, so \(\mathcal{M}^{\text{sym}}=\frac{\mathcal{M}+\mathcal{M}^{t}}{2}\) is the null matrix and \(|e_{h}|_{M}=0\). Now, we can proceed to provide an _a posteriori_ error estimate for the \(R\)-norm and energy norm. Hence, let us define \(r_{RB}(y_{h})=l_{h}(y_{h};\boldsymbol{\rho})-a_{h}(z_{RB},y_{h};\boldsymbol{\rho})\) and its \(R\)- and \(L\)-Riesz representations \(\hat{r}_{R}\) and \(\hat{r}_{L}\) such that \[r_{RB}(u_{h})=\langle\hat{r}_{R},u_{h}\rangle_{R},\quad X_{R}\hat{\mathbf{r}}_{R}=\mathbf{L}_{h}-A_{h}\mathbf{z}_{RB},\quad r_{RB}(u_{h})=\langle\hat{r}_{L},u_{h}\rangle_{L},\quad X_{L}\hat{\mathbf{r}}_{L}=\mathbf{L}_{h}-A_{h}\mathbf{z}_{RB}, \tag{48}\] where \(X_{R}\) and \(X_{L}\) are the \(R\)-norm and \(L\)-norm mass matrices, and \(\mathbf{L}_{h}\), \(A_{h}\) and \(\mathbf{z}_{RB}\) are the representations of \(l_{h}(\cdot;\boldsymbol{\rho})\), \(a_{h}(\cdot,\cdot;\boldsymbol{\rho})\) and \(z_{RB}\) in the DG basis of \(V_{h}\). The representation \(\hat{r}_{L}\) can be computed cheaply when the parametric model is affinely decomposable with respect to the parameters, while \(\hat{r}_{R}\) requires the inversion of a possibly parameter dependent matrix \(X_{R}\). Now, considering the energy norm of the error \(\|e_{h}\|_{nrg}^{2}=a_{h}(e_{h},e_{h})\) and the coercivity bound \(\|e_{h}\|_{nrg}^{2}\geq\mu_{0}\|e_{h}\|_{L}^{2}\) derived in (45e), we have the following _a posteriori_ error estimates \[\frac{\|e_{h}\|_{nrg}}{\|z_{h}\|_{nrg}}\leq\frac{\|\hat{r}_{R}\|_{R}}{\|z_{h}\|_{nrg}},\qquad\frac{\|e_{h}\|_{R}}{\|z_{h}\|_{R}}\leq\frac{\|\hat{r}_{R}\|_{R}}{\|z_{h}\|_{R}},\qquad\frac{\|e_{h}\|_{nrg}}{\|z_{h}\|_{nrg}}\leq\frac{\|\hat{r}_{L}\|_{L}}{\|z_{h}\|_{nrg}},\qquad\frac{\|e_{h}\|_{L}}{\|z_{h}\|_{L}}\leq\frac{\|\hat{r}_{L}\|_{L}}{\|z_{h}\|_{L}}, \tag{49}\] namely, the relative energy error and the relative \(R\)-norm error with the corresponding _a posteriori_ \(R\)-norm estimator, and the relative energy error and the relative \(L\)-norm error with the corresponding _a posteriori_ \(L\)-norm estimator. ### Optimally stable error estimates for the ultraweak Petrov-Galerkin formulation In this section, we show that Friedrichs' systems are a desirable unifying formulation to consider when performing model order reduction also thanks to the possibility of achieving an optimally stable formulation. This can further simplify the error estimator analysis, reaching equality between the error and the residual norm. This is not the first case in which optimally stable formulations are introduced also at the reduced level, see [11, 39, 42]. In the following, we describe how to achieve this ultraweak formulation and we delineate the path one should follow to use such a formulation. Nevertheless, we will not use this formulation in our numerical tests and we leave the implementation to future studies. We introduce the following Discontinuous Petrov-Galerkin (DPG) formulation from [12].
To do so, we first define \(V(\mathcal{T}_{h})\), the broken graph space with norm \(\|\bullet\|_{V(\mathcal{T}_{h})}^{2}=\|\bullet\|_{L}^{2}+\sum_{T\in\mathcal{T}_{h}}\|A\bullet\|_{L^{2}(T)}^{2}\), and \(\tilde{V}=V/Q(\Omega)\), the quotient of the graph space \(V\) with \[\begin{split}Q(\Omega)=&\left\{z\in V\,\bigg{|}\sum_{T\in\mathcal{T}_{h}}\langle Dz,y\rangle_{V^{\prime}(T),V(T)}+\tfrac{1}{2}\langle(M-D)z,y\rangle_{V^{\prime}(\Omega),V(\Omega)}=0,\quad\forall y\in V(\mathcal{T}_{h})\right\}\\ =&\left\{z\in V\,\Big{|}\,a(z,y)=0,\quad\forall y\in V(\mathcal{T}_{h})\right\}.\end{split} \tag{50}\] The DPG formulation reads: find \((z,q)\in L\times\tilde{V}\) such that, for all \(y\in V(\mathcal{T}_{h})\), \[\sum_{T\in\mathcal{T}_{h}}(z,\tilde{A}y)_{L^{2}(T)}+\sum_{T\in\mathcal{T}_{h}}\langle Dq,y\rangle_{V^{\prime}(T),V(T)}+\tfrac{1}{2}\langle(M-D)q,y\rangle_{V^{\prime},V}=\sum_{T\in\mathcal{T}_{h}}(f,y)_{L^{2}(T)}+\tfrac{1}{2}\langle(M-D)g,y\rangle_{V^{\prime},V}. \tag{51}\] The introduction of the hybrid face variables \(q\in\tilde{V}\) is necessary since \(z\in L\) does not satisfy (8). In practice, assuming that the traces of \(y\in V(\mathcal{T}_{h})\) are well-defined and belong to a space \(X(\mathcal{F}_{i,b})\), we can formulate (51) as follows: find \((z,q)\in L\times X(\mathcal{F}_{i,b})\) such that, for all \(y\in V(\mathcal{T}_{h})\), \[\sum_{T\in\mathcal{T}_{h}}(z,\tilde{A}y)_{L^{2}(T)}+\sum_{F\in\mathcal{F}_{i}}(\mathcal{D}q,[\![y]\!])_{X(F)}+\tfrac{1}{2}\sum_{F\in\mathcal{F}_{b}}((\mathcal{M}-\mathcal{D})q,y)_{X(F)}=\sum_{T\in\mathcal{T}_{h}}(f,y)_{L^{2}(T)}+\tfrac{1}{2}\sum_{F\in\mathcal{F}_{b}}((\mathcal{M}-\mathcal{D})g,y)_{X(F)}, \tag{52}\] where \(X(\mathcal{F}_{i,b})\) is, for example, \([H^{-\frac{1}{2}}(\mathcal{F}_{i,b})]^{d}\times[H^{\frac{1}{2}}(\mathcal{F}_{i,b})]^{d}\) for compressible linear elasticity, \(H^{-\frac{1}{2}}(\mathcal{F}_{i,b})\times H^{\frac{1}{2}}(\mathcal{F}_{i,b})\) for the scalar advection-diffusion-reaction equation and \(L^{2}_{\mathcal{T}}(\mathcal{F}_{i,b})\times L^{2}_{\mathcal{T}}(\mathcal{F}_{i,b})\) for the Maxwell equations in stationary regime, with \(L^{2}_{\mathcal{T}}(\mathcal{F}_{i,b})\) being the space of fields in \(H(\operatorname{curl},\mathcal{T}_{h})\) whose tangential component belongs to \([L^{2}(\mathcal{F}_{i,b})]^{3}\). The problem (51) above is well-posed and consistent [12, Lemma 2.4] with the previous formulation in (13a). We consider the optimal norms \[\|(z,q)\|_{\mathcal{U}}^{2}=\sum_{T\in\mathcal{T}_{h}}\|z\|_{L(T)}^{2}+\|q\|_{\tilde{V}}^{2},\qquad\|y\|_{\mathcal{Y}}^{2}=\sum_{T\in\mathcal{T}_{h}}\|\tilde{A}y\|_{L(T)}^{2}+\|[\![y]\!]\|_{\partial\Omega_{h}}^{2},\quad\text{with}\quad\|[\![y]\!]\|_{\partial\Omega_{h}}=\sup_{q\in\tilde{V}}\frac{a(q,y)}{\|q\|_{\tilde{V}}}, \tag{53}\] or, formally, considering (52), \[\|(z,q)\|_{\mathcal{U}}^{2}=\sum_{T\in\mathcal{T}_{h}}\|z\|_{L(T)}^{2}+\sum_{F\in\mathcal{F}_{i,b}}\|q\|_{X(F)}^{2},\qquad\|y\|_{\mathcal{Y}}^{2}=\sum_{T\in\mathcal{T}_{h}}\|\tilde{A}y\|_{L(T)}^{2}+\sum_{F\in\mathcal{F}_{i}}\|\mathcal{D}_{F}[\![y]\!]\|_{X(F)}^{2}+\sum_{F\in\mathcal{F}_{b}}\|(\mathcal{M}^{t}-\mathcal{D})y\|_{X(F)}^{2}. \tag{54}\] With these optimal norms for the trial and test spaces we have the following result [13, Theorem 2.6].
**Theorem 4** (Optimally stable formulation).: _The bilinear form \(b\) on \((L\times\tilde{V},\|\bullet\|_{\mathcal{U}})\times(V(\mathcal{T}_{h}),\|\bullet\|_{\mathcal{Y}})\) defined as_ \[b(u,y)=\sum_{T\in\mathcal{T}_{h}}(z,\tilde{A}y)_{L^{2}(T)}+\sum_{T\in\mathcal{T}_{h}}\langle Dq,y\rangle_{V^{\prime}(T),V(T)}+\tfrac{1}{2}\langle(M-D)q,y\rangle_{V^{\prime},V} \tag{55}\] _with \(u=(z,q)\), induces an isometry between \(L\times\tilde{V}\) and \(V^{\prime}(\mathcal{T}_{h})\): we have \(\gamma=\beta=\beta^{*}=1\), where_ \[\gamma:=\sup_{u\in\mathcal{U}}\sup_{y\in\mathcal{Y}}\frac{b(u,y)}{\|u\|_{\mathcal{U}}\|y\|_{\mathcal{Y}}},\quad\beta:=\inf_{u\in\mathcal{U}}\sup_{y\in\mathcal{Y}}\frac{b(u,y)}{\|u\|_{\mathcal{U}}\|y\|_{\mathcal{Y}}},\quad\beta^{*}:=\inf_{y\in\mathcal{Y}}\sup_{u\in\mathcal{U}}\frac{b(u,y)}{\|u\|_{\mathcal{U}}\|y\|_{\mathcal{Y}}}, \tag{56}\] _with \(\mathcal{U}=(L\times\tilde{V},\|\bullet\|_{\mathcal{U}})\)._ This property is inherited at the discrete level as long as, fixed a discretization \(U_{h}^{N_{h}}\subset\mathcal{U}\) of the trial space with \(\dim U_{h}^{N_{h}}=N_{h}\), the discrete test space \(Y_{h}^{N_{h}}\subset V(\mathcal{T}_{h})\) is the set of supremizers \[Y_{h}^{N_{h}}=\operatorname{span}\left\{y_{u_{h}}\in V(\mathcal{T}_{h})\,\bigg{|}\,y_{u_{h}}=\operatorname*{argmax}_{y\in V(\mathcal{T}_{h})}\frac{b(u_{h},y)}{\|y\|_{\mathcal{Y}}},\quad u_{h}\in U_{h}^{N_{h}}\right\} \tag{57}\] and \(\dim U_{h}^{N_{h}}=\dim Y_{h}^{N_{h}}\), see [13, Lemma 2.8]. In particular, for every \(u_{h}=(z_{h},q_{h})\in U_{h}^{N_{h}}\), we have the optimal _a posteriori_ error estimate \[\|u-u_{h}\|_{\mathcal{U}}=\sup_{y\in V(\mathcal{T}_{h})}\frac{b(u-u_{h},y)}{\|y\|_{\mathcal{Y}}}=\|r_{h}(u_{h})\|_{(V(\mathcal{T}_{h}))^{\prime}},\quad\langle r_{h}(u_{h}),v\rangle_{(V(\mathcal{T}_{h}))^{\prime},V(\mathcal{T}_{h})}=\langle f,v\rangle_{(V(\mathcal{T}_{h}))^{\prime},V(\mathcal{T}_{h})}-b(u_{h},v). \tag{58}\] The same reasoning can be iterated once more to perform model order reduction with the choice
\(V_{n}=\operatorname{span}\{\psi_{j}^{\rm RB}\}_{j=1}^{r}\subset U_{h}^{N_{h}}\subset\mathcal{U}\), and \[Y^{RB}=\operatorname{span}\left\{y_{u_{RB}}\in V(\mathcal{T}_{h})\,\Big{|}\,y_{u_{RB}}=\operatorname*{argmax}_{y\in V(\mathcal{T}_{h})}\frac{b(u_{RB},y)}{\|y\|_{\mathcal{Y}}},\quad u_{RB}\in V_{n}\right\}, \tag{59}\] such that for \(u_{RB}\in V_{n}\), \[\|u-u_{RB}\|_{\mathcal{U}}=\|r_{RB}(u_{RB})\|_{(V(\mathcal{T}_{h}))^{\prime}},\quad\langle r_{RB}(u_{RB}),v\rangle_{(V(\mathcal{T}_{h}))^{\prime},V(\mathcal{T}_{h})}=\langle f,v\rangle_{(V(\mathcal{T}_{h}))^{\prime},V(\mathcal{T}_{h})}-b(u_{RB},v). \tag{60}\] The main difficulty is the evaluation of the test spaces \(Y_{h}^{N_{h}}\) and \(Y^{RB}\), since the bilinear form \(b\) may depend on the parameters \(\boldsymbol{\rho}\). If the parameters affect only the source terms, the boundary conditions or the initial conditions for time-dependent FS, this problem is avoided. The evaluation of \(Y_{h}^{N_{h}}\) can be performed locally for each element \(T\in\mathcal{T}_{h}\), differently from \(Y^{RB}\). An example of the evaluation of the basis of \(Y_{h}^{N_{h}}\) is presented in [13, Equations 24, 25] for linear scalar hyperbolic equations that can be interpreted as FS. ## 5 Domain decomposable Discontinuous Galerkin ROMs Extreme-scale parametric models are unfeasible to reduce with standard approaches due to the high computational costs of the offline stage. Parametric multi-physics simulations, such as fluid-structure interaction problems, are reduced inefficiently with a global reduced basis, depending on the complexity of the interactions between the physical models considered and on the parametric dependency. In some cases, only a part of a decomposable system is reducible with a ROM, so a possible solution is to implement a ROM-FOM coupling through an interface. In the presence of moving shocks [7] affected by the parametrization, one may want to isolate these features, which are difficult to approximate, and apply different dimension reduction methodologies depending on the subdomain. These are the main reasons to develop domain decomposable or partitioned ROMs (DD-ROMs). Some approaches from the literature are the reduced basis element method [62, 63], the static condensation reduced basis element method [48, 27], non-intrusive methods based on local regressions in each subdomain [86, 87], overlapping Schwarz methods [23, 50], optimization-based MOR approaches [71, 72] and hyper-reduced ROMs [60]. In this last case, local approximations are useful because the local reduced dimensions are smaller and therefore more accurate local regressions can be designed to perform non-intrusive surrogate modelling. Little has been developed for the DG method, even though its formulation naturally imposes flux and solution interface penalties at the internal boundaries of the subdomains, which is convenient in perspective of performing model order reduction. In our case, the linear systems associated with the parametric models are algebraically partitioned into disjoint subdomains coupled with the standard penalties from the weak DG formulation, without the need to devise additional coupling operators, as long as the interface cuts fall on the cell boundaries. Another less explored feature of DD-ROMs is the possibility to repartition the computational domain, while keeping the data structures relative to each subdomain local in memory, with the aim of obtaining more efficient or accurate ROMs. In fact, one additional reason to subdivide the computational domain is to partition the solution manifold into local solution manifolds that have a faster decay of the Kolmogorov n-width. The repartitioning of the computational domain can be performed with _ad hoc_ domain decomposition strategies. To our knowledge, the only case found in the literature is introduced in [87], where the degrees of freedom are split among the subdomains by minimising the communication and activity between them and balancing the computational load across them. The choice of the weights assigned to each degree of freedom is relevant: uniform weights, nodal values of the Reynolds stresses for the turbulent Navier-Stokes equations, or the largest singular value of the discarded local POD modes. In particular, the last option results in a balance of the energy in \(L^{2}\) norm retained in each subdomain. We explore a different approach. It must be remarked that, in any case, for a fixed value of the reconstruction error of the training dataset in the Frobenius norm \(\|\cdot\|_{F}\), the total number of local reduced basis functions must be at least as large as the dimension the global reduced basis space would need to achieve the same accuracy.
In fact, by the Eckart-Young theorem, if \(X\in\mathbb{R}^{d\times n}\) is the snapshots matrix ordered by columns, the projection onto the first \(k\) modes \(\{v_{i}\}_{i=1}^{k}\) achieves the best approximation error in the Frobenius norm among all rank-\(k\) projections: \[P_{k}=\mathop{\rm argmin}_{P\in\mathbb{R}^{d\times d}\ \text{s.t.}\ r(P)=k}\|X-PX\|_{F},\quad P_{k}=\sum_{i=1}^{k}v_{i}\otimes v_{i}, \tag{61}\] where \(r(\cdot)\) is the matrix rank. So, in general, it is not possible to achieve a better training approximation error in the Frobenius norm \(\|\cdot\|_{F}\) employing a number of local reduced basis functions smaller than what would be needed to achieve the same accuracy with a global reduced basis. So, differently from [87], instead of balancing the local reduced basis dimension among subdomains, we repartition the computational domain into regions whose restricted solution manifold is easily approximable by linear subspaces and regions for which more modes are needed. Anyway, for truly decomposable systems we expect that the reconstruction error on the test set is lower when considering local reduced bases instead of a global one, as will be shown for the Maxwell equations in the stationary regime test case with discontinuous piecewise constant parameters, see Figure 7.
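A minimal numerical illustration of this bound, with a synthetic snapshot matrix in place of the DG solutions, is the following sketch: the rank-\(k\) POD projector attains exactly the tail of the singular values, while independent rank-\(k\) local bases on two row blocks use \(2k\) basis functions in total.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))            # synthetic d x n snapshot matrix

U, S, _ = np.linalg.svd(X, full_matrices=False)
k = 5
P_k = U[:, :k] @ U[:, :k].T                   # rank-k POD projector of (61)
global_err = np.linalg.norm(X - P_k @ X)      # equals sqrt(sum_{i>k} S_i^2)
assert np.isclose(global_err, np.sqrt((S[k:] ** 2).sum()))

# local bases: independent rank-k POD on two row blocks (2k dofs in total)
local_err2 = 0.0
for rows in (slice(0, 100), slice(100, 200)):
    Ui, _, _ = np.linalg.svd(X[rows], full_matrices=False)
    local_err2 += np.linalg.norm(X[rows] - Ui[:, :k] @ (Ui[:, :k].T @ X[rows])) ** 2
print(global_err, np.sqrt(local_err2))        # compare global vs local errors
```

### Implementation of Domain Decomposable ROMs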
Let us assume that the full-order model is implemented in parallel with distributed memory parallelism on \(K>1\) cores, i.e., the \(i\)-th core owns locally only the data structures relevant to its assigned subdomain \(\Omega_{i}\) of the whole computational domain \(\cup_{i=1}^{K}\Omega_{i}=\Omega_{h}\subset\mathbb{R}^{d}\), for \(i=1,\ldots,K\). We employ the deal.II library [6] to discretize the FS with the DG method, assemble the associated linear systems and solve them in parallel [9]. In particular, we employ p4est[14] to decompose the computational domain, PETSc[8] to assemble the linear system and solve it at the full-order level, and petsc4py[22] to assemble and solve the reduced order system. At the offline and online stages the computations are performed in a distributed memory setting in which each core assembles its own affine decomposition, so that the evaluation of the reduced basis and of the projected local operators is always performed in parallel. The weak formulation (35) is easily decomposable thanks to the additivity of the integrals. We recall the definition of the weak formulation, \(\forall y_{h}\in V_{h},\ z_{h}\in V^{*}\), \[a_{h}^{cf}(z_{h},y_{h})+s_{h}(z_{h},y_{h})=\sum_{T\in\mathcal{T}_{h}}(z_{h},\tilde{A}y_{h})_{L^{2}(T)}+\frac{1}{2}\sum_{F\in\mathcal{F}_{h}^{b}}\left((\mathcal{M}+\mathcal{D})z_{h},y_{h}\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{i}}\left(\mathcal{D}_{F}\{\!\{z_{h}\}\!\},\llbracket y_{h}\rrbracket\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{b}}(S_{F}^{b}z_{h},y_{h})_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h}^{i}}(S_{h}^{i}\llbracket z_{h}\rrbracket,\llbracket y_{h}\rrbracket)_{L^{2}(F)}, \tag{62}\] and we decompose it into the \(K\) subdomains as \[a_{h}^{cf}(z_{h},y_{h})+s_{h}(z_{h},y_{h})=\sum_{i=1}^{K}\left(\sum_{T\in\mathcal{T}_{h,i}}(z_{h},\tilde{A}y_{h})_{L^{2}(T)}+\frac{1}{2}\sum_{F\in\mathcal{F}_{h,i}^{b}}\left((\mathcal{M}+\mathcal{D})z_{h},y_{h}\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h,i}^{i}}\left(\mathcal{D}_{F}\{\!\{z_{h}\}\!\},\llbracket y_{h}\rrbracket\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h,i}^{b}}(S_{F}^{b}z_{h},y_{h})_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h,i}^{i}}(S_{h}^{i}\llbracket z_{h}\rrbracket,\llbracket y_{h}\rrbracket)_{L^{2}(F)}\right)+\sum_{i=1}^{K}\sum_{j=i+1}^{K}\left(\sum_{F\in\mathcal{F}_{h,i,j}^{i}}\left(\mathcal{D}_{F}\{\!\{z_{h}\}\!\},\llbracket y_{h}\rrbracket\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h,i,j}^{i}}(S_{h}^{i}\llbracket z_{h}\rrbracket,\llbracket y_{h}\rrbracket)_{L^{2}(F)}\right), \tag{63}\] \[l_{h}(y_{h})=\sum_{T\in\mathcal{T}_{h}}(f,y_{h})_{L^{2}(T)}+\frac{1}{2}\sum_{F\in\mathcal{F}_{h}^{b}}\left((M-D)g,y_{h}\right)_{L^{2}(F)}=\sum_{i=1}^{K}\left(\sum_{T\in\mathcal{T}_{h,i}}(f,y_{h})_{L^{2}(T)}+\frac{1}{2}\sum_{F\in\mathcal{F}_{h,i}^{b}}\left((M-D)g,y_{h}\right)_{L^{2}(F)}\right), \tag{64}\] where we have defined the internal subsets \(\mathcal{T}_{h,i}=\mathcal{T}_{h}\cap\Omega_{i}\), \(\mathcal{F}_{h,i}^{i}=\mathcal{F}_{h}^{i}\cap\overline{\Omega}_{i}\) and \(\mathcal{F}_{h,i}^{b}=\mathcal{F}_{h}^{b}\cap\overline{\Omega}_{i}\), \(\forall i=1,\ldots,K\), and the interface subsets \(\mathcal{F}_{h,i,j}^{i}=\mathcal{F}_{h}^{i}\cap\overline{\Omega}_{i}\cap\overline{\Omega}_{j}\) and \(\mathcal{F}_{h,i,j}^{b}=\mathcal{F}_{h}^{b}\cap\overline{\Omega}_{i}\cap\overline{\Omega}_{j}\), \(\forall i,j=1,\ldots,K,\ i\neq j\). We remark that the computational domain is always decomposed such that the cuts of the subdomains \(\{\partial\Omega_{i}\}_{i=1}^{K}\) fall on the interfaces of the triangulation \(\mathcal{F}_{h}^{i}\cup\mathcal{F}_{h}^{b}\).
We define the bilinear and linear operators in \(V_{h}^{*}\), \[\mathcal{A}_{ii}=\sum_{T\in\mathcal{T}_{h,i}}(\bullet,\tilde{A}\bullet)_{L^{2}(T)}+\frac{1}{2}\sum_{F\in\mathcal{F}_{h,i}^{b}}\left((\mathcal{M}+\mathcal{D})\bullet,\bullet\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h,i}^{i}}\left(\mathcal{D}_{F}\{\!\{\bullet\}\!\},\llbracket\bullet\rrbracket\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h,i}^{b}}(S_{F}^{b}\bullet,\bullet)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h,i}^{i}}(S_{h}^{i}\llbracket\bullet\rrbracket,\llbracket\bullet\rrbracket)_{L^{2}(F)},\quad\forall i=1,\ldots,K, \tag{65}\] \[\mathcal{A}_{ij}=\mathcal{A}_{ji}=\sum_{F\in\mathcal{F}_{h,i,j}^{i}}\left(\mathcal{D}_{F}\{\!\{\bullet\}\!\},\llbracket\bullet\rrbracket\right)_{L^{2}(F)}+\sum_{F\in\mathcal{F}_{h,i,j}^{i}}(S_{h}^{i}\llbracket\bullet\rrbracket,\llbracket\bullet\rrbracket)_{L^{2}(F)},\quad\forall j,i=1,\ldots,K,\ i\neq j, \tag{66}\] \[\mathcal{F}_{i}=\sum_{T\in\mathcal{T}_{h,i}}(f,\bullet)_{L^{2}(T)}+\frac{1}{2}\sum_{F\in\mathcal{F}_{h,i}^{b}}\left((M-D)g,\bullet\right)_{L^{2}(F)},\quad\forall i=1,\ldots,K, \tag{67}\] and their matrix representation in the discontinuous Galerkin basis of \(V_{h}\), \[\left(\mathcal{A}_{ii}\right)\big|_{V_{h}}=A_{ii},\qquad\mathcal{F}_{i}\big|_{V_{h}}=F_{i},\quad\forall i=1,\ldots,K,\qquad\left(\mathcal{A}_{ij}\right)\big|_{V_{h}}=\left(\mathcal{A}_{ji}\right)\big|_{V_{h}}=A_{ij}=A_{ji},\quad\forall j,i=1,\ldots,K,\ i\neq j, \tag{68}\] and in the local reduced basis \(V_{i}=\{\psi_{j,i}^{\rm RB}\}_{j=1}^{r}\subset V_{h}(\Omega_{i}),\ i=1,\ldots,K\), \[\left(\mathcal{A}_{ii}\right)\big|_{V_{RB}}=B_{ii},\qquad\mathcal{F}_{i}\big|_{V_{RB}}=L_{i},\quad\forall i=1,\ldots,K,\qquad\left(\mathcal{A}_{ij}\right)\big|_{V_{RB}}=\left(\mathcal{A}_{ji}\right)\big|_{V_{RB}}=B_{ij}=B_{ji},\quad\forall j,i=1,\ldots,K,\ i\neq j. \tag{69}\] As anticipated, in our test cases the subdomains' interface penalties are naturally included inside \(\{\mathcal{A}_{ij}\}_{i,j=1,\ldots,K}\). In practice, additional penalty terms could be implemented: \[\mathcal{S}_{ij}=\sum_{F\in\mathcal{F}_{h,i,j}^{i}}(S\llbracket\bullet\rrbracket,\llbracket\bullet\rrbracket)_{L^{2}(F)},\qquad\mathcal{S}_{ij}\big|_{V_{h}}=S_{ij},\qquad\left(\mathcal{A}_{ij}+\mathcal{S}_{ij}\right)\big|_{V_{RB}}=B_{ij},\quad\forall j,i=1,\ldots,K,\ i\neq j. \tag{70}\] A matrix representation of the projection of the full-order block matrix \((A_{ij})_{i,j=1}^{K}\in\mathbb{R}^{d\times d}\) into the reduced order block matrix \((B_{ij})_{i,j=1}^{K}\in\mathbb{R}^{Kr\times Kr}\) is shown in Figure 1 for \(K=4\); a minimal sketch of this projection in code is reported at the end of this subsection. We remark that, differently from continuous Galerkin formulations, the DG penalization of the jumps across the interfaces is already enough to couple the subdomains and there is no need for further stabilization, as shown in Figure 1. Nonetheless, additional interface penalty terms can be easily introduced, taking into account also DG numerical fluxes. The reduced dimension is the number of subdomains \(K\) times the local reduced basis dimensions \(\{r_{i}\}_{i=1}^{K}\), here assumed equal, \(r=r_{i},\ i=1,\ldots,K\), though in general they can differ.
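The offline projection sketched in Figure 1 amounts, for each owned block, to a two-sided multiplication with the local bases; a minimal sketch of (69), with a hypothetical dictionary-based block layout, is the following.

```python
import numpy as np

def project_blocks(A_blocks: dict, V: dict) -> dict:
    """B_ij = V_i^T A_ij V_j as in (69). A_blocks[(i, j)] are the full-order
    (sparse or dense) blocks owned locally; V[i] is the N_i x r_i local basis."""
    return {(i, j): V[i].T @ (A_ij @ V[j]) for (i, j), A_ij in A_blocks.items()}

def project_rhs(F_blocks: dict, V: dict) -> dict:
    """L_i = V_i^T F_i, the reduced right-hand sides of the online system."""
    return {i: V[i].T @ F_i for i, F_i in F_blocks.items()}
```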
### Repartitioning strategy

A great number of subdomains can pollute the efficiency of the developed DD-ROMs at the online stage, since the reduced dimension would be \(\sum_{i=1}^{K}r_{i}\), which scales linearly with the number of cores when the local reduced dimensions \(r_{i}\) are equal. In order to keep the computational savings in the assembly of the affine decomposition at the offline stage, we want to preserve the distributed property of our ROM. One possible solution is to fix a reduced number of subdomains \(k\ll K\) such that \(\sum_{i=1}^{k}r_{i}\) is small enough to achieve a significant speedup with respect to the FOM. The additional cost with respect to a monodomain ROM is associated with the evaluation of the \(k\) local reduced bases with the SVD and with the assembly of the affine decomposition operators. The new \(k\) reduced subdomains do not need to be agglomerations of the FOM subdomains; hence, different strategies to assemble the new \(k\) reduced subdomains can be investigated. The number of subdomains \(K\) was kept the same as the FOM since it is necessary to collect the snapshots efficiently at the full-order level through _p4est_. However, if we decide to repartition our computational domain, we can develop decomposition strategies that reduce \(\sum_{i=1}^{K}r_{i}\). Ideally, having in mind the Eckart-Young theorem, a possible strategy is to lump together all the dofs of the cells that have a fast decaying Kolmogorov n-width, and focus on the remaining ones. We test this procedure in the practical case \(k=2,\ K=4\) in the numerical experiments of section 5.3. To solve the classification problem of partitioning the elements of the mesh into \(k\) subdomains, we describe here two scalar indicators that will be used as metrics. For \(k=2\) subdomains, it is sufficient to select the percentage of cells \(P_{l}\) corresponding to the lowest values of the chosen scalar indicator. Other strategies for \(k>2\) may also involve clustering algorithms and techniques to impose connectedness of the clusters, as done for local dimension reduction in parameter spaces in [76].

Figure 1: Assembly of the reduced block matrix \(\{B_{i,j}\}_{i,j=1}^{4}\) through the projection onto the local reduced basis \(\{V_{i}\}_{i=1}^{4}\) of the full-order partitioned matrix \(A=\{A_{i,j}\}_{i,j=1}^{4}\) when considering 4 subdomains. The natural DG penalty terms are included in the matrix \(A\) without the need for additional penalty terms \(\{S_{i,j}\}_{i\neq j,\,i,j=1}^{4}\) to impose stability at the reduced level.

A first crude and cheap indicator to repartition the computational domain is the cellwise variance of the training snapshots, as it measures how well, in the mean squared error sense, the training snapshots are approximated by their mean, \(\forall T\in\mathcal{T}_{h}\).

**Definition 3** (Cellwise variance indicator).: _We define the cellwise variance indicator \(I_{var}:\mathcal{T}_{h}\to\mathbb{R}^{+}\),_ \[I_{var}(T)=\int_{T}\lVert\text{Var}(\{\mathbf{z}(\boldsymbol{\rho}_{i})\}_{i=1}^{n})\rVert_{L^{2}(\mathbb{R}^{m})}\ d\mathbf{x},\qquad(\text{Var}(\{\mathbf{z}(\boldsymbol{\rho}_{i})\}_{i=1}^{n}))_{l}=\tfrac{1}{n}\sum_{i=1}^{n}\left|\mathbf{z}_{l}(\boldsymbol{\rho}_{i})-\tfrac{1}{n}\sum_{j=1}^{n}\mathbf{z}_{l}(\boldsymbol{\rho}_{j})\right|^{2},\quad l=1,\ldots,m, \tag{71}\] _where \(n>0\) is the number of training DG solutions \(\{\mathbf{z}(\boldsymbol{\rho}_{i})\}_{i=1}^{n}\) with \(\mathbf{z}(\boldsymbol{\rho}_{i}):\Omega\subset\mathbb{R}^{d}\to\mathbb{R}^{m},\ \forall i\in\{1,\ldots,n\}\)._ 

Note that the indicator is a scalar function on the set of elements of the triangulation \(\mathcal{T}_{h}\). This is possible thanks to the assumption that the boundaries of the subdomains belong to the interfaces of the elements of \(\mathcal{T}_{h}\). When this hypothesis is not fulfilled, we would need to evaluate additional operators to impose penalties at the algebraic interfaces between subdomains that are not included in the set \(\mathcal{F}_{h}^{i}\cup\mathcal{F}_{h}^{b}\), in order not to degrade the accuracy.
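A sketch of Definition 3 follows, assuming the snapshots are stored as an array of nodal values, with a hypothetical map from cells to their dof indices and nodal quadrature weights `w` approximating the cell integral.

```python
import numpy as np

def variance_indicator(Z: np.ndarray, cell_dofs: dict, w: np.ndarray) -> dict:
    """Z has shape (n, N_h, m): n training snapshots, N_h nodes, m fields."""
    var = Z.var(axis=0)                           # pointwise variance, (N_h, m)
    norm_var = np.linalg.norm(var, axis=1)        # norm over the m components
    # approximate the integral over each cell T in (71) with nodal weights
    return {T: float(w[dofs] @ norm_var[dofs]) for T, dofs in cell_dofs.items()}

# the P_l percent of cells with the lowest indicator values form the
# "low variance" repartitioned subdomain.
```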
The cellwise variance indicator is effective for all the test cases in which there is a relatively large region that is not sensitive to the parametric instances, as in our advection diffusion reaction test case in Section 5.3.3. Common examples are all the CFD numerical simulations that have a far field with fixed boundary conditions. However, the variance indicator may be blind to regions in which the snapshots can be spanned by a one- or higher-dimensional linear subspace but are not well approximated by a constant field, as in the compressible linear elasticity test case in Section 5.3.2. In these cases, a valid choice is represented by a cellwise Grassmannian dimension indicator. We denote with \(D_{T}\) the number of degrees of freedom associated to each element \(T\), assumed constant in our test cases.

**Definition 4** (Cellwise Grassmannian dimension indicator).: _Given fixed \(1\leq r_{T}\in\mathbb{N}\) and \(1\leq n_{\text{neigh}}\in\mathbb{N}\), we define the cellwise Grassmannian dimension indicator \(I_{G}:\mathcal{T}_{h}\to\mathbb{R}^{+}\),_ \[I_{G}(T)=\lVert X_{T}-U_{T}U_{T}^{T}X_{T}\rVert_{F}, \tag{72}\] _where \(X_{T}\in\mathbb{R}^{n_{\text{neigh}}D_{T}\times n}\) is the snapshots matrix restricted to the cell \(T\) and its \(n_{\text{neigh}}\) nearest neighbours, and \(U_{T}\in\mathbb{R}^{n_{\text{neigh}}D_{T}\times r_{T}}\) collects the modes of the truncated SVD of \(X_{T}\) of dimension \(r_{T}\)._ 

The cellwise Grassmannian dimension indicator \(I_{G}\) measures how well the training snapshots restricted to a neighbourhood of each cell are approximated by an \(r_{T}\)-dimensional linear subspace. Employing this indicator, we recover an effective repartitioning of the computational domain of the compressible linear elasticity test case, see Section 5.3.2. The Grassmannian indicator has two hyper-parameters that we fix for each test case in section 5.3: the number of nearest-neighbour cells, \(n_{\text{neigh}}=3\), and the local reduced dimension used to evaluate the \(L^{2}\) reconstruction error, \(r_{T}=1\). The number of nearest neighbours is chosen to deal with critical cases at the boundaries, and the closest neighbouring cells are selected based on the distance of the barycenters. The local reduced dimension \(r_{T}=1\) is chosen very small as the computations must be done on very few cells.
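A corresponding sketch of Definition 4, with a hypothetical helper collecting the dof indices of each cell and of its nearest neighbours:

```python
import numpy as np

def grassmannian_indicator(X: np.ndarray, neigh_dofs: dict, r_T: int = 1) -> dict:
    """X is the N_h x n snapshot matrix; neigh_dofs[T] collects the dofs of
    cell T and of its n_neigh nearest neighbours (by barycenter distance)."""
    I_G = {}
    for T, dofs in neigh_dofs.items():
        X_T = X[dofs, :]                                    # local snapshots
        U, _, _ = np.linalg.svd(X_T, full_matrices=False)
        U_T = U[:, :r_T]                                    # r_T local POD modes
        I_G[T] = np.linalg.norm(X_T - U_T @ (U_T.T @ X_T))  # Frobenius error (72)
    return I_G
```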
We remark that neither indicator guarantees that the obtained subdomains belong to the same connected component and, though this might be a problem in terms of connectivity and computational costs for the FOM, at the reduced level this does not affect the online computational costs. Nevertheless, in the tests we perform, the obtained subdomains are connected. Now, the assembly of the affine decomposition proceeds as explained in Section 5.1, with the difference that at least one local reduced basis and reduced operator is split between at least 2 subdomains/cores. A schematic block matrix representation of the procedure is shown in Figure 2.

### Numerical experiments

In this section, we test the presented methodology on different linear parametric partial differential equations: the Maxwell equations in the stationary regime in section 5.3.1 (**MS**), the compressible linear elasticity equations in section 5.3.2 (**CLE**) and the advection diffusion reaction equations in section 5.3.3 (**ADR**). We study two different parametrizations for the test cases **MS** and **CLE**: one with parameters that affect the whole domain, **MS1** and **CLE1**, and one with parameters that affect independently different subdomains, **MS2** and **CLE2**. We show a case in which DD-ROMs work effectively, **MS2**, and a case, **CLE2**, in which the performance is analogous to single domain ROMs, even if the parameters have a local influence. We test the effectiveness of the _a posteriori_ error estimates introduced in section 4.1, the accuracy of DD-ROMs for \(K=4\) and the results of repartitioning strategies with \(k=2\) subdomains. When performing a repartition of the computational domain \(\Omega\) in subdomains \(\{\Omega_{i}\}_{i=1}^{k}\) with reduced dimensions \(\{r_{\Omega_{i}}\}_{i=1}^{k}\), we call the subdomains with lower values of the variance indicator \(I_{\text{var}}\), see definition 3, _low variance_ regions, and those with lower values of the Grassmannian indicator \(I_{G}\), see definition 4, _low Grassmannian reconstruction error_ regions. The complementary subdomains are the _high variance_ and _high Grassmannian reconstruction error_ regions, respectively. We show a case (**CLE1**) in which the Grassmannian indicator detects a better partition in terms of local reconstruction error with respect to the variance indicator. We will observe that the relative errors in \(R\)-norm and energy norm and the \(L^{2}\) relative error estimator and \(L^{2}\) relative energy norm estimator are the most affected by the domain partitions. The open-source software library employed for the implementation of the full-order Friedrichs' systems discontinuous Galerkin solvers is deal.II[6], and we have used piecewise \(\mathbb{P}^{2}\) basis functions in all simulations. The partition of the computational domain is performed in deal.II through the open-source p4est package [14]. The distributed affine decomposition data structures are collected in the offline stage and exported in the sparse NumPy format [41]. The reduced order models and the repartition of the computational domains are implemented in Python with MPI-based parallel distributed computing via mpi4py[21], and with petsc4py[8] for solving the linear full-order systems through MUMPS[1], a sparse direct solver.

#### 5.3.1 Maxwell equations in stationary regime (MS)

We consider the parametric Maxwell equations in the stationary regime in \(d=3\) spatial dimensions, with \(m=6\) equations, on a torus \(\Omega\subset\mathbb{R}^{3}\) with inner radius \(r=0.5\) and outer radius \(R=2\), centered in \(\mathbf{0}\) and lying along the \((x,z)\) plane: \[\left(\begin{array}{c}\mu\mathbf{H}+\nabla\times\mathbf{E}\\ \sigma\mathbf{E}-\nabla\times\mathbf{H}\end{array}\right)=\left(\begin{array}[]{c}\mathbf{g}\\ \mathbf{f}\end{array}\right),\quad\forall\mathbf{x}\in\Omega, \tag{73}\] where homogeneous tangential boundary conditions \(\mathbf{n}\times\mathbf{E}=\mathbf{0}\) are applied with the boundary operator (18). We vary the parameters in the interval \(\boldsymbol{\rho}=(\mu,\sigma)\in[0.5,2]\times[0.5,3]\subset\mathbb{R}^{2}\), leading to \(\mu_{0}=\min(\mu,\sigma)\).
We consider the exact solutions \[\mathbf{H}_{\text{exact}}(\mathbf{x}) =-\frac{1}{\mu}\left(\frac{2xy}{\sqrt{x^{2}+z^{2}}},-\frac{4y^{2}\sqrt{x^{2}+z^{2}}+\sqrt{x^{2}+z^{2}}(-12(x^{2}+z^{2})-15)+32(x^{2}+z^{2})}{4(x^{2}+z^{2})},\frac{2xy}{\sqrt{x^{2}+z^{2}}}\right),\] \[\mathbf{E}_{\text{exact}}(\mathbf{x}) =\left(\frac{z}{\sqrt{x^{2}+z^{2}}},0,-\frac{x}{\sqrt{x^{2}+z^{2}}}\right)\cdot\left(r^{2}-y^{2}-\left(R-\sqrt{x^{2}+z^{2}}\right)^{2}\right).\] We remark that the exact solutions could be approximated with a linear reduced subspace of dimension \(1\), if we obtained the reduced basis with a partitioned SVD on the fields \((\mathbf{H},\mathbf{E})\) separately. We do not choose this approach and perform a monolithic SVD to test the convergence of the approximation of DD-ROMs with respect to the local reduced dimensions. The source terms are defined consequently as \[\mathbf{g}(\mathbf{x})=0,\qquad\mathbf{f}(\mathbf{x})=\sigma\mathbf{E}_{\text{exact}}-\nabla\times\mathbf{H}_{\text{exact}}. \tag{74}\] We consider two parametric spaces: \[\boldsymbol{\rho}=(\mu,\sigma)\in[0.5,2]\times[0.5,3] =\mathcal{P}_{1}\subset\mathbb{R}^{2},\qquad(\mathbf{MS1}) \tag{75a}\] \[\boldsymbol{\rho}=(\mu_{1},\sigma_{1},\mu_{2},\sigma_{2})\in[0.5,2]\times[0.5,3]\times[0.5,2]\times[0.5,3]=\mathcal{P}_{2}\subset\mathbb{R}^{4},\qquad(\mathbf{MS2}) \tag{75b}\] where in the second case, the parameters \(\mu\) and \(\sigma\) are piecewise constant: \[\mu=\begin{cases}\mu_{1},&x<0,\\ \mu_{2},&x\geq 0,\end{cases}\qquad\sigma=\begin{cases}\sigma_{1},&x<0,\\ \sigma_{2},&x\geq 0,\end{cases} \tag{76}\] where \(\mathbf{x}=(x,y,z)\in\Omega\subset\mathbb{R}^{3}\).

Figure 2: Repartitioning of the reduced block matrix shown in Figure 1 from \(K=4\) subdomains to \(k=2\) repartitioned subdomains. The projection from the full-order matrix \(A=\{A_{i,j}\}_{i,j=1}^{4}\) to the reduced matrix \(B=\{B_{i,j}\}_{i,j=1}^{2}\) is sketched. It is performed locally in a distributed memory setting; the re-ordering shown by the arrows is reported only to visually indicate which block structure of the full-order matrix \(A\) corresponds to the blocks of the reduced matrix \(B\).

In Figure 3, we show solutions for \(\mu=\sigma=1\) and for discontinuous values of the parameters: \(\mu_{1}=\sigma_{1}=1\) in \(\{x<0\}\cap\Omega\) and \(\mu_{2}=\sigma_{2}=2\) in \(\{x\geq 0\}\cap\Omega\). The FOM partitioned and DD-ROM repartitioned subdomains are shown in Figure 4. For **MS1**, we choose the variance indicator to repartition the computational domain in two subsets: \(P_{l}=20\%\) of the cells for the _low variance_ part and \(80\%\) for the _high variance_ part. For **MS2**, we split the computational domain in two parts with the Grassmannian indicator and \(P_{l}=50\%\). At the end of this subsection, a comparison of the effectiveness of DD-ROMs with and without discontinuous parameters will be performed; the associated error plots are reported in Figure 6 and Figure 7. We will see that, for this simple test case **MS2**, there is an appreciable improvement of the accuracy when the computational domain subdivisions match the regions \(\{x<0\}\cap\Omega\) and \(\{x\geq 0\}\cap\Omega\) in which \(\mu\) and \(\sigma\) are constant. Such a subdivision is detected by the Grassmannian indicator with \(P_{l}=50\%\), as shown in Figure 4 on the right.
This is the archetypal case in which DD-ROMs are employed successfully, in comparison with **MS1**, for which there is no significant improvement with respect to a classical global linear reduced basis. In Figure 5, we show how the different thresholds applied to the two indicators affect the reconstruction error on a reduced space with \(r_{\Omega_{i}}=3\). All the lines plot the local relative error computed on different subdomains (either one of the \(k\) DD-ROM subdomains or the whole domain). The \(x\)-axis shows the percentage of cells that are grouped into the low variance or low Grassmannian DD-ROM subdomain.

Figure 3: **MS**. Electric and magnetic fields of the Maxwell equations in stationary regime with Dirichlet homogeneous boundary conditions \(\mathbf{n}\times\mathbf{E}=\mathbf{0}\). The vectors of the magnetic and electric fields are scaled by \(0.5\) and \(2\) of their magnitude, respectively. **Left**: **MS1**, \(\mu=\sigma=1\), test case errors shown in Figure 6. **Right**: **MS2**, \(\mu=\sigma=1\) in \(\{x<0\}\cap\Omega\) and \(\mu=\sigma=2\) in \(\{x\geq 0\}\cap\Omega\), test case errors shown in Figure 7.

Figure 4: **MS**. **Left**: FOM computational domain partitioned in \(K=4\) subdomains inside deal.II. **Center**: **MS1**, DD-ROM repartition of the computational domain with \(k=2\) using the cellwise variance indicator \(I_{\text{var}}\), Definition 3: \(20\%\) of the cells belong to the _low variance_ part, represented in blue inside the torus, and the other \(80\%\) belong to the _high variance_ part, represented in red. **Right**: **MS2**, DD-ROM repartition with the Grassmannian indicator and \(P_{l}=50\%\). The computational domain is exactly split at the interfaces that separate the subdomains \(\{x<0\}\cap\Omega\) and \(\{x\geq 0\}\cap\Omega\), in which the parameters \(\mu\) and \(\sigma\) are constant.

We observe that the cellwise variance indicator is a good choice for the purpose of repartitioning the subdomains from \(K=4\) to \(k=2\). Indeed, it is possible to build a low variance subdomain (value of the abscissa \(20\%\) in Figure 5) with a low local relative reconstruction error (\(5\cdot 10^{-4}\)) with respect to the global one (\(8.6\cdot 10^{-4}\)). This means that, choosing the threshold \(P_{l}=20\%\) for the low variance subdomain, we should be able to use fewer reduced basis functions for that subdomain without affecting the global error too much. 

Test case **MS1**. We evaluate \(n_{\text{train}}=20\) training full-order solutions and \(n_{\text{test}}=80\) test full-order solutions, corresponding to a uniform independent sampling from the parametric domain \(\mathcal{P}_{1}\subset\mathbb{R}^{2}\). Figure 6 shows the results for the relative \(L^{2}\)-error and the relative errors in energy norm, with the associated _a posteriori_ estimators. The abscissae \(0,5,10,\ldots,95\) represent the \(n_{\text{train}}=20\) training parameters, while the other abscissae correspond to the \(n_{\text{test}}=80\) test parameters. For these studies, we have fixed the local reduced dimensions to \(r_{\Omega_{i}}=3,\;i=1,\ldots,K\) for \(K=4\), \(r_{\Omega}=3\) for the whole computational domain and \(r_{\Omega_{1}}=2,\;r_{\Omega_{2}}=3\) for the DD-ROM repartitioned case with \(k=2\). This choice of repartitioning, with \(20\%\) of _low variance_ cells and local reduced dimension \(r_{\Omega_{1}}=2\), does not significantly deteriorate the accuracy, and the errors almost coincide for all approaches.
However, unless the parameters \(\sigma,\mu\) assume different discontinuous values in the computational domain \(\Omega\), DD-ROMs are not advisable for this test case if the objective is improving the predictions' accuracy. 

Test case **MS2**. Similarly to the previous case, we evaluate \(n_{\text{train}}=20\) training full-order solutions and \(n_{\text{test}}=80\) test full-order solutions, corresponding to a uniform independent sampling from the parametric domain \(\mathcal{P}_{2}\subset\mathbb{R}^{4}\). As mentioned above, if we vary the parameters \(\boldsymbol{\rho}=(\mu,\sigma)\) discontinuously on the subdomains \(\{x\geq 0\}\cap\Omega\) and \(\{x<0\}\cap\Omega\), we obtain the results shown in Figure 7. It can be seen that repartitioning \(\Omega\) in \(k=2\) DD-ROM subdomains with the local Grassmannian indicator \(I_{G}\) and \(P_{l}=50\%\) produces effective DD-ROMs compared to the case of a single reduced solution manifold for the whole computational domain and to the DD-ROM with \(k=4\), for which the subdomains do not match \(\{x<0\}\cap\Omega\) and \(\{x\geq 0\}\cap\Omega\). In this case, we kept the local dimensions of the DD-ROM repartitioned case with \(k=2\) equal to \(r_{\Omega_{1}}=r_{\Omega_{2}}=3\). For this simple test case, there is an appreciable improvement of the accuracy for some test parameters with \(k=2\) instead of \(K=4\) or a classical global linear basis ROM. In Table 1, we list the computational times and speedups for a simulation with the different methods. For an error convergence analysis with respect to the size of the reduced space, we refer to Appendix C. 

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{FOM} & \multicolumn{3}{c|}{ROM} & \multicolumn{3}{c|}{DD-ROM} \\ \hline \(N_{h}\) & time & \(r\) & time & speedup & \(r_{i}\) & time & speedup \\ \hline \hline 6480 & 254.851 [ms] & 3 & 51.436 [\(\mu\)s] & \(\sim\) 495 & [3, 3, 3, 3] & 62.680 [\(\mu\)s] & \(\sim\) 406 \\ \hline \end{tabular} \end{table} Table 1: **MS1**. Average computational times and speedups for ROM and DD-ROM approaches for the Maxwell equations. The speedup is computed as the FOM computational time over the ROM one. The FOM runs in parallel with 4 cores, so “FOM time” refers to wallclock time. 

Figure 5: **MS1**. Local relative \(L^{2}\)-reconstruction errors of the snapshots matrix restricted to the two subdomains of the repartitioning performed with the indicator \(I_{\text{var}}\) (in red and light-blue), Definition 3, and \(I_{\text{G}}\) (in orange and blue), Definition 4. The relative \(L^{2}\)-reconstruction error attained on the whole domain is shown in black for the indicator \(I_{\text{var}}\) and in brown for the indicator \(I_{\text{G}}\). The local reduced dimensions used to evaluate the local reconstruction errors are \(r_{\Omega_{i}}=3,\;i=1,2\). 

Figure 6: **MS1**. Errors and estimators for the Maxwell equations corresponding to the \(n_{\text{train}}=20\) uniformly sampled training snapshots, corresponding to the abscissae \(0,5,10,\ldots,95\), and \(n_{\text{test}}=80\) uniformly sampled test snapshots, corresponding to the other abscissae. The reduced dimensions of the ROMs are \(\{r_{\Omega_{i}}\}_{i=1}^{K}=[3,3,3,3]\) for \(K=4\) partitions, \(r_{\Omega}=3\) for \(k=1\) partition, and \(\{r_{\Omega_{i}}\}_{i=1}^{k}=[2,3]\) for \(k=2\) partitions. For the case \(k=2\) we employed the cellwise variance indicator \(I_{\text{var}}\), Definition 3, with \(P_{l}=20\%\).
It can be seen that, even reducing the local dimension from \(3\) to \(2\) in one of the \(k=2\) repartitioned subdomains, the accuracy of the predictions does not decrease appreciably. 

Figure 7: **MS2**. Errors and estimators for the Maxwell equations with discontinuous \(\mu\) and \(\sigma\) corresponding to the \(n_{\text{train}}=20\) uniformly sampled training snapshots, corresponding to the abscissae \(0,5,10,\ldots,95\), and \(n_{\text{test}}=80\) uniformly sampled test snapshots, corresponding to the other abscissae. The reduced dimensions of the ROMs are \(\{r_{\Omega_{i}}\}_{i=1}^{K}=[3,3,3,3]\) for \(K=4\) partitions, \(r_{\Omega}=3\) for \(k=1\) partition, and \(\{r_{\Omega_{i}}\}_{i=1}^{k}=[3,3]\) for \(k=2\) partitions. For the case \(k=2\) we employed the cellwise local Grassmannian dimension indicator \(I_{G}\), Definition 4, with \(P_{l}=50\%\). The subdivisions detected exactly match the subdomains \(\{x<0\}\cap\Omega\) and \(\{x\geq 0\}\cap\Omega\) in which the parameters are constant. An improvement of the predictions can be appreciated for some test parameters when employing \(k=2\) repartitions. 

#### 5.3.2 Compressible linear elasticity (CLE)

Next, we consider the parametric compressible linear elasticity system in \(d=3\) physical dimensions with a cylindrical shell along the z-axis as domain: the inner radius is \(1\), the outer radius \(3\), the height \(10\), and the base is centered in \(\mathbf{0}\). The \(m=12\) equations of the FS are \[\left(\begin{array}{c}\boldsymbol{\sigma}-\mu_{1}(\nabla\cdot\mathbf{u})\mathbb{I}_{3}-2\mu_{2}\frac{\left(\nabla\mathbf{u}+(\nabla\mathbf{u})^{t}\right)}{2}\\ -\frac{1}{2}\nabla\cdot(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{t})+\mu_{3}\mathbf{u}\end{array}\right)=\left(\begin{array}{c}0\\ \mathbf{f}\end{array}\right),\quad\forall\mathbf{x}\in\Omega\subset\mathbb{R}^{3}, \tag{77}\] where \(\boldsymbol{\rho}=(\mu_{1},\mu_{2},\mu_{3})\in[100,1000]^{2}\times\{1\}=\mathcal{P}\subset\mathbb{R}^{3}\) and \(\mathbf{f}=(0,-1,0)\). The system can be rewritten as a FS as in (20). We define the boundaries \[\Gamma_{D}=\partial\Omega\cap\{z=0\},\qquad\Gamma_{N}=\partial\Omega\setminus\Gamma_{D}. \tag{78}\] Mixed boundary conditions are applied with the boundary operator (24): homogeneous Dirichlet boundary conditions are imposed on \(\Gamma_{D}\) and homogeneous Neumann boundary conditions on \(\Gamma_{N}\). We consider two parametric spaces: \[\boldsymbol{\rho} =(\mu_{1},\mu_{2})\in[100,1000]^{2}=\mathcal{P}_{1}\subset\mathbb{R}^{2},\qquad\mbox{\bf(CLE1)} \tag{79a}\] \[\boldsymbol{\rho} =(\mu_{1},\mu_{2},f_{1},f_{2})\in[100,1000]^{2}\times[-2,2]^{2}=\mathcal{P}_{2}\subset\mathbb{R}^{4},\qquad\mbox{\bf(CLE2)} \tag{79b}\] where in the second case, the source term \(\mathbf{f}\) is piecewise constant: \[\mathbf{f}=\begin{cases}f_{1}\cdot(0,-1,0),&z<5,\\ f_{2}\cdot(0,-1,0),&z\geq 5.\end{cases} \tag{80}\] We show two sample solutions in Figure 8: for \(\mu_{1}=\mu_{2}=1000\) (**CLE1**) and for \(\mu_{1}=\mu_{2}=1000\), \(f_{1}=1\) and \(f_{2}=-1\) (**CLE2**), on the left and on the right, respectively. The partitioned and repartitioned subdomains are shown in Figure 9. For the first case, **CLE1**, we employ a mesh of \(24\) cells and \(7776\) dofs; for the second, **CLE2**, a mesh of \(60\) cells and \(19440\) dofs. Test case **CLE1**.
This test case presents no region in which the restricted solutions are approximable with a constant field, as would be detected by the variance indicator: as shown in Figure 10, the local relative \(L^{2}\)-reconstruction error in the region with _low variance_, assigned by \(I_{\mathrm{var}}\), deteriorates from the value \(2\cdot 10^{-3}\) at the abscissae \(0\%\) and \(100\%\) to \(1\cdot 10^{-2}\) at the abscissa \(4\%\). Nonetheless, although the parametric solutions are not efficiently approximable with a constant field, they are well represented by a one-dimensional linear subspace in the region located by the cellwise Grassmannian dimension indicator \(I_{G}\) with \(P_{l}=12\%\). The associated _low local Grassmannian dimension_ region for \(P_{l}=12\%\) is shown in blue in Figure 9. Also in this test case, the employment of DD-ROMs is not advisable, since there are only small gains in the local relative \(L^{2}\)-reconstruction error for the _low local Grassmannian dimension_ region (values around \(3\cdot 10^{-3}\), in orange at the abscissa \(P_{l}=12\%\) in Figure 10). The choice of local reduced dimensions \(r_{\Omega_{1}}=2\) and \(r_{\Omega_{2}}=3\) does not greatly affect the errors shown in Figure 11. Also in this case, we evaluate \(n_{\mathrm{train}}=20\) training full-order solutions and \(n_{\mathrm{test}}=80\) test full-order solutions, corresponding to a uniform independent sampling from the parametric domain \(\mathcal{P}\subset\mathbb{R}^{3}\). Also for these studies, we have fixed the local dimensions to \(r_{\Omega_{i}}=3,\ i=1,\ldots,K\) for \(K=4\), \(r_{\Omega}=3\) for the whole computational domain and \(r_{\Omega_{1}}=2,\ r_{\Omega_{2}}=3\) for the repartitioned case with \(k=2\). 

Figure 8: **CLE**. **Left:** solution of the compressible linear elasticity FS **CLE1** with parameter values \(\mu_{1}=\mu_{2}=1000\). The cylindrical shell displacement \(\mathbf{u}\), and with a different colorbar also the field \(\sigma\mathbf{e}_{\mathbf{z}}\), named sigma_3, are shown. At the extremity close to \(z=0\) homogeneous Dirichlet boundary conditions are imposed. **Right:** solution of the test case **CLE2** with discontinuous values of the source terms along the computational domain \(\{z<5\}\cap\Omega\) and \(\{z\geq 5\}\cap\Omega\): \(\mu_{1}=\mu_{2}=1000\), \(f_{1}=1\) and \(f_{2}=-1\). 

Test case **CLE2**. Similarly to the previous case, we evaluate \(n_{\rm train}=20\) training full-order solutions and \(n_{\rm test}=80\) test full-order solutions, corresponding to a uniform independent sampling from the parametric domain \(\mathcal{P}_{2}\subset\mathbb{R}^{4}\). This time, if we vary the parameters \(f_{1}\) and \(f_{2}\) inside the different subdomains \(\{z\geq 5\}\cap\Omega\) and \(\{z<5\}\cap\Omega\), we obtain the results shown in Figure 12. It can be seen that repartitioning \(\Omega\) in \(k=2\) DD-ROM subdomains with the local Grassmannian indicator \(I_{G}\) and \(P_{l}=50\%\) does not produce more accurate DD-ROMs compared to the case of a single reduced solution manifold for the whole computational domain and to the DD-ROM with \(k=4\). In this case, we kept the local dimensions of the DD-ROM repartitioned case with \(k=2\) equal to \(r_{\Omega_{1}}=r_{\Omega_{2}}=3\). For this test case, there is no appreciable improvement in accuracy for \(k=2\) instead of \(K=4\) or a classical global linear basis ROM.
The reason is that, even if the parameters \(f_{1}\) and \(f_{2}\) affect different subdomains of \(\Omega\), the solutions on the whole domain are still well correlated. Differently from the previous test case **MS2** from section 5.3.1, this is a typical case in which DD-ROMs are not effective, even if the parametrization affects independently two regions of the whole domain \(\Omega\). In Table 2, we list the computational times and speedups for a simulation with the different methods. For an error analysis with respect to the size of the reduced space, we refer to Appendix C. 

Figure 10: **CLE1**. Local relative \(L^{2}\)-reconstruction errors of the snapshots matrix for the elasticity equations restricted to the two subdomains of the repartitioning performed with the indicator \(I_{\rm var}\) (in red and light-blue), Definition 3, and \(I_{\rm G}\) (in orange and blue), Definition 4. The relative \(L^{2}\)-reconstruction error attained on the whole domain is shown in black for the indicator \(I_{\rm var}\) and in brown for the indicator \(I_{\rm G}\). The local reduced dimensions used to evaluate the local reconstruction errors are \(r_{\Omega_{i}}=3,\ i=1,2\). 

Figure 9: **CLE**. **Left:** computational subdomains partitioned in \(K=4\) subdomains by petsc4py inside deal.II. **Center:** test case **CLE1** repartition of the computational domain with \(k=2\) using the cellwise Grassmannian dimension indicator \(I_{\rm G}\), Definition 4: \(12\%\) of the cells belong to the _low local Grassmannian dimension_ part, represented in blue inside the cylindrical shell, and the other \(88\%\) belong to the _high local Grassmannian dimension_ part, represented in red. **Right:** test case **CLE2** repartition of the computational domain with \(k=2\) using the cellwise Grassmannian dimension indicator \(I_{\rm G}\) and \(P_{l}=50\%\). 

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{FOM} & \multicolumn{3}{c|}{ROM} & \multicolumn{3}{c|}{DD-ROM} \\ \hline \(N_{h}\) & time & \(r\) & time & speedup & \(r_{i}\) & time & speedup \\ \hline \hline 7776 & 411.510 [ms] & 3 & 80.444 [\(\mu\)s] & \(\sim 5115\) & [3, 3, 3, 3] & 85.108 [\(\mu\)s] & \(\sim 4835\) \\ \hline 19440 & 2.080 [s] & 3 & 69.992 [\(\mu\)s] & \(\sim 29718\) & [3, 3, 3, 3] & 94.258 [\(\mu\)s] & \(\sim 22067\) \\ \hline \end{tabular} \end{table} Table 2: **CLE**. Average computational times and speedups for ROM and DD-ROM approaches for the elasticity equations. The speedup is computed as the FOM computational time over the ROM one. The FOM runs in parallel with 4 cores, so “FOM time” refers to wallclock time. The first row corresponds to test case **CLE1**, the second to test case **CLE2**. 

Figure 11: **CLE1**. Errors and estimators for the elasticity equations corresponding to the \(n_{\text{train}}=20\) uniformly sampled training snapshots, corresponding to the abscissae \(0,5,10,\ldots,95\), and \(n_{\text{test}}=80\) uniformly sampled test snapshots, corresponding to the other abscissae. The reduced dimensions of the ROMs are \(\{r_{\Omega_{i}}\}_{i=1}^{K}=[3,3,3,3]\) for \(K=4\) partitions, \(r_{\Omega}=3\) for \(k=1\) partition, and \(\{r_{\Omega_{i}}\}_{i=1}^{k}=[2,3]\) for \(k=2\) partitions. For the case \(k=2\) we employed the cellwise local Grassmannian dimension indicator \(I_{G}\), Definition 4, with \(P_{l}=12\%\). 

Figure 12: **CLE2**.
Errors and estimators for the elasticity equations corresponding to the \(n_{\text{train}}=20\) uniformly sampled training snapshots, corresponding to the abscissae \(0,5,10,\ldots,95\), and \(n_{\text{test}}=80\) uniformly sampled test snapshots, corresponding to the other abscissae. The reduced dimensions of the ROMs are \(\{r_{\Omega_{i}}\}_{i=1}^{K}=[3,3,3,3]\) for \(K=4\) partitions, \(r_{\Omega}=3\) for \(k=1\) partition, and \(\{r_{\Omega_{i}}\}_{i=1}^{k}=[3,3]\) for \(k=2\) partitions. For the case \(k=2\) we employed the cellwise Grassmannian dimension indicator \(I_{G}\), Definition 4, with \(P_{l}=50\%\). 

#### 5.3.3 Scalar concentration advected by an incompressible flow (ADR)

We consider the parametric semi-linear advection diffusion reaction equation in \(d=2\) dimensions, with \(m=3\) equations, rewritten in mixed form: \[\begin{cases}\kappa^{-1}\sigma+\nabla u=0,&\text{ in }\Omega,\\ \nabla\cdot\sigma+\mathbf{v}\cdot\nabla u+u=f,&\text{ in }\Omega,\\ \sigma\cdot\mathbf{n}=0,&\text{ on }\Gamma_{N}\cup\Gamma_{D,0},\\ u=\sum_{i=1}^{P}\mu_{i}\chi_{I_{i}},&\text{ on }\Gamma_{D},\end{cases} \tag{81}\] where \(\kappa=0.05\) is fixed for this study, \[\boldsymbol{\rho}=(\mu_{1},\ldots,\mu_{P})\in\mathcal{P}\subset\mathbb{R}^{P},\qquad\mathcal{P}=\{\boldsymbol{\rho}\in\{0,1\}^{P}\,|\,\mu_{i}=1,\ \mu_{j}=0,\ \forall j\in\{0,\ldots,99\}\backslash\{i\}\}, \tag{82}\] and \(\{\chi_{I_{i}}\}_{i=0}^{N_{\text{par}}}\) are the characteristic functions of the intervals \(I_{i}=\{0\}\times[1.5-0.01\,i,\,2.5+0.01\,i]\), symmetric about \(y=2\), with \(N_{\text{par}}=99\). The domain is shown in Figure 13. The advection velocity \(\mathbf{v}\) is obtained from the following incompressible Navier-Stokes equations at \(t=2\)s: \[\begin{cases}\partial_{t}\mathbf{v}+\mathbf{v}\cdot\nabla\mathbf{v}-\nu\Delta\mathbf{v}+\nabla p=\mathbf{0},&\text{ in }\Omega\\ \nabla\cdot\mathbf{v}=0,&\text{ in }\Omega\\ \mathbf{v}\times\mathbf{n}=0,\ p=0,&\text{ on }\Gamma_{N}\\ \mathbf{v}=0,&\text{ on }\Gamma_{D,0}\\ \mathbf{v}(t=0)=\mathbf{v}_{b},&\text{ on }\Gamma_{D}\end{cases} \tag{83}\] with initial and boundary conditions on \(\Gamma_{D}\), \(\mathbf{v}_{b}=\mathbf{v}(x,y,t=0)=(6y(4.1-y)/4.1^{2},0)\in\mathbb{R}^{2}\), and \(\nu\in\mathbb{R}\) such that the Reynolds number is \(Re=100\). The implementation is the one of step-35 of the tutorials of the deal.II library [6]. Homogeneous Neumann boundary conditions on \(\Gamma_{N}\cup\Gamma_{D,0}\) and non-homogeneous Dirichlet boundary conditions on \(\Gamma_{D}\) are applied with the boundary operator (30). A sample solution is shown in Figure 14 for \(\mu_{i}=0,\ i=0,\ldots,98\), and \(\mu_{99}=1\), \(\kappa=0.05\). We remark that, for the moment, we consider only the fixed value \(\kappa=0.05\). For the convergence of ROMs to vanishing viscosity solutions with graph neural networks, see Section 6. 

Figure 14: **ADR**. **Left**: scalar concentration \(u\) of the advection diffusion reaction equations (81), with \(\mu_{i}=0,\ i=0,\ldots,98\), and \(\mu_{99}=1\), \(\kappa=0.05\). **Right**: advection velocity employed for the FS (81), obtained as the velocity \(\mathbf{v}\) from the INS (83) at \(t=2\)s. 

Figure 13: **ADR**. Computational domain of the advection diffusion reaction equation FS (81) and of the incompressible Navier-Stokes equations (83). The boundary conditions specified for each system are reported in the text. 

The FOM partitioned and DD-ROM repartitioned subdomains are shown in Figure 15.
We choose the variance indicator to repartition the computational domain in two subsets: \(21\%\) of the cells for the _low variance_ part and \(79\%\) for the _high variance_ part. With respect to the previous test cases, the change in the order of magnitude of the local relative \(L^{2}\)-reconstruction error in Figure 16 is now evident, especially for the cellwise variance indicator \(I_{\text{var}}\). We expect that lowering the local reduced dimension of the _low variance_ repartitioned region will not appreciably affect the accuracy. We use \(r_{\Omega}=5\) reduced basis functions for the monodomain approach, as well as \(r_{\Omega_{i}}=5\) for \(i=1,\ldots,K\) for the FOM partitioned subdomains. In the DD-ROM approach, we can use even \(r_{\Omega_{1}}=2\) and \(r_{\Omega_{2}}=5\) for the lower and higher variance subdomains, respectively, without affecting the error of the ROM solution, as we see in Figure 17. Indeed, the accuracy in terms of \(L^{2}\) and energy norms is essentially identical for all approaches, even with such a small number of basis functions for the DD-ROM one. Again, we evaluate \(n_{\text{train}}=20\) training full-order solutions and \(n_{\text{test}}=80\) test full-order solutions, corresponding to the parameter choices \(\mu_{i}=1\) and \(\mu_{\bar{i}}=0\), for \(i=0,\ldots,99\), with fixed viscosity \(\kappa=0.05\), where \(\bar{i}\) represents all the indices in \(\{0,\ldots,99\}\) except \(i\). The training snapshots correspond to \(i=0,5,10,\ldots,95\). For these studies we have fixed the local dimensions to \(r_{\Omega_{i}}=5,\ i=1,\ldots,K\) for \(K=4\), \(r_{\Omega}=5\) for the whole computational domain and \(r_{\Omega_{1}}=2,\ r_{\Omega_{2}}=5\) for the repartitioned case with \(k=2\), as mentioned. In Table 3, we list the computational times and speedups for a simulation with the different methods. 

## 6 Graph Neural Networks approximating Vanishing Viscosity solutions

In this section, we want to highlight how the well-known concept of vanishing viscosity solutions can be related to FS. In hyperbolic problems, the uniqueness of the weak solution is not guaranteed, even for very simple problems, e.g. the inviscid Burgers' equation. In order to filter out the physically relevant solution, the concept of vanishing viscosity solution has been introduced, _inter alia_ [37], and, consequently, vanishing viscosity methods have been developed, e.g. [64, 26]. 

Figure 16: **ADR**. Local relative \(L^{2}\)-reconstruction errors of the snapshots matrix of the advection diffusion reaction equation restricted to the two subdomains of the repartitioning performed with the indicator \(I_{\text{var}}\) (in red and light-blue), Definition 3, and \(I_{\text{G}}\) (in orange and blue), Definition 4. The relative \(L^{2}\)-reconstruction error attained on the whole domain is shown in black for the indicator \(I_{\text{var}}\) and in brown for the indicator \(I_{\text{G}}\). The local reduced dimensions used to evaluate the local reconstruction errors are \(r_{\Omega_{i}}=3,\ i=1,2\). 

Figure 15: **ADR**. Domain of the advection diffusion reaction equation. **Left**: computational subdomains partitioned in \(K=4\) subdomains by petsc4py inside deal.II. **Right**: DD-ROM repartition of the computational domain with \(k=2\) using the cellwise variance indicator \(I_{\text{var}}\), Definition 3: \(21\%\) of the cells belong to the _low variance_ part, represented in blue, and the other \(79\%\) belong to the _high variance_ part, in red.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{FOM} & \multicolumn{3}{c|}{ROM} & \multicolumn{3}{c|}{DD-ROM} \\ \hline \(N_{h}\) & time & \(r\) & time & speedup & \(r_{i}\) & time & speedup \\ \hline \hline 131328 & 3.243 [s] & 5 & 79.112 [\(\mu\)s] & \(\sim 40992\) & [5, 5, 5, 5] & 59.912 [\(\mu\)s] & \(\sim 54129\) \\ \hline \end{tabular} \end{table} Table 3: **ADR**. Average computational times and speedups for ROM and DD-ROM approaches for the advection diffusion reaction equation. The speedup is computed as the FOM computational time over the ROM one. The FOM runs in parallel with 4 cores, so “FOM time” refers to wallclock time. 

Figure 17: **ADR**. Errors and estimators for the advection diffusion reaction equation corresponding to the \(n_{\text{train}}=20\) uniformly sampled training snapshots, corresponding to the abscissae \(0,5,10,\ldots,95\), and \(n_{\text{test}}=80\) uniformly sampled test snapshots, corresponding to the other abscissae. The reduced dimensions of the ROMs are \(\{r_{\Omega_{i}}\}_{i=1}^{K}=[5,5,5,5]\) for \(K=4\) partitions, \(r_{\Omega}=5\) for \(k=1\) partition, and \(\{r_{\Omega_{i}}\}_{i=1}^{k}=[2,5]\) for \(k=2\) partitions. For the case \(k=2\) we employed the cellwise variance indicator \(I_{\text{var}}\), Definition 3, with \(P_{l}=21\%\). 

We will consider the topic of vanishing viscosity solutions from a different perspective, that of model order reduction. It is known that slowly decaying Kolmogorov n-width solution manifolds result in ineffective linear reduced order models. Theoretically, the origin of this problem rests mainly on the regularity of the parameter-to-solution map [18, 19] and, with less generality, on the nature of some PDEs (e.g. advection dominated PDEs, nonlinearities, complex dynamics), on the size of the parameter space, and on the smoothness of the parametric initial data or parametric boundary conditions [5]. A possible way to obtain more approximable solution manifolds is through regularization or filtering [88, 85], e.g. adding artificial viscosity. Heuristically, the objective is to smooth out the parametric solutions of the PDEs, for example removing sharp edges, local features and complex patterns, with the aim of designing more efficient ROMs for the filtered solution manifolds. Then, the linear ROMs will be applied at different levels of _regularization_, remaining in the regime where they have good approximation properties. Finally, the original (vanishing viscosity) solutions will be recovered with a regression method from the succession of filtered linear ROMs. This is realized without the need to directly reduce the original solution manifold with a linear subspace, thus avoiding the problem of its approximability with a linear subspace and the slow Kolmogorov n-width decay. In our case, we consider regularization by viscosity levels: the vanishing viscosity solution \(u_{\nu}\) with viscosity \(0\leq\nu\ll 1\) will be recovered as the limit \(\lim_{i\to\infty}u_{\nu_{i}}=u_{\nu}\) of a potentially infinite succession of viscosity levels \(\{\nu_{i}\}_{i=0}^{\infty},\;\nu_{0}>\nu_{1}>\cdots>0\), each associated to its efficient reduced order model. In practice, \(\{\nu_{i}\}_{i=0}^{\infty}\approx\{\nu_{i}\}_{i=0}^{q}\), where \(q\) is the number of additional viscosity ROMs. The connection with multi-fidelity and super-resolution methods [35, 56] is clear.
The rationale of the approach is supported by the proofs of convergence to vanishing viscosity solutions of hyperbolic PDEs under various hypotheses [65, 57, 25, 38]. The framework is general and can be applied in particular to FS. We will achieve this for the advection-diffusion-reaction problem by changing the viscosity constant \(\mathbb{R}\ni\kappa>0\) in (81). While this choice is specific to the model we are considering, a more general approach could consist in adding a viscous dissipative term to the generic FS, obtaining another FS: \[\begin{cases}Au=f+\kappa\Delta u,&\text{in }\Omega,\\ (\mathcal{D}-\mathcal{M})(u-g)=0,&\text{on }\partial\Omega,\end{cases}\qquad\to\qquad\begin{cases}\kappa^{-1}\sigma+\nabla u=0,&\text{in }\Omega,\\ \nabla\cdot\sigma+Au=f,&\text{in }\Omega,\\ (\mathcal{D}-\mathcal{M})(u-g)=0,&\text{on }\partial\Omega,\end{cases} \tag{84}\] recalling that the additional degrees of freedom are needed only for the high viscosity ROMs and FOMs (to collect the snapshots) and not for the full-order vanishing viscosity solutions. This is only an example of how the procedure could be applied to any FS; in fact, the methodology is not designed specifically for FS. The overhead of the methodology is related to the evaluation of the snapshots, the assembly of each level of viscosity \(\{\nu_{i}\}_{i=0}^{q}\), and the computational costs of the regression method. We remark that the full-order matrices of the affine decomposition of each \(\{ROM_{\nu_{i}}\}_{i=0}^{q}\) are the same. This is the price necessary to tackle, with our approach, the realization of reduced order models of parametric PDEs affected by a slow Kolmogorov n-width decay. With respect to standard techniques for nonlinear manifold approximation, the proposed one is more interpretable, being a mathematical limit of a succession of solutions to the vanishing viscosity one. Moreover, it has a faster training stage, relying on the efficiency of the \(\{ROM_{\nu_{i}}\}_{i=0}^{q}\). To the authors' knowledge, cheap analytical ways to obtain the vanishing viscosity solution from a finite succession of high viscosity ones are not available, so we will rely on data-driven regression methods. 

### Graph neural networks augmented with differential operators

Generally, machine learning (ML) architectures are employed in surrogate modelling to approximate nonlinear solution manifolds; otherwise, linear subspaces are always preferred. The literature on the subject is vast and there are many frameworks that develop surrogate models with ML architectures. They promise to define data-driven reduced order models that infer solutions for new unseen parameters, provided that there are enough data to train such architectures. This depends crucially on the choice of the encoding and of the inductive biases employed to represent the involved datasets: the training computational time and the amount of training data can change drastically. On this matter, convolutional autoencoders (CNNs) are one of the most efficient architectures to approximate nonlinear solution manifolds [59] for data structured on Cartesian grids, mainly thanks to their shift-equivariance property. For fields on unstructured meshes, the natural choice is graph neural networks (GNNs). Since their introduction, GNN architectures from the ML community have been enriched with physical inductive biases and other tools from numerical analysis. We want to test one of the first implementations and modifications of GNNs [81].
We also want to remark that, in the literature, there are still very few test cases of ROMs that employ GNNs with more than \(50000\) degrees of freedom. The difficulty arises when the training is performed on large meshes, hence the need for tailored approaches. The majority of GNNs employed for surrogate modelling are included in autoencoders [33, 68] or are directly parametrized to infer the unseen solution with a forward evaluation. These architectures may become heavy, especially for non-academic test cases. One way to tackle the problem of parametric model order reduction of slow Kolmogorov n-width solution manifolds is to employ GNNs only to recover the high-fidelity solution in a multi-fidelity setting, through super-resolution. Since efficient ROMs are employed to obtain the lower levels of fidelity (high viscosity solutions in our case), the solution manifold dimension reduction is performed only at those levels, avoiding the costly and memory-heavy training of GNN autoencoders. We describe the implementation of augmented GNNs as in [81], with the difference that we only need to train a map from a collection of DD-ROM solutions to the full-order vanishing viscosity solution, and not an autoencoder with pooling and unpooling layers to perform dimension reduction. The GNN we employ is therefore rather thin with respect to the autoencoder GNNs used to perform dimension reduction; its details are reported in Table 4. We represent with \[\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{W}),\quad\mathcal{V}\in\mathbb{R}^{n_{\text{nodes}}\times f},\quad\mathcal{E}\in\mathbb{R}^{n_{\text{edges}}\times 2},\quad\mathcal{W}\in\mathbb{R}^{n_{\text{attr}}\times d}, \tag{85}\] a graph with node features \(\mathcal{V}\), edges \(\mathcal{E}\) and edge attributes \(\mathcal{W}\). The number \(f\) represents the nodal feature dimension. We denote with \(\mathbf{e}_{ij}=(i,j)\in\mathbb{N}^{2}\) the edge between the nodes \(\mathbf{n}_{i},\mathbf{n}_{j}\in\mathbb{R}^{f}\): \(\mathbf{e}_{ij}\) corresponds to a row of \(\mathcal{E}\), and \(\mathbf{n}_{i},\mathbf{n}_{j}\) correspond to the \(i\)-th and \(j\)-th rows of \(\mathcal{V}\), for \(i,j=1,\ldots,n_{\text{nodes}}\). Similarly, \(\boldsymbol{\omega}_{ij}\) represents the edge attributes of the edge \(\mathbf{e}_{ij}\). We have \(n_{\text{edges}}=n_{\text{attr}}\). GNNs rely on a message passing scheme composed of propagation and aggregation steps; provided that the graph is sparsely connected, their implementation is efficient. When the graph is supported on a mesh, it is natural to consider the generalized support points of the finite element spaces as nodes of the graph and the sparsity pattern of the linear system associated to the numerical model as the adjacency matrix of the graph. We employ only Lagrangian nodal bases of discontinuous finite element spaces, but the framework can be applied to more general finite element spaces. As edge attributes \(\boldsymbol{\omega}_{ij}\), we employ the difference \(\boldsymbol{\omega}_{ij}=\mathbf{x}_{i}-\mathbf{x}_{j}\in\mathbb{R}^{d}\) between the spatial coordinates associated to the nodes \(\mathbf{n}_{i},\mathbf{n}_{j}\in\mathbb{R}^{f}\). The nodes adjacent to node \(\mathbf{n}_{i}\) are represented with the set \(\mathcal{N}_{\text{neigh}(i)}\), for all \(i=1,\ldots,n_{\text{nodes}}\).
We consider only the two following types of GNN layers: a continuous kernel-based convolutional operator \(l_{\text{NNconv}}\)[36, 78] and the GraphSAGE operator \(l_{\text{SAGEconv}}\)[40], \[\mathcal{V}_{\text{out}} =l_{\text{NNconv}}(\mathcal{V}_{\text{inp}},\mathcal{E},\mathcal{W})=\mathcal{V}_{\text{inp}}W_{3}+\text{Avg}_{1}(\mathcal{V}_{\text{inp}},h(\mathcal{W}))+\mathbf{b}_{3},\quad h(\mathcal{W})=\text{ReLU}(\mathcal{W}W_{1}+\mathbf{b}_{1})W_{2}+\mathbf{b}_{2}, \tag{86}\] \[\mathcal{V}_{\text{out}} =l_{\text{SAGEconv}}(\mathcal{V}_{\text{inp}},\mathcal{E},\mathcal{W})=\mathcal{V}_{\text{inp}}W_{6}+\text{Avg}_{2}(\text{ReLU}(\mathcal{V}_{\text{inp}}W_{4}+\mathbf{b}_{4}))W_{5}+\mathbf{b}_{5}, \tag{87}\] with weight matrices of dimensions \[W_{1}\in\mathbb{R}^{2\times l},\ W_{2}\in\mathbb{R}^{l\times(f_{\text{inp}}\times f_{\text{out}})},\ W_{3}\in\mathbb{R}^{f_{\text{inp}}\times f_{\text{out}}},\ W_{4}\in\mathbb{R}^{f_{\text{inp}}\times f_{\text{inp}}},\ W_{5},W_{6}\in\mathbb{R}^{f_{\text{inp}}\times f_{\text{out}}}, \tag{88}\] \[\mathbf{b}_{1}\in\mathbb{R}^{l},\ \mathbf{b}_{2}\in\mathbb{R}^{(f_{\text{inp}}\times f_{\text{out}})},\ \mathbf{b}_{3},\mathbf{b}_{5}\in\mathbb{R}^{f_{\text{out}}},\ \mathbf{b}_{4}\in\mathbb{R}^{f_{\text{inp}}}, \tag{89}\] \[h(\mathcal{W})=\{W_{s}^{h}\}_{s=1}^{n_{\text{edges}}}\in\mathbb{R}^{n_{\text{edges}}\times(f_{\text{inp}}\times f_{\text{out}})},\quad W_{s}^{h}\in\mathbb{R}^{f_{\text{inp}}\times f_{\text{out}}},\ \forall s=1,\ldots,n_{\text{edges}}, \tag{90}\] and with the following average operators used as aggregation operators, \[(\text{Avg}_{1}(\mathcal{V},\{W_{s}^{h}\}_{s=1}^{n_{\text{edges}}}))_{i}=\frac{1}{|\mathcal{N}_{\text{neigh}(i)}|}\sum_{s\in\mathcal{N}_{\text{neigh}(i)}}W_{s}^{h}\mathbf{n}_{s},\qquad(\text{Avg}_{2}(\mathcal{V}))_{i}=\frac{1}{|\mathcal{N}_{\text{neigh}(i)}|}\sum_{s\in\mathcal{N}_{\text{neigh}(i)}}\mathbf{n}_{s},\qquad\forall i=1,\ldots,n_{\text{nodes}}, \tag{91}\] where \(\mathcal{V}_{\text{inp}}\in\mathbb{R}^{n_{\text{nodes}}\times f_{\text{inp}}},\mathcal{V}_{\text{out}}\in\mathbb{R}^{n_{\text{nodes}}\times f_{\text{out}}}\) are the input and output nodes with feature dimensions \(f_{\text{inp}},f_{\text{out}}\). We remark that, differently from graph neural networks with heterogeneous layers, i.e., with changing mesh structure between different layers, in this network the edges \(\mathcal{E}\) and the edge attributes \(\mathcal{W}\) are kept fixed; only the node features change. The feed-forward neural network \(h:\mathbb{R}^{d}\to\mathbb{R}^{f_{\text{inp}}\times f_{\text{out}}}\), applied edge-wise, defines a weight matrix \(W_{s}^{h}\in\mathbb{R}^{f_{\text{inp}}\times f_{\text{out}}}\) for each edge \(s=1,\ldots,n_{\text{edges}}\). The number \(l\) is the hidden layer dimension of \(h\). The aggregation operators are defined from the edges \(\mathcal{E}\), which are related to the sparsity pattern of the linear system of the numerical model. So, the aggregation is performed on the stencils of the numerical scheme, for every layer of the GNN architecture in Table 4. Many variants are possible; in particular, we do not employ pooling and unpooling layers to move between different meshes: we always consider the same adapted mesh.
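The architecture of Table 4 can be assembled directly with the NNConv and SAGEConv layers of PyTorch Geometric; the following is a minimal sketch with the layer sizes of Table 4 and mean aggregation as in (91), not the exact training script.

```python
import torch
from torch import nn
from torch_geometric.nn import NNConv, SAGEConv

class MSAGNN(nn.Module):
    """Mesh-supported augmented GNN of Table 4 (f_inp = 3 * n_aug)."""
    def __init__(self, f_inp: int):
        super().__init__()
        # h: edge attributes (dim 2) -> f_inp x 18 weight matrix, eq. (86)
        h_in = nn.Sequential(nn.Linear(2, 12), nn.ReLU(), nn.Linear(12, f_inp * 18))
        self.nnconv_in = NNConv(f_inp, 18, h_in, aggr='mean')
        self.sage = nn.ModuleList(
            [SAGEConv(d_in, d_out, aggr='mean')            # eq. (87)
             for d_in, d_out in [(18, 21), (21, 24), (24, 27), (27, 30)]])
        h_out = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 30 * 1))
        self.nnconv_out = NNConv(30, 1, h_out, aggr='mean')

    def forward(self, x, edge_index, edge_attr):
        x = torch.relu(self.nnconv_in(x, edge_index, edge_attr))
        for layer in self.sage:
            x = torch.relu(layer(x, edge_index))
        return self.nnconv_out(x, edge_index, edge_attr)   # no final activation
```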
Indeed, in the majority of applications of GNNs to physical models, the input feature dimension is the dimension of the physical fields considered, and it is usually very small. Considering FS, the fields' dimension is \(m\). To augment the input features, we will filter them with some differential operators discretized on the same mesh on which the GNN is supported. We consider the following differential operators \[\Delta:V_{h}(\Omega) \to V_{h}(\Omega),\] (Laplace operator) (92) \[\mathbf{v}\cdot\nabla:V_{h}(\Omega) \to V_{h}(\Omega),\] (Advection operator) (93) \[\nabla_{x}:V_{h}(\Omega) \to V_{h}(\Omega),\] (Gradient x-component) (94) \[\nabla_{y}:V_{h}(\Omega) \to V_{h}(\Omega),\] (Gradient y-component) (95) for a total of four possible feature augmentation operators, where, in our case, \(\mathbf{v}\) is the advection velocity from the incompressible Navier-Stokes equations (83). We employ the representation of the previous differential operators with respect to the polynomial basis of Lagrangian shape functions, so they act on the vectors of nodal evaluations in \(\mathbb{R}^{N_{h}}\). As in [81], we consider three sets of possible augmentations: \[\mathcal{O}_{1} =\{\mathbb{I}_{N_{h}},\Delta,\mathbf{v}\cdot\nabla,\nabla_{x},\nabla_{y}\}, \tag{96}\] \[\mathcal{O}_{2} =\{\mathbb{I}_{N_{h}},\nabla_{x},\nabla_{y}\},\] (97) \[\mathcal{O}_{3} =\{\mathbb{I}_{N_{h}}\}, \tag{98}\] where \(\mathbb{I}_{N_{h}}\) is the identity matrix in \(\mathbb{R}^{N_{h}}\), \(|\mathcal{O}_{1}|=5=n_{\text{aug}}\), \(|\mathcal{O}_{2}|=3=n_{\text{aug}}\) and \(|\mathcal{O}_{3}|=1=n_{\text{aug}}\). We will reconstruct only the scalar concentration \(u\) with the GNN, so, in our case, the field dimension is 1, which is the output dimension. The input dimension depends on the number of high viscosity DD-ROMs employed, which we denote with \(q\). Given a single parametric instance \(\boldsymbol{\rho}\in\mathbb{R}^{P}\), the associated solutions of \(\{\text{DD-ROM}_{\kappa_{i}}\}_{i=1}^{q}\) are \(\{\mathbf{u}_{i}^{\text{RB}}(\boldsymbol{\rho})\}_{i=1}^{q}\). We divide the snapshots \(\{\mathbf{u}^{\text{RB}}(\boldsymbol{\rho}_{i})\}_{i=1}^{n_{\text{train}}+n_{\text{test}}}\) in training \(\{\mathbf{u}^{\text{RB}}(\boldsymbol{\rho}_{i})\}_{i\in I_{\text{train}}}\) and test snapshots \(\{\mathbf{u}^{\text{RB}}(\boldsymbol{\rho}_{i})\}_{i\in I_{\text{test}}}\), with \(|I_{\text{train}}|=n_{\text{train}}\) and \(|I_{\text{test}}|=n_{\text{test}}\). We have decided to encode the reconstruction of the vanishing viscosity solution \(\mathbf{u}_{q+1}\) by learning the difference \(\mathbf{u}_{q+1}(\boldsymbol{\rho})-\mathbf{u}_{q}^{\text{RB}}(\boldsymbol{\rho})-\overline{\mathbf{u}_{q+1}}_{\text{train}}\) with the mesh-supported augmented GNN (MSA-GNN) described in Table 4: \[\mathbf{u}_{q+1}(\boldsymbol{\rho})=\text{ReLU}\left(\mathbf{u}_{q}^{\text{RB}}(\boldsymbol{\rho})+\text{MSA-GNN}\left(\left\{O_{a}\mathbf{u}_{i}^{\text{RB}}(\boldsymbol{\rho})\right\}_{a=1,\,i=1}^{n_{\text{aug}},\,q},\ \left\{O_{a}\left(\mathbf{u}_{i}^{\text{RB}}(\boldsymbol{\rho})-\mathbf{u}_{i-1}^{\text{RB}}(\boldsymbol{\rho})\right)\right\}_{a=1,\,i=2}^{n_{\text{aug}},\,q}\right)+\overline{\mathbf{u}_{q+1}}_{\text{train}}\right), \tag{99}\] where \(\overline{\mathbf{u}_{q+1}}_{\text{train}}=\frac{1}{n_{\text{train}}}\sum_{i=1}^{n_{\text{train}}}\mathbf{u}_{q+1}(\boldsymbol{\rho}_{i})\). Learning the difference instead of the solution itself helps in getting more informative features.
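Assuming the operators of \(\mathcal{O}_{1}\) are available as assembled sparse matrices acting on nodal values (as described above), the stacked GNN input of equation (99) could be computed as in this sketch; `u_rb` and `ops` are illustrative names:

```python
import numpy as np

def augmented_features(u_rb, ops):
    """Stack the MSA-GNN inputs of eq. (99): the q DD-ROM solutions and
    their q-1 successive differences, each filtered by every operator in
    `ops` (identity included). Output shape: (N_h, (2q-1) * n_aug)."""
    fields = list(u_rb) + [u_rb[i] - u_rb[i - 1] for i in range(1, len(u_rb))]
    feats = [op @ f for f in fields for op in ops]
    return np.stack(feats, axis=1)

# For q = 2 fidelities and the full augmentation set O_1 (n_aug = 5),
# this yields 3 * 5 = 15 input features per node, as stated below.
```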
The input dimension is therefore \(3n_{\text{aug}}=15\) for \(\mathcal{O}_{1}\), \(3n_{\text{aug}}=9\) for \(\mathcal{O}_{2}\) and \(3n_{\text{aug}}=3\) for \(\mathcal{O}_{3}\). ### Decomposable ROMs approximating vanishing viscosity (VV) solutions through GNNs In this section, we test the proposed multi-fidelity approach that reconstructs the lowest viscosity level with the GNN. We consider the FS (81), with three levels of viscosity, from highest to lowest: \(\kappa_{1}=0.05\), \(\kappa_{2}=0.01\) and \(\kappa_{3}=0.0005\). We want to build a surrogate model that efficiently predicts the parametric solutions of the FS (81) for unseen values of \(\boldsymbol{\rho}\in\mathbb{P}\) with fixed viscosity \(\kappa_{3}=0.0005\). These solutions will be referred to as _vanishing viscosity_ solutions. The other two viscosity levels are employed to build the DD-ROM\({}_{\kappa_{1}}\) and DD-ROM\({}_{\kappa_{2}}\) with viscosities \(\kappa_{1}=0.05\) and \(\kappa_{2}=0.01\), respectively. The parametrization affects the inflow boundary condition and is the same as the one described in section 5.3.3, see equation (82). We also employ the same numbers of training (20) and test (80) parameters. The DD-ROMs provided for \(\kappa_{1}=0.05\) and \(\kappa_{2}=0.01\) can be efficiently designed with reduced dimensions \(\{r_{\Omega_{i}}\}_{i=1}^{K}=[5,5,5,5]\). To further reduce the cost, we employ an even coarser mesh for DD-ROM\({}_{\kappa_{1}}\) and DD-ROM\({}_{\kappa_{2}}\), and a finer mesh for the vanishing viscosity solutions. The former is represented on the left of Figure 18, the latter on the right. The degrees of freedom related to the coarse mesh are 43776, while those on the fine one are 175104. For the training of the GNN we use the open source software library PyTorch Geometric [32]. The employment of efficient samplers that partition the graphs on which the training set is supported is crucial to lower the otherwise heavy memory burden [40]. We preferred samplers that partition the mesh with METIS [54], as it is often employed in this context. We decided to train the GNN with early stopping at 50 epochs, as our focus is also on reducing the training time of the NN architectures used for model order reduction. This corresponds on average to less than 60 minutes of training time. The batch size is 100 and we clustered the whole domain into 100000 subgraphs in order to fit the batches in our limited GPU memory. Each augmentation strategy and additional fidelity level does not affect the overall training time, as they only increase the dimension of the input features from a minimum of \(1\) (\(1\) fidelity, no augmentation) to a maximum of \(15\) (all augmentations \(\mathcal{O}_{1}\), \(2\) fidelities). As optimizer we use the ADAM [55] stochastic optimizer. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline Net & Weights \([f_{\text{inp}},f_{\text{out}}]\) & Aggregation & Activation \\ \hline \hline Input NNConv & \([3n_{\text{aug}},18]\) & Avg\({}_{1}\) & ReLU \\ \hline SAGEconv & [18, 21] & Avg\({}_{2}\) & ReLU \\ \hline SAGEconv & [21, 24] & Avg\({}_{2}\) & ReLU \\ \hline SAGEconv & [24, 27] & Avg\({}_{2}\) & ReLU \\ \hline SAGEconv & [27, 30] & Avg\({}_{2}\) & ReLU \\ \hline Output NNConv & [30, 1] & Avg\({}_{1}\) & - \\ \hline \hline NNConvFilters & First Layer \([2,l]\) & Activation & Second Layer \([l,f_{\text{inp}}f_{\text{out}}]\) \\ \hline \hline Input NNConv & [2, 12] & ReLU & [12, \(3n_{\text{aug}}\cdot 18\)] \\ \hline Output NNConv & [2, 8] & ReLU & [8, 30] \\ \hline \hline \end{tabular} \end{table} Table 4: Mesh-supported augmented GNN (MSA-GNN).
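A minimal sketch of this training setup with PyTorch Geometric's METIS-based graph clustering follows; `data` is assumed to be a `torch_geometric.data.Data` object holding the augmented features and the target difference, and the learning rate and loss are illustrative assumptions:

```python
import torch
from torch_geometric.loader import ClusterData, ClusterLoader

cluster = ClusterData(data, num_parts=100000)    # METIS partitioning of the mesh graph
loader = ClusterLoader(cluster, batch_size=100, shuffle=True)

model = MSAGNN(n_aug=5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # lr is an assumption

for epoch in range(50):                          # early stopping at 50 epochs
    for batch in loader:
        optimizer.zero_grad()
        pred = model(batch.x, batch.edge_index, batch.edge_attr)
        loss = torch.nn.functional.mse_loss(pred, batch.y)  # loss choice assumed
        loss.backward()
        optimizer.step()
```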
Every architecture is trained on a single GPU NVIDIA Quadro RTX 4000. Figures 19, 20 and 21 show the results of the algorithm for the parameters with index \(i\in\{0,50,99\}\). In particular, we show in the left column the FOM simulations, in the center column the ROM simulations and in the right column the error. Moreover, in the different rows we have different viscosity levels. The first three rows use the classical DD-ROM approach. We can immediately see that the vanishing viscosity level \(\kappa=\kappa_{3}=0.0005\) shows strong numerical oscillations along the whole solution, which are not present in the FOM solution. This phenomenon is observable also for higher viscosity levels, but there it is less pronounced and concentrated on the left of the domain, where the discontinuities are imposed as boundary conditions (see the error plots). Finally, in the last row, we show the results of the GNN approach, which uses the first two viscosity levels to predict the vanishing viscosity one. Contrary to the DD-ROM, we do not observe many numerical oscillations in the reduced solutions, and they are much more physically meaningful. Thinking about extending this approach to more complicated problems, such as the Euler equations, one could aim at guaranteeing the presence of the correct number of shocks in the right locations, or at maintaining the positivity of density and pressure close to discontinuities. In Figure 22, we show a quantitative measure of the error of the presented reduced approaches in terms of the relative \(L^{2}\) error. Overall, we can immediately see that the new GNN approach can always reach errors of the order of \(1\)-\(2\%\) for the vanishing viscosity solutions, with a few peaks of \(8\%\) in the extrapolatory regime, while the classical DD-ROM on the vanishing viscosity solutions performs worse, with errors around \(6\)-\(10\%\). On the other hand, the DD-ROMs for higher viscosity levels have lower errors, around \(3\%\) for \(\kappa_{2}\) and \(0.5\%\) for \(\kappa_{1}\); hence, they still reliably represent those solutions. Regarding the different GNN approaches, in Figure 22 (top) we compare the different augmentations \(\mathcal{O}_{1}\), \(\mathcal{O}_{2}\) and \(\mathcal{O}_{3}\), and how many levels of viscosity we keep into consideration to derive the vanishing viscosity solution. Figure 18: **VV.** Left part of the computational domain partitioned into \(4\) for distributed parallelism: coarse mesh (**left**), fine mesh (**right**). The solutions with viscosity \(\kappa\in\{\mathbf{0.05},\mathbf{0.01}\}\) are evaluated on the coarse mesh with \(4868\) cells and \(43776\) dofs, those with \(\kappa=\mathbf{0.0005}\) on the finer one with \(19456\) cells and \(175104\) dofs. Figure 19: **VV.** Scalar concentration advected by incompressible flow for \(i=0\). Comparison of the ROM approach at different viscosity levels \(\kappa\in\{0.05,0.01,0.0005\}\) and the GNN for \(\kappa=0.0005\). FOMs on the left, reduced solutions at the center and errors on the right. Figure 20: **VV.** Scalar concentration advected by incompressible flow for \(i=50\). Comparison of the ROM approach at different viscosity levels \(\kappa\in\{0.05,0.01,0.0005\}\) and the GNN for \(\kappa=0.0005\). FOMs on the left, reduced solutions at the center and errors on the right. Figure 21: **VV.** Scalar concentration advected by incompressible flow for \(i=99\). Comparison of the ROM approach at different viscosity levels \(\kappa\in\{0.05,0.01,0.0005\}\) and the GNN for \(\kappa=0.0005\). FOMs on the left, reduced solutions at the center and errors on the right.
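For reference, the relative \(L^{2}\) errors reported in Figure 22 can be computed from the vectors of nodal coefficients, with the norm induced by the finite element mass matrix (denoted `M` in this sketch):

```python
import numpy as np

def relative_L2_error(u_red, u_fom, M):
    """||u_red - u_fom||_{L2} / ||u_fom||_{L2}, where the L2 norm of a
    nodal coefficient vector v is sqrt(v^T M v) for the mass matrix M."""
    diff = u_red - u_fom
    return np.sqrt(diff @ (M @ diff)) / np.sqrt(u_fom @ (M @ u_fom))
```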
The usage of multiple fidelity levels (two viscosity levels) is a great improvement for all the proposed augmentations, and it can yield a gain of a factor of \(2\) in terms of accuracy. There are slight differences among the augmentations used; in particular, we observe that the \(\mathcal{O}_{1}\) augmentation, with all operators, guarantees better performance, while there are no appreciable differences between \(\mathcal{O}_{2}\) and \(\mathcal{O}_{3}\). Clearly, one could come up with many other augmentation possibilities by choosing more operators, but at the cost of increasing the dimensions of the GNN and the offline training costs. We believe that all the presented options already perform much better than classical approaches and can be used without further changes. In Table 5, we compare the computational times necessary to compute the FOM solutions and the DD-ROM ones, the training time for the GNN, and the online costs of the GNN. As mentioned before, we employ only one GPU NVIDIA Quadro RTX 4000 with 8GB of memory. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{FOM} & \multicolumn{4}{c|}{DD-ROM} \\ \hline \(\kappa\) & \(N_{h}\) & time & \(r_{i}\) & time & speedup & mean \(L^{2}\) error \\ \hline \hline 0.05 & 43776 & 3.243 [s] & [5, 5, 5, 5] & 59.912 [\(\mu\)s] & 54129 & 0.00595 \\ \hline 0.01 & 43776 & 3.236 [s] & [5, 5, 5, 5] & 79.798 [\(\mu\)s] & 40552 & 0.0235 \\ \hline 0.0005 & 175104 & 9.668 [s] & [5, 5, 5, 5] & 95.844 [\(\mu\)s] & 100872 & 0.0796 \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\kappa\) & GNN training time & Single forward GNN online time & Total online time & GNN speedup & mean \(L^{2}\) error \\ \hline \hline 0.0005 & \(\leq 60\) [min] & \multicolumn{4}{c|}{2.661 [s]} & \multicolumn{4}{c|}{17.166 [s]} & \(\sim 56\) & 0.0217 \\ \hline \end{tabular} \end{table} Table 5: **VV.** Computational costs for the scalar advected by an incompressible flow problem with GNNs approximating vanishing viscosity (VV) solutions. The speedup is computed as the FOM computational time over the ROM one. The speedup of the GNN is with respect to the FOM with viscosity \(\kappa=0.0005\). The FOM runs in parallel with \(K=4\) cores, as the DD-ROMs, so “FOM time” and “DD-ROM time” refer to wall-clock time. Regarding the GNN results, “Single forward GNN online time” refers to a single online evaluation, while “Total online time” refers to the evaluation of the 100 training and test snapshots altogether with only two separate GNN forward evaluations with batches of 50 inputs each. The speedup is evaluated as “FOM time” over “Total online time” divided by 100. Figure 22: **VV.** Relative errors for the scalar concentration advected by incompressible flow problem. The parameters corresponding to the snapshots used for the GNNs and DD-ROMs training correspond to the abscissae \(0,5,10,\ldots,95\); the rest are test parameters. The dashed red background highlights the extrapolation range. **Top:** errors on train and test sets with the different GNN approaches given by the three augmentations \(\mathcal{O}_{1}\), \(\mathcal{O}_{2}\) and \(\mathcal{O}_{3}\) and by using either \(1\) viscosity level (\(1\) fidelity) or \(2\) (all fidelities), and errors for the DD-ROM with the same viscosity level \(\kappa=0.0005\). **Bottom:** errors for the DD-ROM approaches at different viscosity levels. The reduced dimensions of the ROMs are \(\{r_{\Omega_{i}}\}_{i=1}^{K}=[5,5,5,5]\) with \(K=4\) partitions.
Typical GNN applications that involve autoencoders to perform nonlinear dimension reduction are much heavier. The training times of the GNNs for the different choices of augmentation operators vary between approximately 48 and 60 minutes. We believe that in the near future more optimized implementations will reduce the training costs of GNNs. The computational time for a single forward evaluation of the GNN is on average 2.661 seconds, but vectorization enables the evaluation of multiple online solutions altogether: with our limited memory budget we could predict all 100 training and test snapshots with just 2 batches of 50 stacked inputs each. The "Total online time" computed as previously described is 17.166 seconds, that is, 171.66 milliseconds per online solution, with a speedup of around 56 with respect to the 9.668 seconds of the FOM. Although the speedup of the GNN simulations is not as remarkable as that of the DD-ROM, we want to highlight that the GNN solutions are qualitatively much more accurate than the DD-ROM ones for that viscosity level, and physically more meaningful. This aspect is a major advantage with respect to classical linear ROMs, and it is probably worth the loss of computational advantage. In perspective, when dealing with nonlinear and more expensive FOMs for different equations, the GNN approach will not require any extra computational costs, while FOM and ROM models might need special treatments for the nonlinearity that would make their costs increase. ## 7 Conclusions We argue that Friedrichs' systems represent a valuable framework to study and devise reduced order models of many parametric PDEs at the same time: among them, the ones studied in this work and others, like mixed elliptic and hyperbolic problems, complex and time-dependent FS, and also nonlinear PDEs whose linearization results in FS, e.g., the Euler equations. The advantages include the availability of _a posteriori_ error estimators and the ease of preserving the mathematical properties of positivity and symmetry from the full-order formulations to the reduced-order ones. We underlined in section 4.2 how optimally stable reduced-order models can be obtained from the ultraweak formulation. A more efficient numerical solver for Friedrichs' systems is the hybridized discontinuous Galerkin method [17]. These are possible future directions of research. Working with discontinuous Galerkin discretizations is crucial not only because of the possibly mixed elliptic and hyperbolic nature of Friedrichs' systems, but also to design domain decomposable reduced-order models with minimum effort: in fact, penalties at the subdomain interfaces are inherited directly from the full-order models. We demonstrated with numerical experiments the limits and ranges of application of domain decomposable ROMs: generally, with respect to single domain ROMs, there are benefits only when the model under study is truly decomposable, that is, when the parameters independently affect different subdomains and the respective solutions are poorly correlated for unseen parametric instances. The results we showed on our academic benchmarks were obtained with the aim of tackling more complex multi-physics models, like fluid-structure interaction systems. A typical application of DD-ROMs for FS is represented by parametric PDEs with a mixed elliptic and hyperbolic nature, possibly with solution manifolds that are, respectively, more and less linearly approximable.
The repartitioning strategies we developed are suited to adapt the local reduced dimensions of the linear approximants, especially when the parameters influence only a limited region, as in test case ADR 5.3.3. The implementation of _ad hoc_ physics-inspired indicators can be a future direction of research. The Friedrichs' systems formulation itself does not solve the problems caused by a slowly decaying Kolmogorov n-width. DD-ROMs can help in this regard, isolating regions with a slow Kolmogorov n-width decay, for which nonlinear approximants can be employed, and regions with a fast decaying Kolmogorov n-width, for which classical linear projection-based ROMs provide efficient and reliable predictions. Related to this subject, and motivated also by the heavy computational resources that graph neural networks require when employed for model order reduction, we introduced a new paradigm for surrogate modeling: the inference with GNNs of vanishing viscosity solutions from a succession of higher viscosity projection-based ROMs. The approach is, of course, general and can be applied to PDEs that are not FS. The crucial hypothesis underlying this approach is the approximability with linear spaces of the solution manifolds corresponding to the higher viscosity levels. We showed that the additional computational costs are not too large in our test case in section 6. Possible directions of research include more complex problems and different regularization or filtering choices, other than additional viscous terms. ## Appendix A Transformation into a dissipative system In some cases \(\mathcal{A}^{0}=0\) or property (2b) is not satisfied, but there is a way to recover the previous framework: we want to recover a dissipative [61] or accretive system [52]. For example, the linearized Euler equations in entropy variables [79] have \(\mathcal{A}^{0}=0\). The condition of uniform positive definiteness, \[\exists\mu_{0},\quad\mathcal{A}^{0}+(\mathcal{A}^{0})^{t}-\mathcal{X}\geq 2\mu_{0}\mathbb{I}_{m}\text{ a.e. in }\Omega, \tag{100}\] is still valid if there exist \(\boldsymbol{\xi}\in\mathbb{R}^{d},\ \|\boldsymbol{\xi}\|=1\) and \(\beta\in\mathbb{R},\ \beta>0\) such that, after the transformation \[v(x)=e^{-\beta\boldsymbol{\xi}\cdot x}z(x), \tag{101}\] the resulting system \[\sum_{i}A^{i}\partial_{i}v(x)+\beta\sum_{i=1}^{d}\xi_{i}A^{i}v(x)=e^{-\beta(\boldsymbol{\xi}\cdot x)}f, \tag{102}\] satisfies, with the newly found \(A^{0}=\beta\sum_{i=1}^{d}\xi_{i}A^{i}\), \[\exists\mu_{0},\quad A^{0}+(A^{0})^{t}-\mathcal{X}=2\beta\sum_{i=1}^{d}\xi_{i}A^{i}-\mathcal{X}\geq 2\mu_{0}\mathbb{I}_{m}\text{ a.e. in }\Omega. \tag{103}\] In some cases such \(\boldsymbol{\xi}\) and \(\beta\) exist: for example, if the symmetric matrix \(\sum_{i=1}^{d}\xi_{i}A^{i}\) has at least one positive eigenvalue for some \(\boldsymbol{\xi}\) for almost every \(x\in\Omega\), then taking \(\beta\) sufficiently large is enough to satisfy the condition. It is also sufficient that \(\sum_{i=1}^{d}\xi_{i}(x)A^{i}(x)\) has at least a positive eigenvalue for almost every \(x\in\Omega\), where \(\boldsymbol{\xi}=\boldsymbol{\xi}(x)\); see [52, Example 28]. _Remark 2_.: A more general transformation is \[v(x)=w(x)z(x), \tag{104}\] so that the positive definiteness condition becomes \[\exists\mu_{0},\quad\mathcal{A}^{0}+(\mathcal{A}^{0})^{t}-\mathcal{X}=2\sum_{i=1}^{d}\partial_{i}(-\log w)\mathcal{A}^{i}-\mathcal{X}\geq 2\mu_{0}\mathbb{I}_{m}\text{ a.e. in }\Omega. \tag{105}\]
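For clarity, the transformed system (102) follows from a one-line application of the chain rule: writing \(z(x)=e^{\beta\boldsymbol{\xi}\cdot x}v(x)\) in the original system \(\sum_{i=1}^{d}A^{i}\partial_{i}z=f\) (the case \(\mathcal{A}^{0}=0\)) gives \[\sum_{i=1}^{d}A^{i}\partial_{i}\big{(}e^{\beta\boldsymbol{\xi}\cdot x}v\big{)}=e^{\beta\boldsymbol{\xi}\cdot x}\left(\sum_{i=1}^{d}A^{i}\partial_{i}v+\beta\sum_{i=1}^{d}\xi_{i}A^{i}v\right)=f,\] and multiplying by \(e^{-\beta\boldsymbol{\xi}\cdot x}\) yields (102), with the zero-order term \(A^{0}=\beta\sum_{i=1}^{d}\xi_{i}A^{i}\).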
## Appendix B Constructive method to define boundary operators We report a procedure to define a boundary operator \(M\in\mathcal{L}(V,V^{\prime})\) starting from some specified boundary conditions. We exploit Theorem 4.3, Lemma 4.4 and Corollary 4.1 from [31]. It can be seen that the most common Dirichlet, Neumann and Robin boundary conditions can be recovered for some FS [28, 29, 24] following this procedure. **Lemma 1** (Theorem 4.3, Lemma 4.4 and Corollary 4.1 from [31]).: _Let us assume that \((V_{0},V_{0}^{*})\) satisfy (10) and that \(V_{0}+V_{0}^{*}\subset V\) is closed. We denote with \(P:V\to V_{0}\) and \(Q:V\to V_{0}^{*}\) the projectors onto the subspaces \(V_{0}\subset V\) and \(V_{0}^{*}\subset V\) of the Hilbert space \(V\), respectively. Then, the boundary operator \(\mathcal{M}\in\mathcal{L}(V,V^{\prime})\) defined as_ \[\begin{split}\langle\mathcal{M}u,v\rangle_{V^{\prime},V}&=\langle\mathcal{D}Pu,Pv\rangle_{V^{\prime},V}-\langle\mathcal{D}Qu,Qv\rangle_{V^{\prime},V}+\\ &\langle\mathcal{D}(P+Q-PQ)u,v\rangle_{V^{\prime},V}-\langle\mathcal{D}u,(P+Q-PQ)v\rangle_{V^{\prime},V}\end{split} \tag{106}\] _is admissible and satisfies \(V_{0}=\ker(\mathcal{D}-\mathcal{M})\) and \(V_{0}^{*}=\ker(\mathcal{D}+\mathcal{M}^{*})\). In particular,_ 1. _If_ \(V=V_{0}+V_{0}^{*}\)_, then_ \(\mathcal{M}\) _is self-adjoint and_ \[\langle\mathcal{M}u,v\rangle_{V^{\prime},V}=\langle\mathcal{D}Pu,Pv\rangle_{V^{\prime},V}-\langle\mathcal{D}Qu,Qv\rangle_{V^{\prime},V}\,.\] (107) 2. _If_ \(V_{0}=V_{0}^{*}\)_, then_ \(\mathcal{M}\) _is skew-symmetric and_ \[\langle\mathcal{M}u,v\rangle_{V^{\prime},V}=\langle\mathcal{D}Pu,v\rangle_{V^{\prime},V}-\langle\mathcal{D}Pv,u\rangle_{V^{\prime},V}\,.\] (108) We remark that, for fixed \((V_{0},V_{0}^{*})\), admissible boundary operators \(\mathcal{M}\in\mathcal{L}(V,V^{\prime})\) that satisfy \(V_{0}=\ker(\mathcal{D}-\mathcal{M})\) and \(V_{0}^{*}=\ker(\mathcal{D}+\mathcal{M}^{*})\) are not unique: the boundary operator defined in Lemma 1 is just one possible explicit definition, in general. As an exercise, we show how to find the definition of the operator \(\mathcal{M}\) for our linear compressible elasticity FS from Section 2.1.2. We want to impose the boundary conditions \(\mathbf{u}_{|\Gamma_{D}}=0\) and \((\boldsymbol{\sigma}\cdot\mathbf{n})_{|\Gamma_{N}}=0\), so, \[V_{0}=V_{0}^{*}=\{(\mathbf{u},\boldsymbol{\sigma})\in V\mid\mathbf{u}_{|\Gamma_{D}}=0,\quad(\boldsymbol{\sigma}\cdot\mathbf{n})_{|\Gamma_{N}}=0\}=H_{\boldsymbol{\sigma},\Gamma_{N}}\times[H_{\Gamma_{D}}^{1}(\Omega)]^{d}, \tag{109}\] since we defined \(V=H_{\boldsymbol{\sigma}}\times[H^{1}(\Omega)]^{d}\), with \(H_{\boldsymbol{\sigma}}=\{\boldsymbol{\sigma}\in[L^{2}(\Omega)]^{d\times d}\mid\nabla\cdot(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{t})\in[L^{2}(\Omega)]^{d}\}\), the traces \(\gamma_{D}:[H^{1}(\Omega)]^{d}\to[H^{\frac{1}{2}}(\Gamma_{D})]^{d}\) and \(\gamma_{N}:H_{\boldsymbol{\sigma}}\to[H^{-\frac{1}{2}}(\Gamma_{N})]^{d}\) on \(\Gamma_{D}\) and \(\Gamma_{N}\) are well-defined. In particular, \[H_{\boldsymbol{\sigma},\Gamma_{N}}=\{\boldsymbol{\sigma}\in H_{\boldsymbol{\sigma}}\mid\gamma_{N}(\boldsymbol{\sigma})=(\boldsymbol{\sigma}\cdot\mathbf{n})_{|\Gamma_{N}}=0\},\qquad[H_{\Gamma_{D}}^{1}(\Omega)]^{d}=\{\mathbf{u}\in[H^{1}(\Omega)]^{d}\mid\gamma_{D}(\mathbf{u})=\mathbf{u}_{|\Gamma_{D}}=0\}. \tag{110}\] Moreover, \((V_{0},V_{0}^{*})\) satisfy the properties of the cone formalism (10).
Thus, we can use the definition (108) of Lemma 1: \[\begin{split}\langle\mathcal{M}(\boldsymbol{\sigma},\mathbf{u}),(\boldsymbol{\tau},\mathbf{v})\rangle_{V^{\prime},V}=&\langle\mathcal{D}P(\boldsymbol{\sigma},\mathbf{u}),(\boldsymbol{\tau},\mathbf{v})\rangle_{V^{\prime},V}-\langle\mathcal{D}P(\boldsymbol{\tau},\mathbf{v}),(\boldsymbol{\sigma},\mathbf{u})\rangle_{V^{\prime},V}\\ =&-\langle\frac{1}{2}(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{t})\cdot\mathbf{n},\mathbf{v}\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{D}}+\langle\frac{1}{2}(\boldsymbol{\tau}+\boldsymbol{\tau}^{t})\cdot\mathbf{n},\mathbf{u}\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{D}}+\\ &\langle\frac{1}{2}(\boldsymbol{\sigma}+\boldsymbol{\sigma}^{t})\cdot\mathbf{n},\mathbf{v}\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{N}}-\langle\frac{1}{2}(\boldsymbol{\tau}+\boldsymbol{\tau}^{t})\cdot\mathbf{n},\mathbf{u}\rangle_{-\frac{1}{2},\frac{1}{2},\Gamma_{N}},\end{split} \tag{111}\] where \(P:V=H_{\boldsymbol{\sigma}}\times[H^{1}(\Omega)]^{d}\to V_{0}=H_{\boldsymbol{\sigma},\Gamma_{N}}\times[H_{\Gamma_{D}}^{1}(\Omega)]^{d}\) is the projector onto the subspace \(V_{0}\) of the Hilbert graph space \(V\) with scalar product: \[((\boldsymbol{\sigma},\mathbf{u}),(\boldsymbol{\tau},\mathbf{v}))_{V}=(\mathbf{u},\mathbf{v})_{[L^{2}(\Omega)]^{d}}+(\boldsymbol{\sigma},\boldsymbol{\tau})_{[L^{2}(\Omega)]^{d\times d}}+(A(\boldsymbol{\sigma},\mathbf{u}),A(\boldsymbol{\tau},\mathbf{v})). \tag{112}\] ## Appendix C ROM convergence studies In this section, we validate the DD-ROM implementation, checking the convergence towards the FOM solutions with respect to the dimension of the reduced space. Uniform local reduced dimensions \(\{r_{\Omega_{i}}\}_{i=1}^{K}\) and \(\{r_{\Omega_{i}}\}_{i=1}^{k}\) are employed. For each convergence study, 20 independent uniformly sampled parameters are used as training dataset and 50 as test dataset. Figure 23: **MS1.** The convergence of DD-ROMs with uniform local reduced dimensions \(\{r_{\Omega_{i}}\}_{i=1}^{K}\) and \(\{r_{\Omega_{i}}\}_{i=1}^{k}\) is assessed. The uniform value of the local reduced dimensions is reported on the abscissae. For this test case, an improvement of the accuracy with respect to the single domain reduced basis is not observed. In Figure 23, we show the \(L^{2}\)-error, the \(R\)-error and the energy error decay, and their respective error estimators, for the Maxwell equations test case **MS1** (section 5.3.1), with constant parameters \(\mu\) and \(\sigma\) on the whole domain. We clearly see an exponential decay of the error as we add basis functions. On the other hand, we do not observe strong differences between the ROM, the DD-ROM with repartitioning and the DD-ROM with deal.II subdomains for this simple test case. Similar results can be observed in Figure 24, where the same analysis is applied to the compressible linear elasticity test **CLE1** from section 5.3.2. From these results, it should be clear that the employment of local reduced bases is not always useful to increase the accuracy of the predictions. Nonetheless, it may be used to locally reduce the dimension of the linear approximants. Possible benefits include the adaptation of the computational resources (higher dimensional reduced bases are chosen only where necessary) and the possibility to speedup parametric studies and non-intrusive surrogate modelling thanks to the further reduced local dimensions [86, 87].
Typical cases where DD-ROMs are effective in increasing the accuracy of the predictions are truly decomposable systems, where the parameters independently affect different regions of the computational domain, as in test case **MS2** in section 5.3.1. ## Acknowledgements This work was partially funded by European Union Funding for Research and Innovation -- Horizon 2020 Program -- in the framework of the European Research Council Executive Agency: H2020 ERC CoG 2015 AROMA-CFD project 681447 "Advanced Reduced Order Methods with Applications in Computational Fluid Dynamics", P.I. Professor Gianluigi Rozza. We also acknowledge the PRIN 2017 "Numerical Analysis for Full and Reduced Order Methods for the efficient and accurate solution of complex systems governed by Partial Differential Equations" (NA-FROM-PDEs). Davide Torlo has been funded by a SISSA Mathematical fellowship within the Italian Excellence Departments initiative of the Ministry of University and Research.
2304.14328
Extraction of the Sivers function with deep neural networks
Deep Neural Networks (DNNs) are a powerful and flexible tool for information extraction and modeling. In this study, we use DNNs to extract the Sivers functions by globally fitting Semi- Inclusive Deep Inelastic Scattering (SIDIS) and Drell-Yan (DY) data. To make predictions of this Transverse Momentum-dependent Distribution (TMD), we construct a minimally biased model using data from COMPASS and HERMES. The resulting Sivers function model, constructed using SIDIS data, is also used to make predictions for DY kinematics specific to the valence and sea quarks, with careful consideration given to experimental errors, data sparsity, and complexity of phase space.
I. P. Fernando, D. Keller
2023-04-27T17:00:15Z
http://arxiv.org/abs/2304.14328v2
# A Modern Global Extraction of the Sivers Function ###### Abstract Deep Neural Networks (DNNs) are a powerful and flexible tool for information extraction and modeling. In this study, we use DNNs to extract the Sivers functions by globally fitting Semi-Inclusive Deep Inelastic Scattering (SIDIS) and Drell-Yan (DY) data. To make predictions of this Transverse Momentum-dependent Distribution (TMD), we construct a minimally biased model using data from COMPASS and HERMES. The resulting Sivers function model, constructed using SIDIS data, is also used to make predictions for DY kinematics specific to the valence and sea quarks, with careful consideration given to experimental errors, data sparsity, and the complexity of phase space. ###### Contents * I Introduction * II Kinematics and Formalism * II.1 SIDIS process * II.2 DY process * III Fitting \(\mathcal{N}_{q}(x)\) * IV The Extraction Technique * IV.1 Data selection * IV.2 MINUIT fits for \(SU(3)_{\text{flavor}}\) * IV.3 DNN Method Testing * IV.4 DNN model from real data * V Results * V.1 DNN fit to SIDIS data * V.2 Sivers in Momentum Space * V.3 Sivers First Transverse Moment * V.4 Projections * V.4.1 SIDIS Projections * V.4.2 DY Projections * V.4.3 The 3D Tomography of the Proton * VI Conclusion and Discussion ## I Introduction The development of modern information extraction techniques has far outpaced experimental progress. Even with decades of experimental results, there is limited data on Transverse Momentum Dependent Parton Distribution Functions (TMDs) for global analysis that can be applied to a 3D phenomenological interpretation. Theoretical efforts are providing an ever-evolving framework and toolbox to interpret the data, leading to new areas of research that optimize information extraction unique to spin physics and the internal structure of hadrons. Artificial intelligence (AI) is accelerating data-driven research with the superior capacity of deep neural networks (DNNs) to be used as function approximators. DNNs can learn to approximate relationships contained within data, provided there are no limits on the number of neurons and layers or on computing power. With enough data, such approximations can be made with high accuracy. The Sivers function is the most studied of the eight leading-twist TMD distributions that pertain to polarized nucleons. The Sivers distribution is naively time-reversal odd and is expected to be process-dependent, which leads to a distribution equal in magnitude but opposite in sign in Semi-Inclusive Deep Inelastic Scattering (SIDIS) compared to the Drell-Yan (DY) process [1; 2]. The Sivers distribution, as measured from transverse single-spin asymmetries, provides information on the correlation between the nucleon's spin and the angular distribution of outgoing hadrons in SIDIS or of the di-muons in DY. The quark Sivers function, \(\Delta^{N}f_{q/p^{\uparrow}}\), describes the number density in momentum space of unpolarized quarks inside a transversely polarized target nucleon. A non-zero Sivers function indicates a contribution of quark orbital angular momentum to the target's spin. TMD extraction and modeling with sensitivity to TMD evolution are critical, as they provide predictive power in the collinear limit. Representing evolution also accounts for the momentum-dependent QCD interactions between partons inside the hadron, which affect their distribution and can significantly impact observables.
Some theoretical approaches use the parton model approximation without TMD evolution and demonstrate good agreement with experimental results. These calculations assume that evolution effects in asymmetries are suppressed, as asymmetries are ratios of cross-sections where evolution and higher-order effects should cancel out [3; 4; 5]. This approach may not capture the full complexity of QCD evolution for TMD distributions. On the other hand, studies that incorporated TMD evolution, as seen in [6; 7], encountered difficulties and did not achieve better agreement with the Drell-Yan data compared to earlier analyses [4; 5; 8]. This situation poses a challenge in establishing the status of the TMD factorization theorem, since its principal components include scale dependence and the nonperturbative and universal Collins-Soper (CS) kernel. However, the work in [9; 10] demonstrated the conditional universality of the Sivers function using a simultaneous fit to SIDIS, Drell-Yan, and W/Z boson production data, including TMD evolution and the universal nonperturbative CS kernel extracted in SV19 [11] from unpolarized measurements. In contrast, the working principle of DNNs as a means of extraction can facilitate the necessary flexibility to capture not only the TMD and DGLAP evolution features but also the full complexity of QCD. The challenge, of course, is that once these features are captured in the model, disentangling their components is nontrivial, and we expect this to be the focus of much future work. The phenomenology used to interpret and analyze experimental data relies on TMD factorization [12; 13; 14; 15; 16; 17; 18; 19; 20; 21], proven for single-spin asymmetries (SSAs) described in terms of convolutions of TMDs. The Sivers function has been previously extracted from SIDIS data by several groups, with generally consistent results [4; 22; 23; 24; 25; 26]. However, all previous phenomenological fits of the Sivers function (and other TMDs) require a Gaussian ansatz characterizing the shape of the distribution combined with an assumed form of the Bjorken-\(x\)-dependent \(\mathcal{N}_{q}(x)\). This leads to ambiguity in determining both the quantitative results from the fit and the qualitative features of the momentum distributions and their associated dynamics. The functional form of \(\mathcal{N}_{q}(x)\) is usually offered only as a placeholder and is assumed to at least contain the appropriate ingredients to facilitate the extraction. This is undoubtedly a considerable simplification, but one that has permitted significant progress. In the following analysis, we perform a global fit with the goal of testing the extraction ability of DNNs to maximize information extraction and minimize both the fit error and the analytical ambiguity associated with the interpretation of \(\mathcal{N}_{q}(x)\). The suitability of DNNs for function approximation is rigorously supported by the Universal Approximation Theorem [27; 28]. This is the advantage of DNNs over other machine learning (ML) approaches. In this regard, even the mere existence of a function implies that DNNs can be used to represent it and work with it without actually knowing its functional form. With such a high level of abstraction, one can make use of the available data and make assessments not otherwise possible, even given an arbitrary degree of complexity. The complexity can be contained in the data relationships as well as in the experimental uncertainties.
In order to make optimal use of experimental uncertainties, a detailed analysis must be provided of their estimated scale and correlations. DNNs are also Turing-complete, implying the potential to simulate the computational aspects of any real-world general-purpose computing process. The implications are that there is potential for a type of generalizable framework that can be utilized and further developed over time without knowledge of the exact rigorous details of the underlying mechanisms. Provided appropriately detailed global models, higher-level symbolic regressions can then be performed to infer the strict mathematical form. However, such an approach requires access to a significant amount of experimental data that holds the necessary information. Without the necessary amount of quality data, no matter the number of nodes or sophistication of architecture, DNNs are limited in what useful information they can extract, as is any other technique. Even with this constraint, DNNs can make considerable advancements with the use of sparse data with large experimental uncertainties. Generally, fitting data with large errors, or performing computational tasks with inherently fuzzy logic, are problems for which it is difficult to make optimal use of modern computational resources. DNNs are uniquely suited to such challenges. In the remainder of this article, a first-level DNN extraction of \(\mathcal{N}_{q}(x)\) is performed to deduce the Sivers function from a global analysis using HERMES and COMPASS data. This investigation is exploratory, with the intention of developing tools and techniques that minimize error and maximize utility, which we hope to expand upon in further work. In the next section, Section II, we present the formalism of the Sivers function and the kinematics for both SIDIS and DY. In Section III, a discussion of the fitting techniques of \(\mathcal{N}_{q}(x)\) is presented with a focus on the methodology of the DNN approach. Section IV explains in detail the extraction technique, starting with model testing. We perform a baseline fit using the classical MINUIT \(\chi^{2}\) minimization algorithm and then perform the DNN fit, demonstrating with pseudo-data the fidelity of the procedure. We then walk through the final DNN fit to experimental data for the polarized proton and the deuteron separately. The results of the fits are presented in Section V, and finally, in Section VI, some concluding remarks are provided. ## II Kinematics and Formalism With the spin of the proton perpendicular to the transverse plane, the Sivers function is expected to reflect an anisotropy of quark momentum distributions for the up and down quarks, indicating that their motion is in opposite directions [1; 2]. This is manifestly due to quark orbital angular momentum (OAM). The most interesting and relevant aspects of the OAM, such as the magnitude and the partonic distribution shape as a function of the proton's state, cannot be determined by the Sivers effect alone. However, systematic studies can be performed to investigate the full 3D momentum distribution of the quarks in a transversely polarized proton, which can be used in concert with other information to exploit multi-dimensional partonic degrees of freedom using a variety of hard processes.
Here, we focus specifically on SIDIS and DY, but it should be noted that there is significant potential for broader model development that can come from combining all available data from multiple processes with additional constraints using the simultaneous DNN fitting approach presented here. The Sivers function describes a difference in probabilities, which implies that it may not be positive definite. Making a comparison between the Sivers function from the DY process and that from the SIDIS process is still the focus of much experimental and theoretical effort. Under time reversal, the future-pointing Wilson lines are replaced by past-pointing Wilson lines that are appropriate for factorization in the DY process. This implies that the Sivers function is not uniquely defined and cannot exhibit process universality, as it depends on the contour of the Wilson line. This feature of the Sivers function is directly tied to the QCD interactions between the quarks (or gluons) active in the process, resulting in a conditional universality, as shown in [29], \[\Delta^{N}f_{q/p^{\uparrow}}\left(x,k_{\perp}\right)\bigr{|}_{\text{SIDIS}}=-\left.\Delta^{N}f_{q/p^{\uparrow}}\left(x,k_{\perp}\right)\right|_{\text{DY}}. \tag{1}\] This fundamental prediction still needs to be tested. Direct sign tests [30; 4; 8] can be performed, but experimental proof would require an analysis over a broad phase space of both SIDIS and DY, with consideration given to flavor and kinematic sensitivity for both valence and sea quarks. Our analysis will, in part, rely on this relationship rather than making direct tests of the validity of the sign change. ### SIDIS process The Semi-Inclusive Deep Inelastic Scattering (SIDIS) process involves scattering a lepton off a polarized nucleon and measuring the scattered lepton and a fragmented hadron. In the nucleon-photon center-of-mass frame, the nucleon three-momentum \(\vec{p}\) is along the \(z\)-axis and its spin-polarization \(\vec{S}_{T}\) lies in the plane perpendicular (transverse) to the \(\hat{z}\)-axis. In Fig. 1, the struck quark, the virtual photon (with four-momentum \(\vec{q}\)), and the lepton belong to a plane called the "lepton plane" (represented in yellow). The fragmented hadron with momentum \(\vec{p}_{h}\) and its projection onto the \(\hat{x}\)-\(\hat{y}\) plane (i.e. \(\vec{p}_{hT}\)) belong to the so-called "hadron plane" (represented in transparent). Thus, the transverse momentum \(\vec{k}_{\perp}\) of the struck quark and \(\vec{p}_{hT}\) lie in the transverse plane (represented in transparent blue), perpendicular to both the lepton plane and the hadron plane. The azimuthal angle \(\phi_{h}\) of the produced hadron \(h\) is the angle between the lepton plane and the hadron plane [31]. The differential cross-section for the SIDIS process depends on both collinear parton distribution functions (PDFs) \(f_{q/p}(x;Q^{2})\) and fragmentation functions \(D_{h/q}(z;Q^{2})\), where \(q\) is the quark flavor, \(p\) represents the target proton, \(h\) is the hadron type produced by the process, and \(z\) is the momentum fraction of the final-state hadron with respect to the virtual photon.
A simplified version of the SIDIS differential cross-section can be written up to \(\mathcal{O}(k_{\perp}/Q)\) as [32; 25], \[\frac{d^{5}\sigma^{lp\to lhX}}{dxdQ^{2}dzd^{2}p_{\perp}}=\sum_{q}e_{q}^{2}\int d^{2}\mathbf{k}_{\perp}\ \left(\frac{2\pi\alpha^{2}}{x^{2}s^{2}}\frac{\hat{s}^{2}+\hat{u}^{2}}{Q^{4}}\right)\times\hat{f}_{q/p^{\uparrow}}(x,k_{\perp})D_{h/q}(z,p_{\perp})+\mathcal{O}(k_{\perp}/Q)\, \tag{2}\] where \(\hat{s},\hat{u}\) are partonic Mandelstam invariants, and \(\hat{f}_{q/p^{\uparrow}}(x,k_{\perp})\) is the unpolarized quark distribution, \[\hat{f}_{q/p^{\uparrow}}(x,k_{\perp}) =f_{q/p}(x,k_{\perp})+\frac{1}{2}\Delta^{N}f_{q/p^{\uparrow}}(x,k_{\perp})\vec{S}_{T}\cdot(\hat{p}\times\hat{k}_{\perp})=f_{q/p}(x,k_{\perp})-\frac{k_{\perp}}{m_{p}}f_{1T}^{\perp q}(x,k_{\perp})\vec{S}_{T}\cdot(\hat{p}\times\hat{k}_{\perp}) \tag{3}\] with transverse momentum \(k_{\perp}\) inside a transversely polarized (with spin \(\vec{S}_{T}\)) proton with three-momentum \(\vec{p}\), where \(\Delta^{N}f_{q/p^{\uparrow}}(x,k_{\perp})\) denotes the Sivers function that carries the nucleon's spin-polarization effects on the quarks, which can be considered as a modulation of the unpolarized quark PDFs [4], \[\Delta^{N}f_{q/p^{\uparrow}}(x,k_{\perp})=2\mathcal{N}_{q}(x)h(k_{\perp})f_{q/p}(x,k_{\perp}) \tag{4}\] where, \[f_{q/p}(x,k_{\perp}) =f_{q}(x)\frac{1}{\pi\langle k_{\perp}^{2}\rangle}e^{-k_{\perp}^{2}/\langle k_{\perp}^{2}\rangle}, \tag{5}\] \[h(k_{\perp}) =\sqrt{2e}\frac{k_{\perp}}{m_{1}}e^{-k_{\perp}^{2}/m_{1}^{2}}. \tag{6}\] Here \(\mathcal{N}_{q}(x)\) is considered as a factorized \(x\)-dependent function with a form that has yet to be formally established, and \(m_{1}\) is a parameter that allows the \(k_{\perp}\) Gaussian dependence of the Sivers function to be different from that of the unpolarized TMDs [4]. Figure 1: Kinematics of the SIDIS process in the nucleon-photon center-of-mass frame. \(f_{q}(x;Q^{2})\) is the collinear PDF for flavor \(q\), obtained from the CTEQ6l [33] grid through LHAPDF [34], whereas the fragmentation functions for \(\pi^{\pm,0}\) are from [35], and those for \(K^{\pm}\) are from [36] (DSS _formalism_), from recent global analyses of fragmentation functions at next-to-leading-order (NLO) accuracy in QCD.
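As a concrete illustration, the ansatz of Eqs. (4)-(6) can be evaluated numerically; the sketch below uses the LHAPDF Python bindings for the collinear PDF, while the set name and the value of \(m_{1}\) are placeholders rather than fitted values:

```python
import numpy as np
import lhapdf  # LHAPDF Python bindings

pdf = lhapdf.mkPDF("cteq6l1", 0)  # LHAPDF set name assumed for the CTEQ6l grid

def sivers(Nq_x, x, kT, Q2, pid, m1=0.8, kT2=0.57):
    """Delta^N f_{q/p^up}(x, k_perp) = 2 N_q(x) h(k_perp) f_{q/p}(x, k_perp),
    cf. Eqs. (4)-(6); m1 and <k_perp^2> values are placeholders."""
    f_col = pdf.xfxQ2(pid, x, Q2) / x                           # collinear f_q(x; Q^2)
    f_unp = f_col * np.exp(-kT**2 / kT2) / (np.pi * kT2)        # Eq. (5)
    h = np.sqrt(2 * np.e) * (kT / m1) * np.exp(-kT**2 / m1**2)  # Eq. (6)
    return 2.0 * Nq_x * h * f_unp
```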
In terms of cross-section ratios, the Sivers asymmetry in the SIDIS process can be written as, \[A_{UT}^{\sin(\phi_{h}-\phi_{S})}(x,y,z,p_{hT})=\frac{d\sigma^{lp^{\uparrow}\to lhX}-d\sigma^{lp^{\downarrow}\to lhX}}{d\sigma^{lp^{\uparrow}\to lhX}+d\sigma^{lp^{\downarrow}\to lhX}}, \tag{7}\] and can be parameterized [4] and further re-arranged as, \[A_{UT}^{\sin(\phi_{h}-\phi_{S})}(x,z,p_{hT})=\mathcal{A}_{0}(z,p_{hT},m_{1})\left(\frac{\sum_{q}\mathcal{N}_{q}(x)e_{q}^{2}f_{q}(x)D_{h/q}(z)}{\sum_{q}e_{q}^{2}f_{q}(x)D_{h/q}(z)}\right), \tag{8}\] where, \[\mathcal{A}_{0}(z,p_{hT},m_{1})=\frac{\sqrt{2e}zp_{hT}}{m_{1}}\frac{[z^{2}\langle k_{\perp}^{2}\rangle+\langle p_{\perp}^{2}\rangle]\langle k_{S}^{2}\rangle^{2}}{[z^{2}\langle k_{S}^{2}\rangle+\langle p_{\perp}^{2}\rangle]^{2}\langle k_{\perp}^{2}\rangle}\times\exp\left[-\frac{p_{hT}^{2}z^{2}\left(\langle k_{S}^{2}\rangle-\langle k_{\perp}^{2}\rangle\right)}{(z^{2}\langle k_{S}^{2}\rangle+\langle p_{\perp}^{2}\rangle)\left(z^{2}\langle k_{\perp}^{2}\rangle+\langle p_{\perp}^{2}\rangle\right)}\right], \tag{9}\] \[\langle k_{S}^{2}\rangle=\frac{m_{1}^{2}\langle k_{\perp}^{2}\rangle}{m_{1}^{2}+\langle k_{\perp}^{2}\rangle}, \tag{10}\] and fragmentation functions \(D_{h/q}(z,p_{\perp})\) (before \(p_{\perp}\)-integration), \[D_{h/q}(z,p_{\perp})=D_{h/q}(z)\frac{1}{\pi\langle p_{\perp}^{2}\rangle}e^{-p_{\perp}^{2}/\langle p_{\perp}^{2}\rangle}, \tag{11}\] with \(\langle k_{\perp}^{2}\rangle=0.57\pm 0.08\) GeV\({}^{2}\) and \(\langle p_{\perp}^{2}\rangle=0.12\pm 0.01\) GeV\({}^{2}\) from the fits [37; 38] to HERMES multiplicities [39]. Note that we use a shorthand notation for the PDFs, FFs, and TMDs by omitting \(Q^{2}\) in the expressions for the sake of convenience, as is done in the literature. Through this azimuthal asymmetry, the SIDIS process provides information about the correlations between the transverse momentum of the partons, observed through the fragmented hadrons, and the spin of the target itself. In this regard, SIDIS allows one to study the structure of individual hadrons by selecting these decay fragments at the detection level. In general, SIDIS provides access to a wide range of TMDs, and allows for studying TMDs of hadrons carrying different flavors and polarizations. For our present analysis, HERMES and COMPASS have the best polarized-proton target data for SIDIS, while COMPASS has the best polarized-neutron target data. In the COMPASS data, the neutron target is actually a polarized deuteron, but the neutron carries over 90% of the deuteron polarization when polarized in solid-state form. The JLab data on polarized \({}^{3}\)He is of a different class of experiments and will not be combined with the polarized-deuteron data from COMPASS. It is worth noting that the uncertainties in the experimental data can greatly differ depending on the choice of polarized target. ### DY process Consider the Drell-Yan process \(A^{\uparrow}B\to l^{+}l^{-}X\), where \(A^{\uparrow}\) is a transversely polarized target and \(B\) is the hadron beam. In the hadronic c.m. frame, the 4-momentum \(q\) and the invariant mass squared (\(Q_{M}\)) of the final-state di-lepton pair, the Feynman-\(x\) (\(x_{F}\)) and the Mandelstam variable \(s\) are related as, \[q=(q_{0},q_{T},q_{L})\,,\qquad q^{2}=Q_{M},\qquad x_{F}=\frac{2q_{L}}{\sqrt{s}},\] \[s=(p_{A}+p_{B})^{2}\,.
\tag{12}\] In the kinematical region \[q_{T}^{2}\ll Q_{M},\qquad k_{\perp}\simeq q_{T}, \tag{13}\] at order \(\mathcal{O}(k_{\perp}/Q_{M})\), and in the hadronic c.m. frame, the Sivers single-spin asymmetry can be given as [3; 40], Figure 2: Kinematics of the DY process in the hadronic center-of-mass frame. \[A_{N}^{\sin(\phi_{\gamma}-\phi_{S})}(x_{F},Q_{M},q_{T})=\frac{\int_{0}^{2\pi}d\phi_{\gamma}\left(d\sigma^{A^{\uparrow}B\to l^{+}l^{-}X}-d\sigma^{A^{\downarrow}B\to l^{+}l^{-}X}\right)\sin(\phi_{\gamma}-\phi_{S})}{\frac{1}{2}\int_{0}^{2\pi}d\phi_{\gamma}\left(d\sigma^{A^{\uparrow}B\to l^{+}l^{-}X}+d\sigma^{A^{\downarrow}B\to l^{+}l^{-}X}\right)}, \tag{14}\] \[=\frac{\int_{0}^{2\pi}d\phi_{\gamma}\left(\sum_{q}\int d^{2}k_{\perp 2}d^{2}k_{\perp 1}\,\delta^{2}(k_{\perp 1}+k_{\perp 2}-q_{T})\,\Delta^{N}f_{q/A^{\uparrow}}(x_{1},k_{\perp 1})f_{\bar{q}/B}(x_{2},k_{\perp 2})\,\hat{\sigma}_{0}^{q\bar{q}}\right)\sin(\phi_{\gamma}-\phi_{S})}{\int_{0}^{2\pi}d\phi_{\gamma}\left(\sum_{q}\int d^{2}k_{\perp 2}d^{2}k_{\perp 1}\,\delta^{2}(k_{\perp 1}+k_{\perp 2}-q_{T})\,f_{q/A}(x_{1},k_{\perp 1})f_{\bar{q}/B}(x_{2},k_{\perp 2})\,\hat{\sigma}_{0}^{q\bar{q}}\right)} \tag{15}\] where, \[\hat{\sigma}_{0}^{q\bar{q}} =e_{q}^{2}\frac{4\pi\alpha^{2}}{9Q_{M}}, \tag{16}\] \[x_{1,2} =\frac{\pm x_{F}+\sqrt{x_{F}^{2}+4Q_{M}/s}}{2}. \tag{17}\] Note that here we follow the same convention as in [41, 42, 3, 24] for the azimuthal angle in the \(A\)-\(B\) center-of-mass frame, with the hadron \(A^{\uparrow}\) moving along the positive \(z\)-axis and the hadron \(B\) along the negative \(z\)-axis. Thus the mixed product \(\vec{S}_{T}\cdot(\hat{p}\times\hat{k}_{\perp i})\) (where \(i=\{1,2\}\)), upon integration in \(k_{\perp}\), yields a \(\sin(\phi_{\gamma}-\phi_{S})=\cos\phi_{\gamma}\) (when \(\phi_{S}=\pi/2\)) dependence for the Sivers asymmetry, which implies an overall \([-\sin^{2}(\phi_{\gamma}-\phi_{S})]\) in Eq. (15). For the case in which the polarized hadron \(A^{\uparrow}\) moves along the \(-\hat{z}\) axis (i.e. for the process \(BA^{\uparrow}\to l^{+}l^{-}X\)), the corresponding overall factor is \([+\sin^{2}(\phi_{\gamma}-\phi_{S})]\). The analytical integration of the numerator and denominator of Eq. (15) can be written as,
\[A_{N}^{\sin(\phi_{\gamma}-\phi_{S})}(x_{F},Q_{M},q_{T})=\frac{\int d\phi_{\gamma}\:\mathcal{C}(x_{F},Q_{M},q_{T},\phi_{\gamma})\sin(\phi_{\gamma}-\phi_{S})}{\int d\phi_{\gamma}\:\mathcal{D}(x_{F},Q_{M},q_{T})}, \tag{18}\] where, \[\mathcal{C}(x_{F},Q_{M},q_{T},\phi_{\gamma})\equiv\frac{d^{4}\sigma^{\uparrow}}{dx_{F}dQ_{M}d^{2}q_{T}}-\frac{d^{4}\sigma^{\downarrow}}{dx_{F}dQ_{M}d^{2}q_{T}} \tag{19}\] \[=\frac{4\pi\alpha^{2}}{9sQ_{M}}\left(\frac{q_{T}}{m_{1}}\sqrt{2e}\frac{\langle k_{S}^{2}\rangle^{2}\exp[-q_{T}^{2}/\left(\langle k_{S}^{2}\rangle+\langle k_{\perp 2}^{2}\rangle\right)]}{\pi\left(\langle k_{S}^{2}\rangle+\langle k_{\perp 2}^{2}\rangle\right)^{2}\langle k_{\perp 2}^{2}\rangle}\right)\times\sin(\phi_{\gamma}-\phi_{S})\sum_{q}\frac{e_{q}^{2}}{x_{1}+x_{2}}\Delta^{N}f_{q/A^{\uparrow}}(x_{1})f_{\bar{q}/B}(x_{2}), \tag{20}\] and \[\mathcal{D}(x_{F},Q_{M},q_{T})\equiv\frac{1}{2}\left[\frac{d^{4}\sigma^{\uparrow}}{dx_{F}dQ_{M}d^{2}q_{T}}+\frac{d^{4}\sigma^{\downarrow}}{dx_{F}dQ_{M}d^{2}q_{T}}\right]=\frac{4\pi\alpha^{2}}{9sQ_{M}}\left(\frac{\exp[-q_{T}^{2}/\left(\langle k_{\perp 1}^{2}\rangle+\langle k_{\perp 2}^{2}\rangle\right)]}{\pi\left(\langle k_{\perp 1}^{2}\rangle+\langle k_{\perp 2}^{2}\rangle\right)}\right)\times\sum_{q}\frac{e_{q}^{2}}{x_{1}+x_{2}}f_{q/A}(x_{1})f_{\bar{q}/B}(x_{2}), \tag{21}\] and it can be further simplified as, \[A_{N}^{\sin(\phi_{\gamma}-\phi_{S})}(x_{F},Q_{M},q_{T})=\mathcal{B}_{0}(q_{T},m_{1})\frac{\sum_{q}\frac{e_{q}^{2}}{x_{1}+x_{2}}\mathcal{N}_{q}(x_{1})f_{q/A}(x_{1})f_{\bar{q}/B}(x_{2})}{\sum_{q}\frac{e_{q}^{2}}{x_{1}+x_{2}}f_{q/A}(x_{1})f_{\bar{q}/B}(x_{2})} \tag{22}\] where, \[\mathcal{B}_{0}(q_{T},m_{1})=\frac{q_{T}\sqrt{2e}}{m_{1}}\frac{Y_{1}(q_{T},k_{S},k_{\perp 2})}{Y_{2}(q_{T},k_{\perp 1},k_{\perp 2})} \tag{23}\] and, \[Y_{1}(q_{T},k_{S},k_{\perp 2})=\Biggl{(}\frac{\langle k_{S}^{2}\rangle^{2}}{\langle k_{\perp 2}^{2}\rangle\left(\langle k_{S}^{2}\rangle+\langle k_{\perp 2}^{2}\rangle\right)^{2}}\Biggr{)}\exp\bigg{(}\frac{-q_{T}^{2}}{\langle k_{S}^{2}\rangle+\langle k_{\perp 2}^{2}\rangle}\Biggr{)}, \tag{24}\] \[Y_{2}(q_{T},k_{\perp 1},k_{\perp 2})=\Biggl{(}\frac{1}{\langle k_{\perp 1}^{2}\rangle+\langle k_{\perp 2}^{2}\rangle}\Biggr{)}\exp\bigg{(}\frac{-q_{T}^{2}}{\langle k_{\perp 1}^{2}\rangle+\langle k_{\perp 2}^{2}\rangle}\Biggr{)}, \tag{25}\] \[\frac{1}{\langle k_{S}^{2}\rangle}=\frac{1}{m_{1}^{2}}+\frac{1}{\langle k_{\perp 1}^{2}\rangle}, \tag{26}\] with the assumption \(\langle k_{\perp 1}^{2}\rangle=\langle k_{\perp 2}^{2}\rangle=\langle k_{\perp}^{2}\rangle=0.25\) GeV\({}^{2}\) as in [3]. Through this azimuthal asymmetry, the Drell-Yan process allows one to probe quarks and antiquarks from the target and beam hadrons through the quark-antiquark annihilation process of interest, resulting in a dimuon pair in the detector. SIDIS only permits the measurement of a convolution of a TMD with a fragmentation function, whereas Drell-Yan allows the direct measurement of TMDs without the complications of fragmentation functions and final-state interactions. Coupled with its innate sensitivity to sea quarks, Drell-Yan is a critical process for determining the TMDs of the sea quarks. ## III Fitting \(\mathcal{N}_{q}(x)\) To obtain accurate three-dimensional tomographic information on quarks and gluons inside the nucleon, it is critical to extract TMDs with minimal model dependence and little to no unknown bias.
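Before moving to the fitting procedure, we note for concreteness that the kinematic prefactors \(\mathcal{A}_{0}\) of Eq. (9) and \(\mathcal{B}_{0}\) of Eqs. (23)-(26) defined above are simple closed-form functions of the Gaussian widths; a minimal numerical sketch using the width values quoted above:

```python
import numpy as np

def A0(z, phT, m1, kT2=0.57, pT2=0.12):
    """SIDIS prefactor A_0(z, p_hT, m_1) of Eq. (9), with <k_S^2> from Eq. (10);
    <k_perp^2> = 0.57 GeV^2 and <p_perp^2> = 0.12 GeV^2 from the HERMES fits."""
    kS2 = m1**2 * kT2 / (m1**2 + kT2)                            # Eq. (10)
    pref = (np.sqrt(2 * np.e) * z * phT / m1
            * (z**2 * kT2 + pT2) * kS2**2 / ((z**2 * kS2 + pT2)**2 * kT2))
    expo = np.exp(-phT**2 * z**2 * (kS2 - kT2)
                  / ((z**2 * kS2 + pT2) * (z**2 * kT2 + pT2)))
    return pref * expo

def B0(qT, m1, kT2=0.25):
    """DY prefactor B_0(q_T, m_1) of Eqs. (23)-(26), with
    <k_perp1^2> = <k_perp2^2> = 0.25 GeV^2 as assumed in the text."""
    kS2 = 1.0 / (1.0 / m1**2 + 1.0 / kT2)                        # Eq. (26)
    Y1 = kS2**2 / (kT2 * (kS2 + kT2)**2) * np.exp(-qT**2 / (kS2 + kT2))
    Y2 = 1.0 / (2 * kT2) * np.exp(-qT**2 / (2 * kT2))
    return (qT * np.sqrt(2 * np.e) / m1) * Y1 / Y2
```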
Fitting and statistical analysis tools such as MINUIT rely on \(\chi^{2}\) minimization or log-likelihood functions to compute the best-fit parameter values and uncertainties, including correlations between parameters. This class of algorithms has well-established statistical methods that have been used for decades in various scientific fields, making them a reliable and trusted tool. In frequentist statistics, the reliability of a \(\chi^{2}\) minimization fitting method can be evaluated through the concept of hypothesis testing. The fitting method minimizes the difference between the observed data and the expected theoretical model, expressed through a \(\chi^{2}\) statistic. The \(\chi^{2}\) statistic follows a known distribution, and the probability of obtaining a value as extreme as the observation can be directly calculated. The reliability of the \(\chi^{2}\) minimization fitting method depends greatly on the ability to accurately estimate the theoretical uncertainties and the degree to which the model approximates the observed data. When these conditions are met, the method can provide a reliable estimate of the parameters that describe the model and its uncertainties. However, chi-square fits can be sensitive to the choice of initial parameter values and may not always converge to the correct solution. Fitting with DNNs can provide considerable advantages and does not inherently sacrifice the statistical framework provided by chi-square fits; however, it is worth touching on some key attributes needed in the method in order to maintain statistical relevance and a clean interpretation of the resulting fits. To preserve the statistical robustness and reliability of traditional \(\chi^{2}\) minimization fitting when using a DNN, it is important to carefully consider the data quality, model selection, validation, interpretation, and testing criteria. The quality of the data used to train the DNN should be as high as possible to ensure the DNN learns the correct relationships between inputs and outputs. This is crucial because the reliability of the DNN is only as good as the quality of the data it is trained on. Quantifying differences between the training data and the real data used in the fit can be challenging and can lead to unknown biases and systematic errors. In the method used here, Monte Carlo data, which has been tuned and matched to the experimental data, is utilized. This is done by successively extracting information from the experimental data to impose on the generated Monte Carlo data and then using the improved Monte Carlo data to further refine the extraction technique. The choice of DNN architecture, activation functions, regularization techniques, and other hyperparameters should be carefully selected to minimize over-fitting and maximize generalization performance. Cross-validation techniques can be used to tune these hyperparameters and ensure the best possible fit to the data. The quality of the fit should be quantified with a metric that is well-defined and can be interpreted statistically. This could still be the \(\chi^{2}\) statistic, but it may also be one of a variety of possible loss functions. The trained DNN should be validated on an independent test dataset to ensure that it generalizes well to new data and that it does not over-fit the training data. This is critical because over-fitting can lead to an unreliable and unstable model. When interpreting the results of the DNN fit, it is important to carefully examine the relationships between the inputs and outputs.
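As an illustration of the validation strategy just described (independent validation data as a guard against over-fitting), one possible early-stopping loop is sketched below; the framework, loss, and hyperparameters are illustrative assumptions, not those of the final fits:

```python
import copy
import torch

def fit_with_early_stopping(model, train_loader, val_loader,
                            patience=20, max_epochs=2000, lr=1e-3):
    """Keep the weights minimizing the validation loss; stop once it has
    not improved for `patience` epochs. Residuals are weighted by the
    experimental errors, giving a chi^2-like loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_val, best_state, stale = float("inf"), None, 0
    for _ in range(max_epochs):
        model.train()
        for x, y, err in train_loader:
            opt.zero_grad()
            loss = (((model(x) - y) / err) ** 2).mean()
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum((((model(x) - y) / err) ** 2).mean().item()
                      for x, y, err in val_loader) / len(val_loader)
        if val < best_val:
            best_val, best_state, stale = val, copy.deepcopy(model.state_dict()), 0
        else:
            stale += 1
            if stale >= patience:
                break
    model.load_state_dict(best_state)
    return model
```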
To better understand how the DNN makes its predictions, techniques such as feature importance and attention mechanisms can be used. While DNNs can have a reliable statistical interpretation, it requires a more detailed analysis than traditional algorithms like MINUIT. Directed testing of the model predictions and verification of reliability through multiple trials is crucial. Studies to test accuracy and precision are useful, along with quantifying the robustness of the extraction method itself once a DNN architecture has been chosen. In this regard, it is important to prove that the method can be flexible as well as correct. It is not normally useful to have a model and method that yields highly accurate results but only for a fine slice of phase space under particular conditions. Conventional \(\chi^{2}\)-minimization routines are limited in their flexibility and applicability to more complex problems because they assume a specific functional form of the relationship between inputs and outputs. In contrast, DNNs can learn complex and nonlinear relationships, making them suitable for tasks that require some degree of abstraction and where no specific functional form of the relationship is known. DNNs are proven to be universal approximators and can handle large amounts of data while generalizing well to new data, which improves their accuracy and robustness. This is especially helpful in the present application, where DNNs can be used to build better models with new experimental data as it becomes available. The ambiguity of \(\mathcal{N}_{q}(x)\) in the literature is one of the main motivations for applying the DNN technique to extract Sivers functions. Our goal in this paper is solely to lay out the procedure in order to strategically utilize the applications of DNNs. In most Sivers function extractions in the literature [24; 25; 26; 43; 44; 45; 46; 47], the treatment of \(\mathcal{N}_{q}(x)\) differs either by its parameterization or by the way \(q\) is treated in \(\mathcal{N}_{q}(x)\). For example, in [9; 10; 25; 26], all anti-quarks (\(\bar{u}\), \(\bar{d}\), and \(\bar{s}\)) were treated the same (or combined) and referred to as the "unbroken sea." Our first step is a generalization of the MINUIT fit parameterization of \(\mathcal{N}_{q}(x)\) for all light quark flavors, and Sec. IV.2 summarizes the corresponding MINUIT fit results using _iminuit_ (the Python implementation of MINUIT) [48]. In these fits we use the same dataset as in [4], and obtain the fit parameters for \(\mathcal{N}_{q}(x)\) defined as,
\[\mathcal{N}_{q}(x)=N_{q}x^{\alpha_{q}}(1-x)^{\beta_{q}}\frac{(\alpha_{q}+\beta_{q})^{(\alpha_{q}+\beta_{q})}}{\alpha_{q}^{\alpha_{q}}\beta_{q}^{\beta_{q}}}. \tag{27}\]
This expression is generalized for all light quark flavors, where \(N_{q}\) is a scalar for quark flavor \(q\). After our consistency check, this parameterization is used as a pseudodata generator to train and test the DNN model. We emphasize that our original MINUIT fit parameterization is then used as a tool to demonstrate that the DNN model is capable of predicting (or confirming) the 19-parameter model used to generate pseudodata, as illustrated in Sec. IV.3. After building confidence in the DNN model from these preliminary tests, we move towards extracting the Sivers functions from experimental data from the SIDIS process with a polarized-proton target (see Sec. IV.4).
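For reference, Eq. (27) in code form. A minimal sketch, with the Fit 4 \(u\)-quark central values from Table 2 used purely for illustration:

```python
import numpy as np

def N_q(x, N, alpha, beta):
    """Eq. (27): the x-dependent modulation of the Sivers function.

    The normalization factor makes the x-dependent part equal 1 at its
    maximum, x = alpha / (alpha + beta), so N sets the overall size.
    """
    norm = (alpha + beta) ** (alpha + beta) / (alpha**alpha * beta**beta)
    return N * x**alpha * (1.0 - x) ** beta * norm

x = np.linspace(0.01, 0.6, 60)
Nu = N_q(x, N=0.89, alpha=2.75, beta=20.0)  # u-quark, Fit 4 central values
```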
Previous work on global fits to SIDIS data considered the data from polarized-proton targets, polarized-deuteron targets, and polarized \({}^{3}\)He gas targets as a combined dataset. In [9], isospin symmetry was assumed for \(f_{1T}^{\perp u}\) and \(f_{1T}^{\perp d}\) for the COMPASS2009 [49] and COMPASS2017 [50] datasets. As the nuclear effects on the Sivers functions are not very well understood, separately extracting the same observable with different types of polarized targets that contain different nuclear effects could provide some insight. Significant data would be required for each target type. The DNN's unique capacity to manage abstraction allows for the absorption of additional complexity in a semi-model-independent way. To obtain the most information from the data and fit results, we attempt to decompose some of this abstraction using the following conditions:

* DNN fits to SIDIS data from proton and deuteron targets are performed independently to obtain separate models.
* No kinematic cuts are applied, to take full advantage of the available data and to allow the DNN to extract all possible features.
* A Sivers function for each light-quark flavor is obtained, to ensure that the \(SU(3)_{\text{flavor}}\)-breaking effects in QCD are also contained.

The technique to achieve the extraction under the aforementioned conditions is somewhat novel, so explicit details are provided step by step for clarity in the next section.

## IV The extraction technique

We use a DNN to model \(\mathcal{N}_{q}(x)\) in Eq. (4), in the factorized form \(\mathcal{N}_{q}(x)h(k_{\perp})f_{q/p}(x,k_{\perp})\). This approach assumes the validity of Eq. (4), leading to model dependencies associated with this assertion and with the Gaussian interpretation of the \(k_{\perp}\) distributions of the partons. This introduces a bias into the analysis, but it is a known bias, and something that can be revisited after the method has been thoroughly explored. As such, we do not consider this to be a fully model-independent extraction, but every attempt is made to make it minimally model-dependent. The DNN treatment of \(\mathcal{N}_{q}(x)\) enables the flexibility of handling all the light quark flavors \(q=\{u,\bar{u},d,\bar{d},s,\bar{s}\}\) independently. The method presented is somewhat analogous to the approach of treating \(\mathcal{N}_{q}(x)\) as a parameterized function of \(x\), with the added flexibility of the DNN. It is natural to extend this study towards a symbolic regression of \(\mathcal{N}_{q}(x)\), but that is beyond the scope of the present work. Additionally, we hope the DNN treatment of just the \(\mathcal{N}_{q}(x)\) term, among the other possible decompositions of the Sivers function, can shed light on how best to move forward and take full advantage of modern computational tools.

Figure 4: A generic representation of the DNN architecture for \(\mathcal{N}_{q}(x)\), where \(q=\{u,d,s,\bar{u},\bar{d},\bar{s}\}\), and \(a_{m}^{(n)}\) represents node \(m\) in hidden layer \(n\). The figure represents only up to \(n=3\) for demonstration purposes.

We use a relationship for \(\mathcal{N}_{q}(x)\), similar to the seminal work [4], as a tool to generate our pseudodata for testing accuracy and reproducibility only. Although this definition is not used in any aspect of the final DNN fit results, we acknowledge the bias in the approach, which we attempt to minimize and account for through studies of systematic uncertainty in the extraction methodology.
The generic feedforward DNN structure for \(\mathcal{N}_{q}(x)\) that we use in this work is represented in Fig. 4. As we consider the \(SU(3)_{\text{flavor}}\) breaking in QCD, we have \(\mathcal{N}_{u}(x)\), \(\mathcal{N}_{\bar{u}}(x)\), \(\mathcal{N}_{d}(x)\), \(\mathcal{N}_{\bar{d}}(x)\), \(\mathcal{N}_{s}(x)\), and \(\mathcal{N}_{\bar{s}}(x)\) to handle the six light-quark flavors independently. Bjorken-\(x\) is the only input to the initial layer of each \(\mathcal{N}_{q}(x)\), and the final layer is a single-node output. The \(m_{1}\) in \(\mathcal{A}_{0}(z,p_{hT},m_{1})\) as defined in Eq. (9) is treated as a free parameter, with its initialization obtained from our first \(\chi^{2}\)-minimization fit (discussed in Sec. IV.2), and then allowed to vary throughout the DNN training process with SIDIS data, as shown in Fig. 3. The DNN model results are then used to infer the projections for both SIDIS kinematics and DY kinematics (see Fig. 13).

A deep feedforward architecture is used, with hidden layers of multiple nodes whose weights are initialized randomly from a Gaussian centered at zero with a standard deviation of 0.1. Nonlinearity is introduced into the network through the choice of activation function, and this selection can have a substantial impact on DNN performance and training dynamics. We chose the \(ReLU6\) activation function, a variant of the Rectified Linear Unit (\(ReLU\)). The \(ReLU6\) activation function has been shown empirically to perform better under low-precision conditions by encouraging the model to learn sparse features earlier, which is beneficial for learning complex patterns and relationships from the experimental data. We also use _Least Absolute Shrinkage and Selection Operator_ (LASSO) regression, i.e., L1 regularization, a technique that prevents overfitting and improves the model's performance and generalization ability while encouraging sparsity and feature selection. L1 regularization encourages sparsity in the activations by adding a penalty term to the loss function that is proportional to the absolute value of the weights [51]. With this regularization term, the most important inputs receive the largest weights, so that noisy or redundant information is discarded. The strength of the regularization is controlled by the magnitude of the regularization coefficient, which is set to \(10^{-12}\). Additionally, we use a dynamically decreasing learning rate: the learning rate is automatically reduced by 10% if the _training loss_ has not decreased within the last 200 epochs\({}^{1}\) (i.e., _patience_ = 200). The optimizer used was _Adam_, and the loss function used was _Mean Squared Error_. During the hyperparameter optimization process, there were slight deviations in the number of layers, nodes per layer, initial learning rate, batch size, and number of epochs, but the basics of the scheme just described remain consistent for all DNNs used.

Footnote 1: An epoch is a complete cycle of the passing of training data through the algorithm.
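A minimal sketch of one such \(\mathcal{N}_{q}(x)\) sub-network and learning-rate schedule, assuming TensorFlow/Keras; the layer count and width follow the \(\mathcal{C}_{0}^{i}\) column of Table 3, and the function name is ours for illustration, not the production code:

```python
import tensorflow as tf

def build_Nq_network(hidden_layers=5, nodes=256, l1_coeff=1e-12, lr=1e-4):
    """One N_q(x) sub-network: Bjorken-x in, a single scalar out."""
    init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.1)
    reg = tf.keras.regularizers.l1(l1_coeff)
    inputs = tf.keras.Input(shape=(1,), name="bjorken_x")
    h = inputs
    for _ in range(hidden_layers):
        h = tf.keras.layers.Dense(nodes, activation=tf.nn.relu6,
                                  kernel_initializer=init,
                                  kernel_regularizer=reg)(h)
    outputs = tf.keras.layers.Dense(1, kernel_initializer=init)(h)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

# Reduce the learning rate by 10% if the training loss has not
# improved within the last 200 epochs (patience = 200).
lr_schedule = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="loss", factor=0.9, patience=200)
```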
Our strategy is to first perform an exercise using only pseudodata to verify the extraction method that will ultimately be used on the real experimental data. First, we devise a _generating function_ for the SIDIS Sivers asymmetry data using a conventional \(\chi^{2}\)-minimization routine (MINUIT in this case), without following the popular assumption of an "unbroken sea" [4], in order to generalize the treatment of quarks and antiquarks. We perform a series of conventional MINUIT fits step-wise to obtain the final 19 parameters for the case of broken \(SU(3)_{\text{flavor}}\) symmetry in QCD. Then, we produce pseudodata (or replicas) for the SIDIS asymmetry by sampling from the mock experimental errors using the _generating function_, with kinematics and binning in \(x\), \(z\), and \(p_{hT}\) as in the experimental data. A DNN model is then constructed with all hyperparameters tuned in order to achieve the highest possible accuracy and precision. Here our nomenclature becomes quite specific, and we refer to the resulting distribution of DNN fits as a _DNN model_. The first model obtained with the method for a particular set of data is referred to as the **First Iteration**. At this stage, we use the distribution of fit results to obtain the mean and the error band from the initial DNN model, and re-parameterize the _generating function_ so that it produces more realistic pseudodata. The DNN fits are then performed again, improving the quality (both accuracy and precision) of the resulting fits, to produce a **Second Iteration** DNN model. The iterations can be repeated until the resulting model no longer improves within the experimental uncertainties. In this way, the DNN model approaches the best approximation of the Sivers functions in comparison to the _true_ values put into the _generating function_.

After confirming that the method works well, the extraction of the Sivers function using the SIDIS experimental data is performed. We treat the data for a polarized proton target and deuteron target separately for two reasons. First, fitting these together would introduce another bias that would need to be managed directly; this is the case even assuming isospin symmetry in the \(u\)- and \(d\)-quarks' Sivers functions. Second, our approach leaves open the possibility to explore the nuclear dependence of the Sivers functions. The construction of the DNN models for proton and deuteron is analogous. To perform the fit, another **First Iteration**, as previously described, is carried out by developing and tuning a DNN model using the data from the real experiment rather than pseudodata from the generating function. In the subsequent iterations, we use the generating function in order to tune the hyperparameters to achieve the highest possible quality of fit in comparison with the results from the **First Iteration**. Once a tuned model is obtained, we perform an extended study evaluating the algorithmic uncertainty\({}^{2}\) as well as the systematic uncertainty of the DNN extraction method.

Footnote 2: Algorithmic uncertainty is the degree of increase in the distribution of the resulting fits that is not directly from propagated experimental error.

To elaborate on the pedagogy of this method, we organize the remainder of this section into the following subsections: (IV.1) Data selection, (IV.2) MINUIT fits for the case of \(SU(3)_{\text{flavor}}\), (IV.3) DNN model training with pseudodata, (IV.4) DNN model training with real experimental data.

### Data selection

No data points were left out of our dataset intentionally because they were suspect or classified as outliers. No kinematic cuts to exclude data points were applied.
There is more world data available that could be included in our fit. Still, we chose to limit our data based on the similarity of process and experimental configuration to preserve consistency in this trial global fit with DNNs. In this regard, we focus our attention on the fixed-target SIDIS and DY data. For the proton DNN fits, some data points are left out of the training process for validation studies, but they are reincorporated after the appropriate number of epochs is determined for optimal model performance. For the neutron DNN fits, the polarized \({}^{3}\)He data from Jefferson Lab [52] is used to test the new projections of the DNN model trained on the deuteron COMPASS data only. Table 1 summarizes the kinematic coverage, the number of data points, and the reaction types of the datasets considered in this work. In addition to the SIDIS datasets that are used in the fits, the polarized DY dataset from the COMPASS experiment is also listed in Table 1, as we demonstrate the predictive capability of the DNN model by comparing the projections with the real data points. The DY projections are made using the trained SIDIS DNN model, assuming a sign change expected from _conditional_ universality. For the case of training the DNN model related to the proton target, we use the HERMES2009 [53], COMPASS2015 [54], and HERMES2020 [55] data points associated with 1D kinematic binning, leaving the HERMES2020 [55] data associated with the 3D kinematic binning to compare with the projections from the trained model. The COMPASS2009 [49] dataset with a polarized-deuteron target is used for the neutron Sivers extraction as a separate DNN model. For the initial \(\chi^{2}\)-minimization fit with MINUIT, the same datasets are used as in [4] for consistency: HERMES2009 [53], COMPASS2009 [49], and COMPASS2015 [54]. This fit is described in the next subsection.

\begin{table}
\begin{tabular}{c c c c}
Dataset & Kinematic coverage & Reaction & Points \\ \hline
HERMES2009 & \(0.023<x<0.4\) & \(p^{\uparrow}+\gamma^{*}\rightarrow\pi^{+}\) & 21 \\
(SIDIS) & \(0.2<z<0.7\) & \(p^{\uparrow}+\gamma^{*}\rightarrow\pi^{-}\) & 21 \\
[53] & \(0.1<p_{hT}<0.9\) & \(p^{\uparrow}+\gamma^{*}\rightarrow\pi^{0}\) & 21 \\
 & \(Q^{2}>1\) GeV\({}^{2}\) & \(p^{\uparrow}+\gamma^{*}\to K^{+}\) & 21 \\
 & & \(p^{\uparrow}+\gamma^{*}\to K^{-}\) & 21 \\ \hline
HERMES2020 & \(0.023<x<0.6\) & \(p^{\uparrow}+\gamma^{*}\rightarrow\pi^{+}\) & 27, **64** \\
(SIDIS) & \(0.2<z<0.7\) & \(p^{\uparrow}+\gamma^{*}\rightarrow\pi^{-}\) & 27, **64** \\
[55] & \(0.1<p_{hT}<0.9\) & \(p^{\uparrow}+\gamma^{*}\rightarrow\pi^{0}\) & 27 \\
 & \(Q^{2}>1\) GeV\({}^{2}\) & \(p^{\uparrow}+\gamma^{*}\to K^{+}\) & 27, **64** \\
 & & \(p^{\uparrow}+\gamma^{*}\to K^{-}\) & 27, **64** \\ \hline
COMPASS2015 & \(0.006<x<0.28\) & \(p^{\uparrow}+\gamma^{*}\rightarrow\pi^{+}\) & 26 \\
(SIDIS) & \(0.2<z<0.8\) & \(p^{\uparrow}+\gamma^{*}\rightarrow\pi^{-}\) & 26 \\
[54] & \(0.15<p_{hT}<1.5\) & \(p^{\uparrow}+\gamma^{*}\to K^{+}\) & 26 \\
 & \(Q^{2}>1\) GeV\({}^{2}\) & \(p^{\uparrow}+\gamma^{*}\to K^{-}\) & 26 \\ \hline
COMPASS2009 & \(0.006<x<0.28\) & \(d^{\uparrow}+\gamma^{*}\rightarrow\pi^{+}\) & 26 \\
(SIDIS) & \(0.2<z<0.8\) & \(d^{\uparrow}+\gamma^{*}\rightarrow\pi^{-}\) & 26 \\
[49] & \(0.15<p_{hT}<1.5\) & \(d^{\uparrow}+\gamma^{*}\to K^{+}\) & 26 \\
 & \(Q^{2}>1\) GeV\({}^{2}\) & \(d^{\uparrow}+\gamma^{*}\to K^{-}\) & 26 \\ \hline \hline
JLAB2011 & \(0.156<x<0.396\) & \({}^{3}He^{\uparrow}+\gamma^{*}\rightarrow\pi^{+}\) & 4 \\
(SIDIS) [52] & \(0.50<z<0.58\) & \({}^{3}He^{\uparrow}+\gamma^{*}\rightarrow\pi^{-}\) & 4 \\
 & \(0.24<p_{hT}<0.43\) & & \\
 & \(1.3<Q^{2}<2.7\) GeV\({}^{2}\) & & \\ \hline
COMPASS2017 & \(0.1<x_{N}<0.25\) & \(p^{\uparrow}+\pi^{-}\to l^{+}l^{-}X\) & 15 \\
(DY) [50] & \(0.3<x_{\pi}<0.7\) & & \\
 & \(4.3<Q_{M}<8.5\) GeV & & \\
 & \(0.6<q_{T}<1.9\) GeV & & \\ \hline
\end{tabular}
\end{table}
Table 1: The SIDIS and DY datasets considered in this work. The DY data is not included in the fits; it is used to demonstrate the predictive capability of the DNN model: we make projections using the trained SIDIS DNN model, assuming a sign change, to predict the real experimental DY data points. For the HERMES2020 dataset, data is available with both 1D and 3D kinematic bins; the 3D bin numbers are indicated in bold font.

### MINUIT fits for \(SU(3)_{\text{flavor}}\)

The analysis begins with a \(\chi^{2}\)-minimization fit with MINUIT, similar to the approach in [4], except that we expand the number of parameters to treat each of the light-quark flavors separately. The results of the MINUIT fits are shown in Table 2. Fit 1 lists the original fit results from Anselmino _et al._, taken directly from [4]. Here \(\mathcal{N}_{q}(x)\) for the \(u\) and \(d\) quarks is given by Eq. (27), but \(\mathcal{N}_{\bar{q}}(x)=N_{\bar{q}}\) for antiquarks. In this fit there are three parameters (\(N_{q}\), \(\alpha_{q}\), and \(\beta_{q}\)) for each quark flavor and a single \(N_{\bar{q}}\) for each antiquark, plus \(m_{1}\), resulting in a 9-parameter fit. Fit 2 is a test to reproduce the same parameterization as in Fit 1. We note that in Fit 2 none of the 9 parameters is fixed or has bounds imposed. Both of these first two columns consider only \(u\) and \(d\) quarks and antiquarks. The Fit 1 parameters were used as the initial values to perform Fit 2.
The difference in these two sets of fit parameters demonstrates the challenge of systematic consistency with this method, though some parameters match reasonably well. For Fit 3 we use the same convention but add in the strange quark, so there are four additional parameters, \(N_{s}\), \(\alpha_{s}\), \(\beta_{s}\), and \(N_{\bar{s}}\), which leads to a 13-parameter fit. In order to initialize the 13 parameters in Fit 3, we use the corresponding values for those parameters from Fit 2 and zeros for the rest. Fit 4 uses Eq. (27) for both quarks and antiquarks, so that the treatment of all three light-quark flavors is the same. In addition to the parameters from Fit 3, Fit 4 contains six more parameters for the antiquarks. The result of Fit 4 leads to a larger \(N_{\bar{s}}\) value to compensate for the fact that \(\alpha_{\bar{s}}\) and \(\beta_{\bar{s}}\) are now present in the fit. However, the motivation behind performing Fit 4 in this way is to generalize the \(\mathcal{N}_{q}(x)\) in a flavor-independent fashion for both quarks and antiquarks.
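For orientation, the following is a runnable toy version of such a \(\chi^{2}\) fit with _iminuit_: a single flavor and synthetic data rather than the full 19-parameter fit, so all numbers here are illustrative only.

```python
import numpy as np
from iminuit import Minuit

rng = np.random.default_rng(0)

def N_q(x, N, alpha, beta):  # Eq. (27)
    norm = (alpha + beta) ** (alpha + beta) / (alpha**alpha * beta**beta)
    return N * x**alpha * (1 - x) ** beta * norm

# Synthetic single-flavor "data" with small absolute errors
x = np.linspace(0.02, 0.4, 21)
err = np.full_like(x, 0.01)
data = N_q(x, 0.89, 2.75, 20.0) + rng.normal(0.0, err)

def chi2(N, alpha, beta):
    return np.sum(((data - N_q(x, N, alpha, beta)) / err) ** 2)

m = Minuit(chi2, N=0.5, alpha=2.0, beta=15.0)
m.errordef = Minuit.LEAST_SQUARES  # errordef = 1 for a chi2 cost
m.migrad()                         # minimize
m.hesse()                          # parameter uncertainties
print(m.values, m.errors, m.fval / (len(x) - 3))
```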
Fit 4 is the final fit, and we will use it to generate pseudodata for testing the DNN fits and for calculating the model's accuracy.

### DNN Method Testing

We develop a systematic method of constructing, optimizing, and testing the DNN fits by using pseudodata to ensure a quality extraction from the experimental data. Our approach uses a combination of Monte Carlo sampling and synthetic data generation. The pseudodata points are randomly generated by sampling within multi-Gaussian distributions centered around each experimental data point, with variance given by the experimental uncertainty. Many pseudodata DNN fits (instances) are performed together to obtain the uncertainty of the resulting DNN model (mean and distribution). The general approach is to use existing experimental data to parameterize a fit function and then use it to generate new synthetic data (replicas) with similar characteristics. The pseudodata is generated with a known Sivers function so that the extraction technique can be explicitly tested. An error bar is assigned to each new data point, taken directly from the experimental uncertainties reported for the complete set of kinematic bins. This approach aims to produce pseudodata that simulates the experimental data as closely as possible, with particular sensitivity to phase space, so that the test metrics are also relevant for the real experimental extraction. To achieve this, the pseudodata generator must be very well tuned to the kinematic range of the experimental data; hence, the _generating function_ contains as much feature-space information as possible. It is important to emphasize here that the metrics we use to quantify the improvement in the **Second Iteration** compared to the **First Iteration** are sensitive to phase space. The _accuracy_ (proximity of the mean of the DNN fits to the _true_ Sivers) is defined as,
\[\epsilon_{q}(x,k_{\perp})=\left(1-\frac{|\Delta^{N}f^{(\text{true})}_{q/p^{\uparrow}}-\Delta^{N}f^{(\text{mean})}_{q/p^{\uparrow}}|}{\Delta^{N}f^{(\text{true})}_{q/p^{\uparrow}}}\right)\times 100\%, \tag{28}\]
and the _precision_ (the standard deviation of the replicas) as,
\[\sigma_{q}(x,k_{\perp})=\sqrt{\frac{\sum_{i}\left(\Delta^{N}f^{(i)}_{q/p^{\uparrow}}-\Delta^{N}f^{(\text{mean})}_{q/p^{\uparrow}}\right)^{2}}{N}}. \tag{29}\]
The _generating function_ used to produce the _true_ value of the Sivers is improved in the process of optimizing the DNN hyperparameters. This approach improves the _generating function_ and the DNN fit with each iteration. As a result, more realistic data can be generated in each iteration, which enables better hyperparameter optimization and testing for the experimental data in the subsequent iteration. Note that "experimental data" here still refers to pseudodata replicas that are generated using the real experimental data rather than the _generating function_. In the pseudodata test, the same number of replicas is used in the **First Iteration** and in the **Second Iteration**; the number of replicas should be kept the same across consecutive iterations to control statistical error variation from the replicas. Accurately propagating the experimental uncertainty using the replica approach requires a sufficient number of replicas, so that only negligible statistical error from the replicas is added.
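A minimal sketch of the replica sampling and of the two metrics in Eqs. (28) and (29), assuming NumPy and a replica array of shape (N_replicas, N_points):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_replica(A_data, A_err):
    """One pseudodata replica: each measured asymmetry point smeared
    by a Gaussian with width equal to its experimental uncertainty."""
    return rng.normal(A_data, A_err)

def accuracy(f_true, f_mean):
    """Eq. (28): proximity of the replica mean to the true Sivers (%)."""
    return (1.0 - np.abs(f_true - f_mean) / f_true) * 100.0

def precision(f_replicas):
    """Eq. (29): pointwise standard deviation of the N replica curves."""
    f_mean = f_replicas.mean(axis=0)
    return np.sqrt(((f_replicas - f_mean) ** 2).mean(axis=0))
```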
For the present study, our pseudodata uncertainty is simplified and is represented by a single error bar which contains the experimental statistical error and systematic error provided by the data publication.

\begin{table}
\begin{tabular}{c c c c c}
Parameter & Fit 1 & Fit 2 & Fit 3 & Fit 4 \\ \hline
\(m_{1}\) & 0.8\(\pm\)0.9 & 3.87\(\pm\)0.31 & 7.0\(\pm\)0.6 & 7.0\(\pm\)4.0 \\
\(N_{u}\) & 0.18\(\pm\)0.04 & 0.475\(\pm\)0.03 & 0.89\(\pm\)0.05 & 0.89\(\pm\)0.06 \\
\(\alpha_{u}\) & 1.0\(\pm\)0.6 & 2.41\(\pm\)0.16 & 2.78\(\pm\)0.17 & 2.75\(\pm\)0.11 \\
\(\beta_{u}\) & 6.6\(\pm\)5.2 & 15.0\(\pm\)1.4 & 19.4\(\pm\)1.6 & 20.0\(\pm\)2.0 \\
\(N_{\bar{u}}\) & -0.01\(\pm\)0.03 & -0.032\(\pm\)0.017 & -0.07\(\pm\)0.06 & -0.12\(\pm\)0.06 \\
\(\alpha_{\bar{u}}\) & - & - & - & 0.4\(\pm\)0.5 \\
\(\beta_{\bar{u}}\) & - & - & - & 20.0\(\pm\)16.0 \\
\(N_{d}\) & -0.52\(\pm\)0.20 & -1.25\(\pm\)0.19 & -2.33\(\pm\)0.31 & -2.4\(\pm\)0.4 \\
\(\alpha_{d}\) & 1.9\(\pm\)1.5 & 1.5\(\pm\)0.4 & 2.5\(\pm\)0.4 & 2.7\(\pm\)0.6 \\
\(\beta_{d}\) & 10\(\pm\)11 & 7.0\(\pm\)2.6 & 15.8\(\pm\)3.2 & 17.0\(\pm\)4.0 \\
\(N_{\bar{d}}\) & -0.06\(\pm\)0.06 & -0.05\(\pm\)0.11 & -0.29\(\pm\)0.27 & -0.7\(\pm\)0.5 \\
\(\alpha_{\bar{d}}\) & - & - & - & 1.5\(\pm\)0.6 \\
\(\beta_{\bar{d}}\) & - & - & - & 20\(\pm\)17 \\
\(N_{s}\) & - & - & -14.0\(\pm\)10.0 & -20.0\(\pm\)40.0 \\
\(\alpha_{s}\) & - & - & 4.9\(\pm\)3.3 & 4.7\(\pm\)3.0 \\
\(\beta_{s}\) & - & - & 3.0\(\pm\)4.0 & 2.3\(\pm\)3.1 \\
\(N_{\bar{s}}\) & - & - & -0.1\(\pm\)0.2 & 20.0\(\pm\)5.0 \\
\(\alpha_{\bar{s}}\) & - & - & - & 9.5\(\pm\)1.4 \\
\(\beta_{\bar{s}}\) & - & - & - & 20.0\(\pm\)14.0 \\ \hline
\(\chi^{2}/N_{data}\) & 1.29 & 1.59 & 1.69 & 1.66 \\
\end{tabular}
\end{table}
Table 2: Collection of MINUIT fit results. Fit 1: original results from Anselmino _et al._ [4]; Fit 2: re-fit similar to [4]; Fit 3: fit results including strange quarks; Fit 4: fit results with the same treatment for all three light-quark flavors.
The optimized hyperparameter configurations are summarized in Table 3, where \(\mathcal{C}_{0}^{i}\) and \(\mathcal{C}_{0}^{f}\) denote results from the pseudodata produced by the _generating function_, \(\mathcal{C}_{p}^{i}\) and \(\mathcal{C}_{p}^{f}\) results from SIDIS data from experiments with a polarized-proton target, and \(\mathcal{C}_{d}^{i}\) and \(\mathcal{C}_{d}^{f}\) results from SIDIS data from experiments with a polarized-deuteron target; \(i\) and \(f\) indicate the **First Iteration** and **Second Iteration**, respectively. The listed learning rate (multiplied by \(10^{-4}\)) is the initial learning rate, as a dynamically decreasing learning rate is used (explained in Sec. IV). The accuracy \(\varepsilon_{q}(x,k_{\perp})\) is defined in Eq. (28), and the results in the table correspond to the maximum deviation of the mean of the replicas from the true values, \(\varepsilon_{q}^{max}\);
whereas the precision \(\sigma_{q}(x,k_{\perp})\) is defined in Eq. (29), and the results are the maximum standard deviations of the replicas, \(\sigma_{q}^{max}\), in units of \(\times 10^{-3}\). It is worth noting the improvement in both _accuracy_ \(\varepsilon_{q}^{max}\) and _precision_ \(\sigma_{q}^{max}\), which can be observed from the closeness of the solid line (mean of the 1000 DNN replicas) to the dashed line (_generating function_) for quarks (upper plots) and antiquarks (lower plots). The improvement of the DNN model in each case is significant to the point where it is difficult to distinguish the solid line from the corresponding dashed line, indicating a high degree of accuracy and precision. We also observed that the training loss in the **Second Iteration** is about one order of magnitude smaller than that from the **First Iteration**.

### DNN model from real data

In contrast to the testing of the DNN model with pseudodata from the _generating function_, we now describe the steps to apply the DNN fit method to real experimental data. The pseudodata test from Sec. IV.3 uses the combined proton and deuteron data, as in all previous work on global fits of the Sivers function. In the following extraction with real experimental data, the proton and deuteron data are fitted separately. To take full advantage of the information provided by the model testing in the previous section, the steps from Sec. IV.3 are performed again, separately for proton and deuteron data. The starting hyperparameters for the first DNN fit in this section, including the architecture, initial learning rate, batch size, and optimal number of epochs, are all determined from the best accuracy and precision obtained with the well-tuned pseudodata from the generating function in each case. This provides enough information on the initial hyperparameters that even the **First Iteration** is a quality fit. In the following steps, two distinct DNN models are developed: one for the _proton_ quark Sivers asymmetry and one for the _neutron_ quark Sivers asymmetry. The following procedure is common to both.

1. **First Iteration**: construct a DNN fit and tune its hyperparameters by training with the experimental data. Use 10% of the data for validation in each epoch.\({}^{5}\)

Footnote 5: For this section we list this as **First Iteration**, though the real experimental data has already been fit to improve the _generating function_ for the pseudodata test. With every subsequent iteration, the pseudodata test and the DNN fit to the experimental data improve.

2. Identify the optimum number of epochs as the point where the validation loss exceeds the training loss (see Fig. 7 as an example).
3. Perform a DNN fit, with the optimized hyperparameters from Step 1 and the number of epochs determined from Step 2, using all the data without leaving any for validation.
4. Improve the _generating function_:
   1. Use the tuned DNN model from Step 3 to infer the asymmetry over the 3D kinematics \((x,z,p_{hT})\) in fine bins.
   2. Perform a MINUIT fit to obtain the new _generating function_.
   3. Produce pseudodata for the SIDIS asymmetry using the _generating function_ from the previous step.
5. Perform Step 1 and Step 2 with the pseudodata from Step 4 (from the improved _generating function_).
6. **Second Iteration**: perform a DNN fit with the optimized hyperparameters from Step 5, using the experimental data without leaving any data for validation.
7. Compare the Sivers functions extracted at the **First Iteration** vs the **Second Iteration** in terms of the _accuracy_, the _precision_, and the magnitude of the _loss function_ at the final epoch.

Although the architectural specifics such as the number of hidden layers, nodes per layer, and learning rate may be modified in the hyperparameter optimization step, the number of epochs and the number of replicas remain the same. The overall feedforward architecture remains consistent as well, for simplicity. Table 3 shows the optimized hyperparameter configurations, represented as \(\mathcal{C}_{\{0,p,d\}}^{\{i,f\}}\), where \(i,f\) represent the **First Iteration** and the **Second Iteration**, and \(\{0,p,d\}\) represent fits with _pseudodata_, with _proton_ data, and with _deuteron_ data, respectively.

\begin{table}
\begin{tabular}{c c c c c c c}
Hyperparameter & \(\mathcal{C}_{0}^{i}\) & \(\mathcal{C}_{0}^{f}\) & \(\mathcal{C}_{p}^{i}\) & \(\mathcal{C}_{p}^{f}\) & \(\mathcal{C}_{d}^{i}\) & \(\mathcal{C}_{d}^{f}\) \\ \hline
Hidden layers & 5 & 7 & 5 & 7 & 5 & 8 \\
Nodes/layer & 256 & 256 & 550 & 550 & 256 & 256 \\
Learning rate & 1 & 0.125 & 5 & 1 & 10 & 1 \\
Batch size & 200 & 256 & 300 & 300 & 100 & 100 \\
Number of epochs & 1000 & 1000 & 300 & 300 & 200 & 200 \\
Training loss & 0.6 & 0.05 & 1.5 & 1 & 2 & 1 \\ \hline
\(\varepsilon_{u}^{max}\) & 95.67 & 99.27 & 55.21 & 94.04 & 56.80 & 93.02 \\
\(\varepsilon_{\bar{u}}^{max}\) & 42.62 & 98.09 & 52.57 & 96.70 & 34.83 & 91.40 \\
\(\varepsilon_{d}^{max}\) & 80.46 & 98.89 & 55.69 & 93.13 & 52.44 & 89.27 \\
\(\varepsilon_{\bar{d}}^{max}\) & 74.59 & 97.08 & 55.37 & 95.44 & 46.60 & 92.58 \\
\(\varepsilon_{s}^{max}\) & 45.53 & 79.27 & 49.54 & 90.64 & 36.34 & 93.41 \\
\(\varepsilon_{\bar{s}}^{max}\) & 59.27 & 91.13 & 33.89 & 82.51 & 65.57 & 91.45 \\ \hline
\(\sigma_{u}^{max}\) & 3 & 0.1 & 5 & 2 & 2 & 0.4 \\
\(\sigma_{\bar{u}}^{max}\) & 2 & 0.2 & 6 & 2 & 8 & 2 \\
\(\sigma_{d}^{max}\) & 10 & 1 & 20 & 6 & 2 & 1 \\
\(\sigma_{\bar{d}}^{max}\) & 7 & 4 & 20 & 8 & 7 & 1 \\
\(\sigma_{s}^{max}\) & 2 & 0.2 & 4 & 1 & 6 & 2 \\
\(\sigma_{\bar{s}}^{max}\) & 1 & 0.1 & 4 & 2 & 6 & 3 \\ \hline
\end{tabular}
\end{table}
Table 3: The summary of the optimized sets of hyperparameters. \(\mathcal{C}_{0}^{i}\) and \(\mathcal{C}_{0}^{f}\) denote results from the pseudodata from the generating function; \(\mathcal{C}_{p}^{i}\) and \(\mathcal{C}_{p}^{f}\) results from SIDIS data from experiments with a polarized-proton target; and \(\mathcal{C}_{d}^{i}\) and \(\mathcal{C}_{d}^{f}\) results from SIDIS data from experiments with a polarized-deuteron target, where \(i\) and \(f\) indicate the **First Iteration** and **Second Iteration**, respectively. The initial learning rate is also listed (\(\times 10^{-4}\)), as is the final training loss (\(\times 10^{-3}\)). The accuracy and precision in each case are the maxima over the phase space.

Clearly, the pseudodata from the _generating function_ still plays a critical role in the tuning and testing of the fit of the real experimental data. Accuracy is necessarily determined using pseudodata so that the mean of the final DNN model can be compared directly to the _true_ Sivers asymmetry.
This procedure for determining accuracy assumes that the _true_ Sivers asymmetry from the experiment can be approximated by the well-tuned _generating function_ after the final iteration. There is a systematic error associated with analyzing accuracy this way, but this type of error can be estimated. After completing the extraction method described above, we perform a systematic uncertainty assessment of the overall method of extraction. To test the reliability of the extraction, we adjust the parameters of the _generating function_ and repeat the extraction process, again following the full set of steps, for several variations of pseudodata. By using the Sivers asymmetry generated from the optimized _generating function_, the absolute differences between the mean of the DNN model and the true value over \(k_{\perp}\) were used to estimate the systematic uncertainties of the final DNN model from this extraction technique.

## V Results

In this section, we present the results from two separate DNN models, the _proton_-DNN and the _deuteron_-DNN, along with their optimized hyperparameters. Only SIDIS data was used to train the DNN models in this exploratory AI-based extraction technique. The optimized hyperparameter configurations are provided for both models in columns \(\mathcal{C}_{p}^{i(f)}\) and \(\mathcal{C}_{d}^{i(f)}\), with the subscripts \(p\) and \(d\), respectively, in Table 3. To quantitatively represent the improvement made by performing the steps mentioned in the previous section, we present our accuracy and precision results in the lower part of Table 3.

Figure 6: The qualitative improvement of the extracted Sivers functions for \(u\) (blue), \(d\) (red), and \(s\) (green) quarks at \(x=0.1\) and \(Q^{2}=2.4\) GeV\({}^{2}\) using the optimized _proton_-DNN model at the **Second Iteration** (solid lines with dark-colored error bands, 68% CL), compared to the **First Iteration** (dashed lines with light-colored error bands, 68% CL).

Figure 7: An example of the determination of the optimum number of epochs (about 300 in this case, marked by the vertical red dashed line) at the **First Iteration**, based on the crossover between the training (blue curve) and validation (orange curve) losses. The training-loss behavior up to the optimum number of epochs with all data points at the **First Iteration** and at the **Second Iteration** is represented by the green curve and the red curve, respectively.

### DNN fit to SIDIS data

We now explore the results and compare some of our final fits and projections with those of other global fits. To do this we calculate Pearson's reduced \(\chi^{2}\) statistic from the results of the DNN model and each experimental data point, so that the \(\chi^{2}\) values provided by other works can be used to make a quantitative comparison. Note that the \(\chi^{2}\) values indicated in our plots are calculated after the analysis is complete, rather than as a part of the minimization process. The plots of the SIDIS Sivers asymmetry data and our resulting DNN models (for _proton_ and _deuteron_) are shown in Fig. 8. Each plot includes the partial \(\chi^{2}\) values for the particular \(x\), \(z\), and \(p_{hT}\) bins for each hadron type. HERMES2009 [53] (_top-left_), HERMES2020 [55] (_top-right_), and COMPASS2015 [54] (_bottom-left_) are described with the _proton_-DNN model with a total \(\chi^{2}/N_{pt}\) = 1.04, whereas the COMPASS2009 [49] (_bottom-right_) dataset is described with the _deuteron_-DNN model with a total \(\chi^{2}/N_{pt}\) = 0.81.
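In sketch form, the post-hoc statistic quoted here and in the figures is simply (NumPy assumed; the denominator uses \(N_{pt}\), matching the \(\chi^{2}/N_{pt}\) convention above):

```python
import numpy as np

def chi2_per_point(A_model, A_data, A_err):
    """Pearson's reduced chi-square between DNN-model predictions and
    measured asymmetries, evaluated after the fit (not during training)."""
    return np.sum(((A_data - A_model) / A_err) ** 2) / len(A_data)
```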
In comparison with [4], there are some improvements in describing the proton SIDIS data on \(\pi^{\pm}\) and \(K^{+}\) in HERMES2009, which can be noticed quantitatively from the partial \(\chi^{2}\) values of the DNN model. This indicates that the possible effects attributed to TMD evolution [6; 44; 56], which were assumed to be the cause of the larger \(\chi^{2}\) values in [4] for proton data on \(\pi^{+}\), may have been somewhat integrated into the DNN model. Although HERMES2020 [55] reported SIDIS data in 1D kinematic bins as well as in 3D kinematic bins, in our fits we use the data in the form of 1D kinematic bins to be consistent with the rest of the datasets in our fits. The _deuteron_-DNN model's description of the COMPASS2009 [49] data is shown in the bottom-left sub-figure of Fig. 8. Without applying any cuts on the data, the DNN model yields a total \(\chi^{2}/N_{pt}\) of 0.80, covering the full range in the \(x\), \(z\), and \(p_{hT}\) kinematic projections of the COMPASS2009 dataset. This is in contrast to the limited kinematic coverage considered in [5; 7; 9; 47]; notably, the data points at \(p_{hT}>1\) GeV are described somewhat better by the _deuteron_-DNN model compared to the fits in [4; 25]. This suggests that performing dedicated fits to data specific to polarized nucleon targets enables better information extraction, which is true for both DNN and other fitting approaches. The advantage of DNNs in this case is their ability to perform well even with limited data. We did not include the JLab [52] data in our _deuteron_-DNN model fits, so that it could be used as a projection test for the neutron Sivers asymmetry. Our projection indicates good agreement with the \({}^{3}\)He data, but both the data and the projection are largely consistent with zero. It is also important to note that in this work we are _not_ imposing any isospin symmetry condition (\(f_{1T}^{\perp u}=f_{1T}^{\perp d}\) and/or \(f_{1T}^{\perp\bar{u}}=f_{1T}^{\perp\bar{d}}\)) for the SIDIS data with the deuteron target, as was done in [9]. The successful construction of the two different proton and neutron Sivers functions may indicate that our DNN approach can be particularly useful for analyzing data from polarized nucleons in different nuclei, potentially opening up a new way of exploring the nuclear effects associated with TMDs.

### Sivers in Momentum Space

The extracted Sivers functions, including the systematic uncertainties from the DNN models, at \(x=0.1\) and \(Q^{2}=2.4\) GeV\({}^{2}\), are shown in Fig. 9, represented by the _mean_ with 68% Confidence Level (CL) _error bands_. The corresponding optimized hyperparameter configurations for the _proton_-DNN model and _deuteron_-DNN model are \(\mathcal{C}_{p}^{f}\) and \(\mathcal{C}_{d}^{f}\), respectively, as given in Table 3. The Sivers functions extracted using the _deuteron_-DNN model show consistency with zero, considering the accompanying systematic uncertainties. However, this is still a significant result, given the limitation in statistics of the SIDIS data with a deuteron target. The Sivers functions extracted using the _proton_-DNN model have small systematic uncertainties. Note that we use the \(\Delta^{N}f_{q/p^{\uparrow}}(x,k_{\perp})\) notation, as in [4], to represent the Sivers functions in our plots, and one can use Eq. (3) to convert to the \(f_{1T}^{\perp q}(x,k_{\perp})\) notation. Comparing the extracted Sivers functions in \(k_{\perp}\)-space to other extractions in the literature can also be useful, although we have not included those curves in our plots.
In summary, the _proton_-DNN model extractions are relatively precise, with narrower error bands compared to those in [3; 4; 6; 7; 9; 10; 25].

Figure 8: The DNN fit results of the SIDIS Sivers asymmetries (red) accompanied by 68% CL error-bands, in comparison with the actual data (blue). The _proton_-DNN model is trained with HERMES2009, HERMES2020, and COMPASS2015, whereas the _deuteron_-DNN model is trained with the COMPASS2009 data. The calculated partial \(\chi^{2}\) values are provided as quantitative assessments for all kinematic bins.

### Sivers First Transverse Moment

The first transverse moment of the Sivers functions can be obtained through \(d^{2}k_{\perp}\)-integration of the Sivers functions [4],
\[\Delta^{N}f_{q/p^{\uparrow}}^{(1)}(x)=\int d^{2}k_{\perp}\frac{k_{\perp}}{4m_{p}}\Delta^{N}f_{q/p^{\uparrow}}(x,k_{\perp})=-f_{1T}^{\perp(1)q}(x)=\frac{\sqrt{\frac{e}{2}}\langle k_{\perp}^{2}\rangle m_{1}^{3}}{m_{p}\left(\langle k_{\perp}^{2}\rangle+m_{1}^{2}\right)^{2}}\mathcal{N}_{q}(x)f_{q}(x;Q^{2}). \tag{30}\]
The extracted first transverse moments of the Sivers functions, including the systematic uncertainties from the DNN models, are given in Fig. 11 with 68% CL _error bands_, using the optimized hyperparameter configurations \(\mathcal{C}_{p}^{f}\) and \(\mathcal{C}_{d}^{f}\) in Table 3 for the _proton_-DNN model and the _deuteron_-DNN model, respectively. The calculated moments using the _deuteron_-DNN model are consistent with zero, based on the systematic uncertainties. Comparing with the results in Fig. 1 of [47], we see that the \(xf_{1T}^{\perp(1)u}\) from the DNN model is more consistent with [5; 6] in the vicinity of \(x=0.1\), although it is consistent with [26] at \(x=0.01\). The \(xf_{1T}^{\perp(1)d}\), in general, is consistent with the extractions from [4; 5; 6; 25; 47; 57]. Additionally, the extracted behavior of \(xf_{1T}^{\perp(1)u}\) and \(xf_{1T}^{\perp(1)d}\) is consistent with the qualitative observation in [25],
\[\Delta^{N}f_{u/p^{\uparrow}}^{(1)}(x)=-\Delta^{N}f_{d/p^{\uparrow}}^{(1)}(x)\ \ \text{or}\ \ f_{1T}^{\perp(1)u}(x)=-f_{1T}^{\perp(1)d}(x), \tag{31}\]
which was originally a prediction from the large-\(N_{c}\) limit of QCD [58]. Most importantly, the DNN model is able to capture the feature of the \(u\) and \(d\) quarks orbiting in opposite directions without imposing this constraint directly, as done in [42]. In terms of the quantitative assessment, Eq. (31) could be accurate in the large-\(N_{c}\) limit if the isospin-breaking effects are also included at next-to-leading order in \(\mathcal{O}(1/N_{c})\). Regarding the light sea quarks, the _proton_-DNN model extracts features such as \(\Delta^{N}f_{\bar{u}/p^{\uparrow}}^{(1)}(x)>0\) and \(\Delta^{N}f_{\bar{d}/p^{\uparrow}}^{(1)}(x)<0\), even considering the scale of the uncertainties. Additionally, the _proton_-DNN model is consistent with
\[\Delta^{N}f_{\bar{u}/p^{\uparrow}}^{(1)}(x)=-\Delta^{N}f_{\bar{d}/p^{\uparrow}}^{(1)}(x), \tag{32}\]
which was also a similar observation from a theoretical calculation based on the \(SU(2)\) chiral Lagrangian [59] and from the predictions in the large-\(N_{c}\) limit of QCD [58]. The central values extracted in [26] are qualitatively similar to the features seen in Fig. 11, which are small but non-zero within the uncertainties. Additionally, the corresponding central values extracted in [4] are both negative but consistent with zero. The first transverse moments \(xf_{1T}^{\perp(1)q}(x)\), in the case of \(SU(3)_{\text{flavor}}\), from our DNN result are more precise (narrower error bands) than those in [4; 22; 25]. However, the error bands are slightly larger than those in JAM20 [5], which includes more data from SIDIS, DY and SIA, \(pp\)-collisions, and parameterizations for the Sivers, Collins, and transversity TMDs together.
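The closed form in Eq. (30) transcribes directly to code. A minimal sketch, with \(\mathcal{N}_{q}(x)\) from Eq. (27) (Fit 4 \(u\)-quark values) and a purely hypothetical stand-in for the collinear PDF \(f_{q}(x;Q^{2})\):

```python
import numpy as np

m_p = 0.938  # proton mass in GeV

def sivers_first_moment(x, m1, avg_kT2, N_q, f_q):
    """Eq. (30): Delta^N f^(1)(x) = -f_{1T}^{perp(1)q}(x) in the
    Gaussian model, with N_q(x) and f_q(x; Q^2) passed as callables."""
    prefactor = (np.sqrt(np.e / 2.0) * avg_kT2 * m1**3
                 / (m_p * (avg_kT2 + m1**2) ** 2))
    return prefactor * N_q(x) * f_q(x)

# Illustrative inputs: Fit 4 u-quark N_q(x), schematic valence-like PDF
norm = 22.75**22.75 / (2.75**2.75 * 20.0**20.0)
N_u = lambda x: 0.89 * x**2.75 * (1 - x) ** 20.0 * norm
f_u = lambda x: x**-0.5 * (1 - x) ** 3  # hypothetical, not a real PDF fit

print(sivers_first_moment(0.1, m1=7.0, avg_kT2=0.25, N_q=N_u, f_q=f_u))
```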
### Projections

#### SIDIS Projections

In Fig. 10, we compare the SIDIS Sivers asymmetries (in red) projected onto the HERMES2020 3D kinematic bins with the experimental measurements (in blue). These results are obtained using our _proton_-DNN model and are accompanied by 68% CL error bands. We also provide calculated partial \(\chi^{2}\) values for each kinematic bin as a quantitative assessment. Unlike in [9], we have made projections for all the data points, since we have not applied any data cuts. There are relatively larger partial \(\chi^{2}/N_{pt}\) values (as was also observed in [9]), but only in a couple of \(K^{+}\) and \(K^{-}\) bins. However, the \(\chi^{2}/N_{pt}\) values for \(\pi^{+},\pi^{-},K^{+},K^{-}\) are, respectively, 1.14, 1.04, 1.03, and 0.88, which leads to a total \(\chi^{2}/N_{pt}\) of 1.02 for all the data points from HERMES2020 in 3D kinematic bins. In Fig. 12, we present the projected SIDIS Sivers asymmetries for the JLab kinematics [52], obtained using our _deuteron_-DNN model. The figure includes 68% CL error bands and a comparison with the JLab neutron Sivers asymmetry data. These results are consistent with those reported in [4; 6; 7; 57].

#### DY Projections

The resulting DNN model based on the SIDIS Sivers asymmetries is capable of projecting the Sivers asymmetries in DY experiments, which could be sensitive to either valence quarks or sea quarks depending on the relevant kinematic coverage. For example, the COMPASS2017 polarized-DY Sivers asymmetry measurements [50] are dominated by the valence quarks, and the upcoming SpinQuest (E1039) experiment's polarized-DY asymmetry measurements [60] will be dominated by the sea quarks. For these DY projections, we follow the block diagram represented in Fig. 13, which includes the assumption of the _sign-change_ of the Sivers function in DY relative to the SIDIS process mentioned in Eq. (1). Therefore, using the trained _proton_-DNN model, we make projections for the DY Sivers asymmetries for the COMPASS2017 experiment [50] with a proton target and a pion beam, using CTEQ6l [33] and JAM21PionPDFnlo [61] for the proton PDFs and pion PDFs, respectively. Meanwhile, using both the _proton_-DNN model and the _deuteron_-DNN model, we make predictions for the SpinQuest experiment\({}^{6}\). The kinematic inputs are \(x_{1}\) (beam), \(x_{2}\) (target), \(x_{F}(=x_{1}-x_{2})\), \(q_{T}\) (transverse component of the virtual photon), and \(Q_{M}\) (di-lepton invariant mass).

Footnote 6: We are using the kinematic bins from the SeaQuest experiment [62].

Figure 9: The extracted Sivers functions from the _proton_-DNN model (upper) and _deuteron_-DNN model (lower) at \(x=0.1\) and \(Q^{2}=2.4\) GeV\({}^{2}\) with 1-\(\sigma\) (68%) CL _error-bands_, including systematic uncertainties.

Figure 10: Projections of the HERMES2020 data for 3D kinematic bins, using the _proton_-DNN model, including 68% CL error bands (in red), in comparison with the actual data points (in blue).

The projected DY Sivers asymmetries for the COMPASS2017 kinematics using the trained _proton_-DNN model, in comparison with the data points [50], are represented in Fig. 14.
Although the projections are based on the assumption of conditional universality, it is worth noting that without this assumption, negative asymmetry projections are obtained. However, for clarity, the projections without assuming conditional universality are not shown in Fig. 14. When comparing the projections of the _proton_-DNN model with the predictions from [4; 6; 45], it is evident that the mean of the _proton_-DNN model projection, in terms of \(x_{F}\), is more consistent with the measured mean Sivers asymmetry in the experiment (refer to Figure 6 in [50]). At the same time, the _proton_-DNN model projection has relatively smaller uncertainty. It is important to note that the predictions mentioned in the cited works were based on different \(Q^{2}\)-evolution schemes, while the _proton_-DNN model incorporates DGLAP evolution through LHAPDF [34]. In [9], only two data points from the COMPASS2017 data were included in their fits, resulting in larger uncertainties in the projected asymmetry values for the remaining data points when compared to the projections generated by the _proton_-DNN model. The increasing trend of the projected DY Sivers asymmetries with respect to the \(q_{T}\) kinematic variable in [9] is consistent with the projections presented in the middle-right plot of Fig. 14 generated by the _proton_-DNN model, while in [7] the corresponding trend exhibits a very small negative slope in relation to \(q_{T}\).

A non-zero sea-quark Sivers asymmetry implies that the sea quarks carry non-zero orbital angular momentum. The _proton_-DNN model predictions exhibit consistency with a non-zero Sivers asymmetry from the _sea_ quarks, with higher precision compared to existing predictions [4; 6; 45], for the SpinQuest kinematics. Additionally, in this work we report our projections for the polarized Drell-Yan Sivers asymmetries for a deuteron target at the SpinQuest experiment, as shown in Fig. 15 by the orange-colored bands. The central lines are negative in all kinematic projections of \(x_{1}\), \(x_{2}\), \(x_{F}\), and \(q_{T}\), yet consistent with zero, which correlates with the extracted Sivers functions shown in Fig. 9 and Fig. 11. The _proton_-DNN model predicts a positive slope with respect to \(q_{T}\) for a _proton_ target and a relatively small negative slope for a _deuteron_ target, as shown in the lower-right plot of Fig. 15. To date, with the exception of this work, no predictions have been made for the polarized DY Sivers asymmetry using a _deuteron_ target, which will be measured during the SpinQuest experiment.

Figure 11: The extracted first transverse moments of the Sivers functions from the _proton_-DNN model (upper) and _deuteron_-DNN model (lower) at \(x=0.1\) and \(Q^{2}=2.4\) GeV\({}^{2}\) with 68% CL _error-bands_, including systematic uncertainties.

Figure 12: The _deuteron_-DNN model projections (red) with 68% CL error-bands, for the JLab kinematics [52], in comparison with the measured data (blue) without the systematic uncertainty.

A noteworthy aspect of the forthcoming SpinQuest experiment is that, in addition to measuring the Sivers asymmetry from _proton_ and _deuteron_ targets, it will also ascertain the transversity distributions of both quarks and gluons, utilizing a tensor-polarized deuteron spin-1 target, as proposed in [63].
### The 3D Tomography of the Proton

The TMD density of unpolarized quarks inside a proton polarized in the \(\hat{y}\)-direction can be graphically represented using the relation [9; 47],

\[\rho^{a}_{p\uparrow}(x,k_{x},k_{y};Q^{2})=f^{a}_{1}(x,k_{\perp}^{2};Q^{2})-\frac{k_{x}}{m_{p}}f^{\perp a}_{1T}(x,k_{\perp}^{2};Q^{2}), \tag{33}\]

where \(k_{\perp}\) is a two-dimensional vector \((k_{x},k_{y})\), and the unpolarized TMD and the Sivers function for quark flavor \(a\) are represented as \(f^{a}_{1}(x,k_{\perp}^{2};Q^{2})\) and \(f^{\perp a}_{1T}(x,k_{\perp}^{2};Q^{2})\), respectively. The corresponding quark density distributions from our _proton_-DNN model for all light quark flavors in \(SU(3)_{\rm flavor}\) at \(x=0.1\) and \(Q^{2}=2.4\) GeV\({}^{2}\) are shown in Fig. 16. The observed shifts in each quark flavor are linked to the correlation between the OAM of the quarks and the spin of the proton. The results shown in Fig. 16 provide evidence of non-zero OAM in the wave function of the proton's valence and sea quarks. The _proton_-DNN model calculations for the \(u\) and \(d\) quarks are similar to those reported in [9; 47], where the distortion has a positive shift for the \(u\)-quark and a negative shift for the \(d\)-quark with respect to the \(+x\) direction. From the results in Fig. 16, the _proton_-DNN model demonstrates that a virtual photon traveling towards a polarized proton "sees" an enhancement of the quark distribution: in particular, more \(u,\bar{u}\)-quarks to its right-hand side and more \(d,\bar{d}\)-quarks to its left-hand side in momentum space. Moreover, the resultant shifts for the \(\bar{u},s\) quarks from the _proton_-DNN model are also in agreement with [9]. In the low-\(x\) region, the momentum-space quark density becomes almost symmetric [47], indicating that the Sivers effect becomes smaller and that the corresponding experimentally observed asymmetry is small.

Figure 14: The _proton_-DNN model’s predictions (red) including 68% CL error-bands, for Sivers asymmetries in \(x_{1},x_{2},x_{F},q_{T}\), and \(Q_{M}\) kinematic projections for COMPASS DY kinematics [50] in contrast with the measured data (blue).

Figure 15: The _proton_-DNN model (red) and the _deuteron_-DNN model (orange) predictions including 68% CL error-bands, for Sivers asymmetries in \(x_{1},x_{2},x_{F}\), and \(q_{T}\) kinematic projections for the SpinQuest DY kinematics [60; 62].

Figure 16: Quark density distributions \(\rho^{a}_{p\uparrow}\) from the _proton_-DNN model (average of 1000 replicas) for the light quark flavors \(a=\{u,\bar{u},d,\bar{d},s,\bar{s}\}\) inside a proton polarized along the \(+y\) direction and moving towards the reader, as a function of \((k_{x},k_{y})\) at \(x=0.1\) and \(Q^{2}=2.4\) GeV\({}^{2}\).
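For readers wishing to reproduce a plot in the style of Fig. 16, the sketch below evaluates Eq. (33) on a \((k_{x},k_{y})\) grid. The Gaussian shapes standing in for \(f_{1}\) and \(f_{1T}^{\perp}\) at fixed \(x\) and \(Q^{2}\) are our own illustrative assumptions, not the fitted DNN functions.

```python
import numpy as np

M_P = 0.938  # proton mass in GeV

def rho(kx, ky, f1, f1T_perp):
    """Eq. (33): density of unpolarized quarks in a proton polarized along +y."""
    k2 = kx**2 + ky**2
    return f1(k2) - (kx / M_P) * f1T_perp(k2)

# Placeholder Gaussian shapes at fixed x and Q^2 -- NOT the fitted DNN functions.
f1 = lambda k2: np.exp(-k2 / 0.25) / (np.pi * 0.25)
f1T = lambda k2: -0.10 * np.exp(-k2 / 0.20) / (np.pi * 0.20)

kx, ky = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
density = rho(kx, ky, f1, f1T)  # grid ready for a contour plot in the style of Fig. 16
```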
The forthcoming data from Jefferson Lab at 12 GeV, the Fermilab SpinQuest experiment, and the anticipated future data from the Electron-Ion Collider [64; 65; 66], along with their extensive kinematic coverage, are expected to provide invaluable insights into the 3D structure of the nucleon. Obtaining a model-independent estimate of quark angular momentum requires parton distributions that simultaneously depend on both momentum and position [67; 68; 69; 70]. In addition to experimental observations, lattice QCD (LQCD) computations provide a valuable tool for QCD phenomenology from first principles. For instance, LQCD has been utilized to investigate the Sivers effect and other TMD observables at different pion masses [71], as well as the generalized parton distribution at the physical pion mass [72]. Additionally, LQCD results on the Collins-Soper kernel over a range of \(b_{T}\) (the Fourier conjugate of the transverse momentum) are useful for global fits of TMD observables from different processes [73]. In this way, LQCD can complement the experimental data and open up an avenue to enhance the DNN method to explore the 3D structure of nucleons more directly.

## VI Conclusion and Discussion

In this paper, we propose a new method for performing global fits to extract the fundamental Sivers distributions of unpolarized quarks in both polarized protons and neutrons. Our approach integrates techniques from artificial intelligence (AI) and utilizes a generating function; the primary objective of the method is to ensure the quality of the extraction while simultaneously maintaining a high level of accuracy and precision. By leveraging the power of AI, we are able to implement an algorithm that can effectively handle complex and sparse data sets and generate outputs with high accuracy and precision. Furthermore, the use of a generating function provides an additional layer of quality control by providing a direct check on the extraction value over the phase space of interest using pseudodata. The generating function can then be improved to better reflect the experimental information and applied again to iteratively improve the extraction.

We have analyzed the global data on transverse single-spin asymmetry measurements provided by multiple experimental collaborations, particularly for the SIDIS process. We have used these data to construct a DNN model that can effectively interpolate and, to some degree, extrapolate, enabling us to predict Sivers asymmetries in both SIDIS and DY processes. We have successfully tested the extraction method and demonstrated, through both pseudodata and real experimental data, that the technique can generally outperform classical fitting procedures with accuracy and precision that can be directly quantified. The technique demonstrates good reproducibility, as detailed by our systematic studies.

Further avenues for advancement in this type of modeling are evident. As the first attempt to extract the Sivers function using AI techniques, we chose the \(\mathcal{N}_{q}(x)\) parameterization of the deep neural net to incorporate all \(x\)-dependent features. Our results show promise, as the DNN method is capable of fitting the SIDIS Sivers asymmetry data and consistently provides extractions of the Sivers function not only for the valence quarks but also for all light quark flavors in a flavor-independent fashion. The trained model is able to make projections on untrained data for both SIDIS and DY kinematics, and these predictions are consistent with experimental results, demonstrating the predictive capability of the method with relatively high precision. Our methods and techniques are intended to be simple and clear enough to be easily reproduced and expanded upon. This analysis effectively demonstrates the potential that AI holds for particular types of analysis and information extraction.
The work also emphasizes the need for increased cooperation between experimentalists, theorists, and the growing computational efforts that will undoubtedly accelerate advancements in the years to come. A global effort to standardize data collection, organization, and storage in an unbinned format with detailed covariance information must be developed to take full advantage of the AI tools now available. AI and its emerging technologies will continue to accelerate the progress of data-driven physics, but the speed of that progress largely depends on the level of foresight and cooperation within the community.

###### Acknowledgements.

This work was supported by the U.S. Department of Energy (DOE) contract DE-FG02-96ER40950. The authors would like to thank Michael Wagman, Andrea Signori, Alexei Prokudin, and Filippo Delcarro for their insightful discussions. Additionally, the authors would like to acknowledge Bakur Parsamyan for providing the COMPASS data, Luciano Pappalardo for providing the HERMES data, and Sassot Rodolfo for providing the Fragmentation Functions.
2307.11714
Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses
Optimal Transport has sparked vivid interest in recent years, in particular thanks to the Wasserstein distance, which provides a geometrically sensible and intuitive way of comparing probability measures. For computational reasons, the Sliced Wasserstein (SW) distance was introduced as an alternative to the Wasserstein distance, and has seen uses for training generative Neural Networks (NNs). While convergence of Stochastic Gradient Descent (SGD) has been observed practically in such a setting, there is to our knowledge no theoretical guarantee for this observation. Leveraging recent works on convergence of SGD on non-smooth and non-convex functions by Bianchi et al. (2022), we aim to bridge that knowledge gap, and provide a realistic context under which fixed-step SGD trajectories for the SW loss on NN parameters converge. More precisely, we show that the trajectories approach the set of (sub)-gradient flow equations as the step decreases. Under stricter assumptions, we show a much stronger convergence result for noised and projected SGD schemes, namely that the long-run limits of the trajectories approach a set of generalised critical points of the loss function.
Eloi Tanguy
2023-07-21T17:19:01Z
http://arxiv.org/abs/2307.11714v3
# Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses

###### Abstract

Optimal Transport has sparked vivid interest in recent years, in particular thanks to the Wasserstein distance, which provides a geometrically sensible and intuitive way of comparing probability measures. For computational reasons, the Sliced Wasserstein (SW) distance was introduced as an alternative to the Wasserstein distance, and has seen uses for training generative Neural Networks (NNs). While convergence of Stochastic Gradient Descent (SGD) has been observed practically in such a setting, there is to our knowledge no theoretical guarantee for this observation. Leveraging recent works on convergence of SGD on non-smooth and non-convex functions by Bianchi et al. (2022), we aim to bridge that knowledge gap, and provide a realistic context under which fixed-step SGD trajectories for the SW loss on NN parameters converge. More precisely, we show that the trajectories approach the set of (sub)-gradient flow equations as the step decreases. Under stricter assumptions, we show a much stronger convergence result for noised and projected SGD schemes, namely that the long-run limits of the trajectories approach a set of generalised critical points of the loss function.

## Table of Contents

* 1 Introduction
  * 1.1 The Sliced Wasserstein Distance in Machine Learning
  * 1.2 Contributions
* 2 Stochastic Gradient Descent with \(\mathrm{SW}\) as Loss
* 3 Convergence of Interpolated SGD Trajectories on \(F\)
* 4 Convergence of Noised Projected SGD Schemes on \(F\)
* 5 Conclusion and Outlook

## 1 Introduction

### 1.1 The Sliced Wasserstein Distance in Machine Learning

Optimal Transport (OT) allows the comparison of measures on a metric space by generalising the use of the ground metric. Typical applications use the so-called 2-Wasserstein distance, defined as

\[\forall\nu_{1},\nu_{2}\in\mathcal{P}_{2}(\mathbb{R}^{d}),\;\mathrm{W}_{2}^{2}(\nu_{1},\nu_{2}):=\inf_{\pi\in\Pi(\nu_{1},\nu_{2})}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\|x_{1}-x_{2}\|^{2}\mathrm{d}\pi(x_{1},x_{2}),\] (W2)

where \(\mathcal{P}_{2}(\mathbb{R}^{d})\) is the set of probability measures on \(\mathbb{R}^{d}\) admitting a second-order moment and where \(\Pi(\nu_{1},\nu_{2})\) is the set of measures of \(\mathcal{P}_{2}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) of first marginal \(\nu_{1}\) and second marginal \(\nu_{2}\). One may find a thorough presentation of its properties in classical monographs such as Peyre and Cuturi (2019); Santambrogio (2015); Villani (2009). The ability to compare probability measures is useful in probability density fitting problems, which are a sub-genre of generation tasks. In this formalism, one considers a probability measure \(\mu_{u}\), parametrised by \(u\), which is designed to approach a target data distribution \(\mu\) (typically the real-world dataset). In order to determine suitable parameters, one may choose any probability discrepancy (Kullback-Leibler, Csiszár divergences, f-divergences or Maximum Mean Discrepancy), or in our case, the Wasserstein distance. In the case of Generative Adversarial Networks, the optimisation problem which trains the "Wasserstein GAN" (Arjovsky et al., 2017) stems from the Kantorovitch-Rubinstein dual expression of the 1-Wasserstein distance. A less cost-intensive alternative to \(\mathrm{W}_{2}^{2}\) is the Sliced Wasserstein (SW) Distance introduced by Bonneel et al.
(2015), which consists in computing the 1D Wasserstein distances between projections of the input measures, and averaging over the projections. The aforementioned projection of a measure \(\nu\) on \(\mathbb{R}^{d}\) is done by the _push-forward_ operation by the map \(P_{\theta}:x\longmapsto\theta\cdot x\). Formally, \(P_{\theta}\#\nu\) is the measure on \(\mathbb{R}\) such that for any Borel set \(B\subset\mathbb{R}\), \(P_{\theta}\#\nu(B)=\nu(P_{\theta}^{-1}(B))\). Once the measures are projected onto a line \(\mathbb{R}\theta\), the computation of the Wasserstein distance becomes substantially simpler numerically. We shall illustrate this fact in the discrete case, which arises in practical optimisation settings. Consider two discrete measures on \(\mathbb{R}^{d}\): \(\gamma_{X}:=\frac{1}{n}\sum_{k}\delta_{x_{k}},\ \gamma_{Y}:=\frac{1}{n}\sum_{k}\delta_{y_{k}}\) with \(x_{1},\cdots,x_{n},y_{1},\cdots,y_{n}\in\mathbb{R}^{d}\). Their push-forwards by \(P_{\theta}\) are simply computed by the formula \(P_{\theta}\#\gamma_{X}=\frac{1}{n}\sum_{k}\delta_{P_{\theta}(x_{k})}\), and the 2-Wasserstein distance between their projections can be computed by sorting their supports: let \(\sigma\) be a permutation sorting \((\theta^{T}x_{1},\cdots,\theta^{T}x_{n})\), and \(\tau\) a permutation sorting \((\theta^{T}y_{1},\cdots,\theta^{T}y_{n})\); one has the simple expression

\[\mathrm{W}_{2}^{2}(P_{\theta}\#\gamma_{X},P_{\theta}\#\gamma_{Y})=\frac{1}{n}\sum_{k=1}^{n}(\theta^{T}x_{\sigma(k)}-\theta^{T}y_{\tau(k)})^{2}. \tag{1}\]

The SW distance is the expectation of this quantity with respect to \(\theta\sim\mathsf{o}\), i.e. uniform on the sphere: \(\mathrm{SW}_{2}^{2}(\gamma_{X},\gamma_{Y})=\mathbb{E}_{\theta\sim\mathsf{o}}\left[\mathrm{W}_{2}^{2}(P_{\theta}\#\gamma_{X},P_{\theta}\#\gamma_{Y})\right]\). The 2-SW distance is also defined more generally between two measures \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\):

\[\mathrm{SW}_{2}^{2}(\mu,\nu):=\int_{\theta\in\mathbb{S}^{d-1}}\mathrm{W}_{2}^{2}(P_{\theta}\#\mu,P_{\theta}\#\nu)\mathrm{d}\mathsf{o}(\theta).\] (SW)

The implicit generative modelling framework is a formalisation of the training step of generative Neural Networks (NNs), where a network \(T\) of parameters \(u\) is learned so as to minimise the discrepancy between \(T_{u}\#\mathsf{x}\)1 and \(\mathsf{y}\), where \(\mathsf{x}\) is a low-dimensional input distribution (often chosen as Gaussian or uniform noise), and where \(\mathsf{y}\) is the target distribution. Our case of interest is when the discrepancy is measured with the SW distance, which leads to minimising \(\mathrm{SW}_{2}^{2}(T_{u}\#\mathsf{x},\mathsf{y})\) in \(u\). In order to train a NN in this manner, at each iteration one draws \(n\) samples from \(\mathsf{x}\) and \(\mathsf{y}\) (denoted \(\gamma_{X}\) and \(\gamma_{Y}\) as discrete measures with \(n\) points), as well as a projection \(\theta\) (or a batch of \(p\) projections), and performs an SGD step on the sample loss

\[\mathcal{L}(u)=\mathrm{W}_{2}^{2}(P_{\theta}\#T_{u}\#\gamma_{X},P_{\theta}\#\gamma_{Y})=\frac{1}{n}\sum_{k=1}^{n}(\theta^{T}T_{u}(x_{\sigma(k)})-\theta^{T}y_{\tau(k)})^{2}, \tag{2}\]

with respect to the parameters \(u\) (see Algorithm 1 for a precise formalisation).

Footnote 1: \(T_{u}\#\mathsf{x}\) is the push-forward measure of \(\mathsf{x}\) by \(T_{u}\), i.e. the law of \(T_{u}(x)\) when \(x\sim\mathsf{x}\).
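As a concrete illustration of Equation (1) and of the Monte Carlo estimation of (SW), the following minimal sketch sorts the projected supports and averages over random directions. The function names, the number of projections and the toy point clouds are our own assumptions.

```python
import numpy as np

def w2_projected(X, Y, theta):
    """Eq. (1): squared 2-Wasserstein distance between the 1D projections,
    computed by sorting both projected supports."""
    px, py = np.sort(X @ theta), np.sort(Y @ theta)
    return np.mean((px - py) ** 2)

def sliced_w2(X, Y, n_proj=100, seed=0):
    """Monte Carlo estimate of SW_2^2: average of w2_projected over
    directions drawn uniformly on the unit sphere."""
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return float(np.mean([w2_projected(X, Y, th) for th in thetas]))

# Toy check on two Gaussian point clouds in R^3 with n = 64 points each.
rng = np.random.default_rng(1)
X, Y = rng.normal(size=(64, 3)), rng.normal(loc=1.0, size=(64, 3))
print(sliced_w2(X, Y))
```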
In order to compute this numerically, the main complexity comes from determining the permutations \(\sigma\) and \(\tau\) by sorting the numbers \((\theta^{T}T_{u}(x_{k}))_{k}\) and \((\theta^{T}y_{k})_{k}\), and summing the results, while the Wasserstein alternative \(\mathrm{W}_{2}^{2}(T_{u}\#\gamma_{X},\gamma_{Y})\) requires solving a Linear Program, which is substantially costlier. In this paper, we shall study this training method theoretically and prove convergence results. Theoretical guarantees for this optimisation problem are welcome, since this question has not yet been tackled (to our knowledge), even though its use is relatively widespread: for instance, Deshpande et al. (2018) and Wu et al. (2019) train GANs and auto-encoders with this method. Other examples within this formalism include the synthesis of images by minimising the SW distance between features of the optimised image and a target image, as done by Heitz et al. (2021) for textures with neural features, and by Tartavel et al. (2016) with wavelet features (amongst other methods). In practice, it has been observed that SGD in such settings always converges (in the loose numerical sense), yet this property is not known theoretically, since the loss function defined in (2) is neither differentiable nor convex in general, because \(X\longmapsto\mathrm{SW}^{2}_{2}(\gamma_{X},\gamma_{Y})\) and the neural network do not have such regularities. Several efforts have been made to prove the convergence of SGD trajectories within this theoretically difficult setting: Bianchi et al. (2022) show the convergence of fixed-step SGD schemes on a function \(F\) under some technical regularity assumptions, while Majewski et al. (2018) show the convergence of diminishing-step SGD schemes assuming stronger regularity of \(F\). Another notable theoretical work is by Bolte and Pauwels (2021), which leverages conservative field theory to prove convergence for back-propagated SGD on deep NNs with definable activations and loss functions. In the case of Optimal Transport losses, the only work (that we are aware of) that has tackled this problem is by Fatras et al. (2021), proving strong convergence results for minibatch variants of classical OT distances, namely the Wasserstein, Entropic Wasserstein and Gromov-Wasserstein distances. The aim of this work is to bridge the gap between theory and practical observation by proving convergence results for SGD on Sliced Wasserstein generative losses of the form \(F(u)=\mathrm{SW}^{2}_{2}(T_{u}\#\mathsf{x},\mathsf{y})\).

### 1.2 Contributions

**Convergence of Interpolated SGD Under Practical Assumptions.** Under practically realistic assumptions, we prove in Theorem 1 that piecewise-affine interpolations (defined in Equation (6)) of constant-step SGD schemes on \(u\longmapsto F(u)\) (formalised in Equation (4)) converge towards the set of sub-gradient flow solutions (see Equation (5)) as the gradient step decreases. This result signifies that with very small learning rates, SGD trajectories will be close to sub-gradient flows, which themselves converge to critical points of \(F\) (omitting serious technicalities). The assumptions needed for this result are practically reasonable: the input measure \(\mathsf{x}\) and the true data measure \(\mathsf{y}\) are assumed to be compactly supported. As for the network \((u,x)\longmapsto T(u,x)\), we assume that for a fixed datum \(x\), \(T(\cdot,x)\) is piecewise \(\mathcal{C}^{2}\)-smooth, and that \(T\) is Lipschitz jointly in both variables on any compact.
We require additional assumptions on \(T\) which are more costly, but which are verified as long as \(T\) is of the form \(T(u,x)=\widetilde{T}(u,x)\mathbb{1}_{B}(u)\), where \(\widetilde{T}\) is any typical NN composed of compositions of definable activations (as is the case for all typical activations, see (Bolte and Pauwels, 2021), §6.2) and of linear units, and where \(\mathbb{1}_{B}(u)\) is the indicator that the parameter \(u\) be in a fixed ball \(B\). This form for \(T\) is a strong theoretical assumption, but in practice makes little difference, as one may take the fixed ball \(B\) to be arbitrarily large.

**Stronger Convergence Under Stricter Assumptions.** In order to obtain a stronger convergence result, we consider a variant of SGD where each iteration receives an additive noise (scaled by the learning rate), which allows for better space exploration, and where each iteration is projected onto a ball \(B(0,r)\) in order to ensure boundedness. This alternative SGD scheme remains within the realm of practical applications, and we show in Theorem 2 that the long-run limits of such trajectories converge towards a set of generalised critical points of \(F\) as the gradient step approaches \(0\). This result is substantially stronger, and can serve as an explanation of the convergence of practical SGD trajectories, specifically towards a set of critical points which amounts to the stationary points of the energy (barring theoretical technicalities). Unfortunately, we require additional assumptions in order to obtain this stronger convergence result, the most important of which is that the input data measure \(\mathsf{x}\) and the dataset measure \(\mathsf{y}\) are discrete. For the latter, this is always the case in practice; however, the former assumption is more problematic, since it is common to envision generative NNs as taking an argument from a continuous space (the input is often Gaussian or uniform noise), thus a discrete setting is a substantial theoretical drawback. For practical concerns, one may argue that the discrete \(\mathsf{x}\) can have an arbitrarily large fixed number of points, and leverage strong sample complexity results such as those of Nadjahi et al. (2020) to ascertain that the discretisation is not costly if the number of samples is large enough.

## 2 Stochastic Gradient Descent with \(\mathrm{SW}\) as Loss

Training Sliced-Wasserstein generative models consists in training a neural network

\[T:\left\{\begin{array}{rcl}\mathbb{R}^{d_{u}}\times\mathbb{R}^{d_{x}}&\longrightarrow&\mathbb{R}^{d_{y}}\\ (u,x)&\longmapsto&T_{u}(x):=T(u,x)\end{array}\right.\]

by minimising \(u\longmapsto\mathrm{SW}^{2}_{2}(T_{u}\#\mathsf{x},\mathsf{y})\) through Stochastic Gradient Descent (as described in Algorithm 1). The probability distribution \(\mathsf{x}\in\mathcal{P}_{2}(\mathbb{R}^{d_{x}})\) is the law of the input of the generator \(T(u,\cdot)\). The distribution \(\mathsf{y}\in\mathcal{P}_{2}(\mathbb{R}^{d_{y}})\) is the data distribution, which \(T\) aims to simulate. Finally, \(\mathsf{o}\) will denote the uniform measure on the unit sphere of \(\mathbb{R}^{d_{y}}\), denoted by \(\mathbb{S}^{d_{y}-1}\). Given a list of points \(X=(x_{1},\cdots,x_{n})\in\mathbb{R}^{n\times d_{x}}\), denote the associated discrete uniform measure \(\gamma_{X}:=\frac{1}{n}\sum_{i}\delta_{x_{i}}\). By abuse of notation, we write \(T_{u}(X):=(T_{u}(x_{1}),\cdots,T_{u}(x_{n}))\in\mathbb{R}^{n\times d_{y}}\).
```
Data: Learning rate \(\alpha>0\), noise level \(a\geq 0\), convergence threshold \(\beta>0\), probability distributions \(\mathsf{x}\in\mathcal{P}_{2}(\mathbb{R}^{d_{x}})\) and \(\mathsf{y}\in\mathcal{P}_{2}(\mathbb{R}^{d_{y}})\).
Initialisation: Draw \(u^{(0)}\in\mathbb{R}^{d_{u}}\);
for \(t\in\llbracket 0,T_{\max}-1\rrbracket\) do
  Draw \(\theta^{(t+1)}\sim\mathsf{o},\ X^{(t+1)}\sim\mathsf{x}^{\otimes n},\ Y^{(t+1)}\sim\mathsf{y}^{\otimes n}\).
  SGD update: \(u^{(t+1)}=u^{(t)}-\alpha\left[\frac{\partial}{\partial u}\mathrm{W}^{2}_{2}(P_{\theta^{(t+1)}}\#T_{u}\#\gamma_{X^{(t+1)}},P_{\theta^{(t+1)}}\#\gamma_{Y^{(t+1)}})\right]_{u=u^{(t)}}\)
end for
```
**Algorithm 1** Training a NN on the SW loss with Stochastic Gradient Descent
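A minimal PyTorch sketch of Algorithm 1 might look as follows, relying on automatic differentiation to supply an almost-everywhere gradient of the sample loss (anticipating the a.e. gradient \(\varphi\) defined in (3) below). The toy generator, the samplers and the hyper-parameter values are illustrative assumptions, not prescriptions from the paper.

```python
import torch

def sw_sample_loss(Z, Y, theta):
    """Sample loss (2) for one projection: sort the projected supports
    and average the squared differences."""
    pz, _ = torch.sort(Z @ theta)
    py, _ = torch.sort(Y @ theta)
    return torch.mean((pz - py) ** 2)

def train_sw(T, x_sampler, y_sampler, n=128, alpha=1e-2, steps=500):
    """SGD loop of Algorithm 1; backpropagation provides an a.e. gradient."""
    opt = torch.optim.SGD(T.parameters(), lr=alpha)
    for _ in range(steps):
        X, Y = x_sampler(n), y_sampler(n)        # X^{(t+1)}, Y^{(t+1)}
        theta = torch.randn(Y.shape[1])
        theta = theta / theta.norm()             # theta^{(t+1)} ~ uniform on the sphere
        loss = sw_sample_loss(T(X), Y, theta)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return T

# Toy usage: a small generator pushing 2D Gaussian noise towards a shifted Gaussian.
T = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
train_sw(T, lambda n: torch.randn(n, 2), lambda n: torch.randn(n, 2) + 2.0)
```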
In the following, we will apply results from (Bianchi et al., 2022), and we pave the way to the application of these results by presenting their theoretical framework. Consider a sample loss function \(f:\mathbb{R}^{d_{u}}\times\Xi\longrightarrow\mathbb{R}\) that is locally Lipschitz in the first variable, and \(\zeta\) a probability measure on \(\Xi\subset\mathbb{R}^{d}\) which is the law of the samples drawn at each SGD iteration. Consider \(\varphi:\mathbb{R}^{d_{u}}\times\Xi\longrightarrow\mathbb{R}^{d_{u}}\) an _almost-everywhere gradient_ of \(f\), which is to say that for almost every \((u,S)\in\mathbb{R}^{d_{u}}\times\Xi\), \(\varphi(u,S)=\partial_{u}f(u,S)\) (since each \(f(\cdot,S)\) is locally Lipschitz, it is differentiable almost everywhere by Rademacher's theorem). The complete loss function is \(F:u\longmapsto\int_{\Xi}f(u,S)\mathrm{d}\zeta(S)\). An SGD trajectory of step \(\alpha>0\) for \(F\) is a sequence \((u^{(t)})\in(\mathbb{R}^{d_{u}})^{\mathbb{N}}\) of the form:

\[u^{(t+1)}=u^{(t)}-\alpha\varphi(u^{(t)},S^{(t+1)}),\quad\left(u^{(0)},(S^{(t)})_{t\in\mathbb{N}}\right)\sim\nu\otimes\zeta^{\otimes\mathbb{N}},\]

where \(\nu\) is the distribution of the initial position \(u^{(0)}\). Within this framework, we define an SGD scheme described by Algorithm 1, with \(\zeta:=\mathsf{x}^{\otimes n}\otimes\mathsf{y}^{\otimes n}\otimes\mathsf{o}\) and

\[f:=\left\{\begin{array}{rcl}\mathbb{R}^{d_{u}}\times\mathbb{R}^{n\times d_{x}}\times\mathbb{R}^{n\times d_{y}}\times\mathbb{S}^{d_{y}-1}&\longrightarrow&\mathbb{R}\\ (u,X,Y,\theta)&\longmapsto&\mathrm{W}^{2}_{2}(P_{\theta}\#T_{u}\#\gamma_{X},P_{\theta}\#\gamma_{Y})\end{array}\right..\]

With this definition for \(f\), we have \(F(u)=\mathbb{E}_{(X,Y,\theta)\sim\zeta}\left[\mathrm{W}^{2}_{2}(P_{\theta}\#T_{u}\#\gamma_{X},P_{\theta}\#\gamma_{Y})\right]=\mathrm{SW}^{2}_{2}(T_{u}\#\mathsf{x},\mathsf{y})\): the complete loss compares the data \(\mathsf{y}\) with the model's generation \(T_{u}\#\mathsf{x}\) using SW. We now wish to define an almost-everywhere gradient of \(f\). To this end, notice that one may write \(f(u,X,Y,\theta)=w_{\theta}(T(u,X),Y)\), where for \(Z,Y\in\mathbb{R}^{n\times d_{y}}\) and \(\theta\in\mathbb{S}^{d_{y}-1}\), \(w_{\theta}(Z,Y):=\mathrm{W}^{2}_{2}(P_{\theta}\#\gamma_{Z},P_{\theta}\#\gamma_{Y})\). The differentiability properties of \(w_{\theta}(\cdot,Y)\) are already known (Tanguy et al., 2023; Bonneel et al., 2015); in particular, one has the following almost-everywhere gradient of \(w_{\theta}(\cdot,Y)\):

\[\frac{\partial w_{\theta}}{\partial Z}(Z,Y)=\left(\frac{2}{n}\theta\theta^{T}(z_{k}-y_{\sigma_{\theta}^{Z,Y}(k)})\right)_{k\in\llbracket 1,n\rrbracket}\in\mathbb{R}^{n\times d_{y}},\]

where the permutation \(\sigma_{\theta}^{Z,Y}\in\mathfrak{S}_{n}\) is \(\tau_{Y}^{\theta}\circ(\tau_{Z}^{\theta})^{-1}\), with \(\tau_{Y}^{\theta}\in\mathfrak{S}_{n}\) being a sorting permutation of the list \((\theta\cdot y_{1},\cdots,\theta\cdot y_{n})\) (and \(\tau_{Z}^{\theta}\) a sorting permutation of \((\theta\cdot z_{1},\cdots,\theta\cdot z_{n})\)). The sorting permutations are chosen arbitrarily when there is ambiguity. To define an almost-everywhere gradient, we must differentiate \(f(\cdot,X,Y,\theta)=u\longmapsto w_{\theta}(T(u,X),Y)\), for which we need regularity assumptions on \(T\): this is the goal of Assumption 1. In the following, \(\overline{A}\) denotes the topological closure of a set \(A\), \(\partial A\) its boundary, and \(\lambda_{\mathbb{R}^{d_{u}}}\) denotes the Lebesgue measure of \(\mathbb{R}^{d_{u}}\).

**Assumption 1**.: _For every \(x\in\mathbb{R}^{d_{x}}\), there exists a family of disjoint connected open sets \((\mathcal{U}_{j}(x))_{j\in J(x)}\) such that \(\forall j\in J(x),\;T(\cdot,x)\in\mathcal{C}^{2}(\mathcal{U}_{j}(x),\mathbb{R}^{d_{y}})\), \(\bigcup_{j\in J(x)}\overline{\mathcal{U}_{j}(x)}=\mathbb{R}^{d_{u}}\) and \(\lambda_{\mathbb{R}^{d_{u}}}\big{(}\bigcup_{j\in J(x)}\partial\mathcal{U}_{j}(x)\big{)}=0\)._

Note that for measure-theoretic reasons, the sets \(J(x)\) are assumed countable. Assumption 1 implies that given \(X,Y,\theta\) fixed, \(f(\cdot,X,Y,\theta)\) is differentiable almost everywhere, and that one may define the following almost-everywhere gradient (3):

\[\varphi:=\left\{\begin{array}{rcl}\mathbb{R}^{d_{u}}\times\mathbb{R}^{n\times d_{x}}\times\mathbb{R}^{n\times d_{y}}\times\mathbb{S}^{d_{y}-1}&\longrightarrow&\mathbb{R}^{d_{u}}\\ (u,X,Y,\theta)&\longmapsto&\sum\limits_{k=1}^{n}\frac{2}{n}\left(\frac{\partial T}{\partial u}(u,x_{k})\right)^{T}\theta\theta^{T}(T(u,x_{k})-y_{\sigma_{\theta}^{T(u,X),Y}(k)})\end{array}\right., \tag{3}\]

where for \(x\in\mathbb{R}^{d_{x}}\), \(\frac{\partial T}{\partial u}(u,x)\in\mathbb{R}^{d_{y}\times d_{u}}\) denotes the matrix of the differential of \(u\longmapsto T(u,x)\), which is defined for almost every \(u\). Given \(u\in\partial\mathcal{U}_{j}(x)\) (a point of potential non-differentiability), take instead \(0\). (Any choice at such points would still define an a.e. gradient, and will make no difference.) Given a step \(\alpha>0\) and an initial position \(u^{(0)}\sim\nu\), we may now define formally the following fixed-step SGD scheme for \(F\):

\[\begin{split} u^{(t+1)}=u^{(t)}-\alpha\varphi(u^{(t)},X^{(t+1)},Y^{(t+1)},\theta^{(t+1)}),\\ \Big{(}u^{(0)},(X^{(t)})_{t\in\mathbb{N}},\ (Y^{(t)})_{t\in\mathbb{N}},\ (\theta^{(t)})_{t\in\mathbb{N}}\Big{)}\sim\nu\otimes\mathsf{x}^{\otimes\mathbb{N}}\otimes\mathsf{y}^{\otimes\mathbb{N}}\otimes\mathsf{o}^{\otimes\mathbb{N}}.\end{split} \tag{4}\]

An important technicality that we must verify in order to apply Bianchi et al. (2022)'s results is that \(u\longmapsto f(u,X,Y,\theta)\) and \(F\) are locally Lipschitz. Before proving those claims, we reproduce a useful property from (Tanguy et al., 2023).
In the following, \(\|X\|_{\infty,2}\) denotes \(\max\limits_{k\in\llbracket 1,n\rrbracket}\|x_{k}\|_{2}\) given \(X=(x_{1},\cdots,x_{n})\in\mathbb{R}^{n\times d_{x}}\), and \(B_{\mathcal{N}}(x,r)\), for \(\mathcal{N}\) a norm on \(\mathbb{R}^{d_{x}}\), \(x\in\mathbb{R}^{d_{x}}\) and \(r>0\), shall denote the open ball of \(\mathbb{R}^{d_{x}}\) of centre \(x\) and radius \(r\) for the norm \(\mathcal{N}\) (if \(\mathcal{N}\) is omitted, then \(B\) is a euclidean ball).

**Proposition 1**.: _The \((w_{\theta}(\cdot,Y))_{\theta\in\mathbb{S}^{d_{y}-1}}\) are uniformly locally Lipschitz (Tanguy et al., 2023). Let \(\kappa_{r}(Z,Y):=2n(r+\|Z\|_{\infty,2}+\|Y\|_{\infty,2})\), for \(Z,Y\in\mathbb{R}^{n\times d_{y}}\) and \(r>0\). Then \(w_{\theta}(\cdot,Y)\) is \(\kappa_{r}(Z,Y)\)-Lipschitz in the neighbourhood \(B_{\|\cdot\|_{\infty,2}}(Z,r)\):_

\[\forall Y^{\prime},Y^{\prime\prime}\in B_{\|\cdot\|_{\infty,2}}(Z,r),\;\forall\theta\in\mathbb{S}^{d_{y}-1},\;|w_{\theta}(Y^{\prime},Y)-w_{\theta}(Y^{\prime\prime},Y)|\leq\kappa_{r}(Z,Y)\|Y^{\prime}-Y^{\prime\prime}\|_{\infty,2}.\]

In order to deduce regularity results on \(f\) and \(F\) from Proposition 1, we will make the following assumption, which under Assumption 1 only requires additional regularity with respect to the data argument.

**Assumption 2**.: _For any compacts \(\mathcal{K}_{1}\subset\mathbb{R}^{d_{u}}\) and \(\mathcal{K}_{2}\subset\mathbb{R}^{d_{x}}\), there exists \(L_{\mathcal{K}_{1},\mathcal{K}_{2}}>0\) such that \(\forall(u_{1},u_{2},x_{1},x_{2})\in\mathcal{K}_{1}^{2}\times\mathcal{K}_{2}^{2},\;\|T(u_{1},x_{1})-T(u_{2},x_{2})\|\leq L_{\mathcal{K}_{1},\mathcal{K}_{2}}\left(\|u_{1}-u_{2}\|+\|x_{1}-x_{2}\|\right)\)._

**Proposition 2** (Regularity of \(u\longmapsto f(u,X,Y,\theta)\)).: _Under Assumption 2, for \(\varepsilon>0\), \(u_{0}\in\mathbb{R}^{d_{u}}\), \(X\in\mathbb{R}^{n\times d_{x}}\), \(Y\in\mathbb{R}^{n\times d_{y}}\) and \(\theta\in\mathbb{S}^{d_{y}-1}\), let \(\kappa_{\varepsilon}(u_{0},X,Y):=2Ln(\varepsilon L+\|T(u_{0},X)\|_{\infty,2}+\|Y\|_{\infty,2})\), with \(L:=L_{\overline{B}(u_{0},\varepsilon),\overline{B}(0_{d_{x}},\|X\|_{\infty,2})}\). Then \(f(\cdot,X,Y,\theta)\) is \(\kappa_{\varepsilon}(u_{0},X,Y)\)-Lipschitz in \(B(u_{0},\varepsilon)\):_

\[\forall u,u^{\prime}\in B(u_{0},\varepsilon),\;|f(u,X,Y,\theta)-f(u^{\prime},X,Y,\theta)|\leq\kappa_{\varepsilon}(u_{0},X,Y)\|u-u^{\prime}\|_{2}.\]

Proof.: Let \(\varepsilon>0,\ u_{0}\in\mathbb{R}^{d_{u}},\ X\in\mathbb{R}^{n\times d_{x}},\ Y\in\mathbb{R}^{n\times d_{y}}\) and \(\theta\in\mathbb{S}^{d_{y}-1}\). Let \(u,u^{\prime}\in B(u_{0},\varepsilon)\). Using Assumption 2, we have \(T(u,X),T(u^{\prime},X)\in B_{\|\cdot\|_{\infty,2}}(T(u_{0},X),r)\), with \(r:=\varepsilon L_{\overline{B}(u_{0},\varepsilon),\overline{B}(0_{d_{x}},\|X\|_{\infty,2})}\). By Proposition 1, we have, with \(L:=L_{\overline{B}(u_{0},\varepsilon),\overline{B}(0_{d_{x}},\|X\|_{\infty,2})}\),

\[\begin{split}|f(u,X,Y,\theta)-f(u^{\prime},X,Y,\theta)|&=|w_{\theta}(T(u,X),Y)-w_{\theta}(T(u^{\prime},X),Y)|\\ &\leq\kappa_{r}(T(u_{0},X),Y)\|T(u,X)-T(u^{\prime},X)\|_{\infty,2}\\ &\leq 2n(\varepsilon L+\|T(u_{0},X)\|_{\infty,2}+\|Y\|_{\infty,2})L\|u-u^{\prime}\|_{2}.\end{split}\]

Proposition 2 shows that \(f\) is locally Lipschitz in \(u\). We now assume some conditions on the measures \(\mathsf{x}\) and \(\mathsf{y}\) in order to prove that \(F\) is also locally Lipschitz.
**Assumption 3**.: _\(\mathsf{x}\) and \(\mathsf{y}\) are Radon probability measures on \(\mathbb{R}^{d_{x}}\) and \(\mathbb{R}^{d_{y}}\) respectively, supported by the compacts \(\mathcal{X}\) and \(\mathcal{Y}\) respectively. Denote \(R_{x}:=\sup\limits_{x\in\mathcal{X}}\|x\|_{2}\) and \(R_{y}:=\sup\limits_{y\in\mathcal{Y}}\|y\|_{2}\)._

**Proposition 3**.: _Assume Assumption 2 and Assumption 3. For \(\varepsilon>0\) and \(u_{0}\in\mathbb{R}^{d_{u}}\), let \(C_{1}(u_{0}):=\int_{\mathcal{X}^{n}}\|T(u_{0},X)\|_{\infty,2}\mathrm{d}\mathsf{x}^{\otimes n}(X)\), \(C_{2}:=\int_{\mathcal{Y}^{n}}\|Y\|_{\infty,2}\mathrm{d}\mathsf{y}^{\otimes n}(Y)\) and \(L:=L_{\overline{B}(u_{0},\varepsilon),\overline{B}(0,R_{x})}\). Let \(\kappa_{\varepsilon}(u_{0}):=2Ln(\varepsilon L+C_{1}(u_{0})+C_{2})\). We have \(\forall u,u^{\prime}\in B(u_{0},\varepsilon),\ |F(u)-F(u^{\prime})|\leq\kappa_{\varepsilon}(u_{0})\|u-u^{\prime}\|_{2}\)._

Proof.: Let \(\varepsilon>0\), \(u_{0}\in\mathbb{R}^{d_{u}}\) and \(u,u^{\prime}\in B(u_{0},\varepsilon)\). First, notice that for any \(X\in\mathcal{X}^{n},\ \|X\|_{\infty,2}\leq R_{x}\), thus \(L_{\overline{B}(u_{0},\varepsilon),\overline{B}(0_{d_{x}},\|X\|_{\infty,2})}\leq L_{\overline{B}(u_{0},\varepsilon),\overline{B}(0,R_{x})}=:L\). We have

\[\begin{split}|F(u)-F(u^{\prime})|&\leq\int_{\mathcal{X}^{n}\times\mathcal{Y}^{n}\times\mathbb{S}^{d_{y}-1}}|f(u,X,Y,\theta)-f(u^{\prime},X,Y,\theta)|\mathrm{d}\mathsf{x}^{\otimes n}(X)\mathrm{d}\mathsf{y}^{\otimes n}(Y)\mathrm{d}\mathsf{o}(\theta)\\ &\leq\int_{\mathcal{X}^{n}\times\mathcal{Y}^{n}}\kappa_{\varepsilon}(u_{0},X,Y)\|u-u^{\prime}\|_{2}\mathrm{d}\mathsf{x}^{\otimes n}(X)\mathrm{d}\mathsf{y}^{\otimes n}(Y)\\ &\leq\int_{\mathcal{X}^{n}\times\mathcal{Y}^{n}}2Ln(\varepsilon L+\|T(u_{0},X)\|_{\infty,2}+\|Y\|_{\infty,2})\|u-u^{\prime}\|_{2}\mathrm{d}\mathsf{x}^{\otimes n}(X)\mathrm{d}\mathsf{y}^{\otimes n}(Y).\end{split}\]

Now by Assumption 2, \(X\longmapsto\|T(u_{0},X)\|_{\infty,2}\) is continuous on the compact \(\mathcal{X}^{n}\), allowing the definition of the constant \(C_{1}(u_{0}):=\int_{\mathcal{X}^{n}}\|T(u_{0},X)\|_{\infty,2}\mathrm{d}\mathsf{x}^{\otimes n}(X)\) (\(\mathsf{x}\) is a Radon probability measure by Assumption 3, thus \(C_{1}(u_{0})\) is finite). Likewise, let \(C_{2}:=\int_{\mathcal{Y}^{n}}\|Y\|_{\infty,2}\mathrm{d}\mathsf{y}^{\otimes n}(Y)<+\infty\). Finally, \(|F(u)-F(u^{\prime})|\leq 2Ln(\varepsilon L+C_{1}(u_{0})+C_{2})\|u-u^{\prime}\|_{2}\).

Having shown that our losses are locally Lipschitz, we can now turn to convergence results. These conclusions are placed in the context of non-smooth and non-convex optimisation, and will thus be tied to the Clarke sub-differential of \(F\), which we denote \(\partial_{C}F\). The set of Clarke sub-gradients at a point \(u\) is the convex hull of the limits of gradients of \(F\):

\[\partial_{C}F(u):=\mathrm{conv}\left\{v\in\mathbb{R}^{d_{u}}:\ \exists(u_{i})\in(\mathcal{D}_{F})^{\mathbb{N}}:u_{i}\xrightarrow[i\longrightarrow+\infty]{}u\ \text{ and }\nabla F(u_{i})\xrightarrow[i\longrightarrow+\infty]{}v\right\},\]

where \(\mathcal{D}_{F}\) is the set of differentiability of \(F\). At points \(u\) where \(F\) is differentiable, \(\partial_{C}F(u)=\{\nabla F(u)\}\), and if \(F\) is convex in a neighbourhood of \(u\), then the Clarke differential at \(u\) is the set of its convex sub-gradients.

## 3 Convergence of Interpolated SGD Trajectories on \(F\)

In general, the idea behind SGD is a discretisation of the gradient flow equation \(\dot{u}(s)=-\nabla F(u(s))\).
In our non-smooth setting, the underlying continuous-time problem is instead the Clarke differential inclusion \(\dot{u}(s)\in-\partial_{C}F(u(s))\). Our objective is to show that, in a certain sense, the SGD trajectories approach the set of solutions of this inclusion problem as the step size decreases. We consider solutions that are absolutely continuous (we will write \(u(\cdot)\in\mathcal{C}_{\mathrm{abs}}(\mathbb{R}_{+},\mathbb{R}^{d_{u}})\)) and start within \(\mathcal{K}\subset\mathbb{R}^{d_{u}}\), a fixed compact set. We can now define the solution set formally as

\[S_{-\partial_{C}F}(\mathcal{K}):=\left\{u\in\mathcal{C}_{\mathrm{abs}}(\mathbb{R}_{+},\mathbb{R}^{d_{u}})\ |\ \underline{\forall}s\in\mathbb{R}_{+},\ \dot{u}(s)\in-\partial_{C}F(u(s));\ u(0)\in\mathcal{K}\right\}, \tag{5}\]

where we write \(\underline{\forall}\) for "almost every". In order to compare the discrete SGD trajectories to this set of continuous-time trajectories, we interpolate the discrete points in an affine manner: Equation (6) defines the _piecewise-affine interpolated SGD trajectory_ associated to an SGD trajectory \((u^{(t)}_{\alpha})_{t\in\mathbb{N}}\) of learning rate \(\alpha\):

\[u_{\alpha}(s)=u^{(t)}_{\alpha}+\left(\frac{s}{\alpha}-t\right)(u^{(t+1)}_{\alpha}-u^{(t)}_{\alpha}),\quad\forall s\in[t\alpha,(t+1)\alpha[,\quad\forall t\in\mathbb{N}. \tag{6}\]

In order to compare our interpolated trajectories with the solutions, we consider the metric of uniform convergence on all segments:

\[d_{c}(u,u^{\prime}):=\sum_{k\in\mathbb{N}^{*}}\frac{1}{2^{k}}\min\left(1,\max_{s\in[0,k]}\|u(s)-u^{\prime}(s)\|_{2}\right). \tag{7}\]
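For concreteness, here is a small numerical sketch of the interpolation (6) and of a truncated surrogate of the metric (7); cutting the sum at a finite \(k_{\max}\) and approximating the supremum on a grid are our own numerical simplifications.

```python
import numpy as np

def interpolate(us, alpha):
    """Eq. (6): piecewise-affine interpolation of the iterates `us` with step alpha.
    Returns a callable trajectory s -> u_alpha(s) (clamped at the last iterate)."""
    us = np.asarray(us, dtype=float)
    def u(s):
        t = min(int(s / alpha), len(us) - 2)
        lam = min(s / alpha - t, 1.0)
        return us[t] + lam * (us[t + 1] - us[t])
    return u

def d_c(u, v, k_max=20, n_grid=200):
    """Truncated version of the metric (7): the sum over k is cut at k_max and
    the supremum on [0, k] is approximated on a finite grid."""
    total = 0.0
    for k in range(1, k_max + 1):
        sup = max(np.linalg.norm(u(s) - v(s)) for s in np.linspace(0.0, k, n_grid))
        total += min(1.0, sup) / 2.0 ** k
    return total

# Tiny sanity check: a trajectory is at distance 0 from itself.
us = np.cumsum(0.1 * np.ones((5, 2)), axis=0)  # fake iterates in R^2
print(d_c(interpolate(us, 0.1), interpolate(us, 0.1)))  # -> 0.0
```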
In order to prove that the interpolated trajectories approach this solution set, we will leverage the results of Bianchi et al. (2022), which hinge on three conditions on the loss \(F\) that we reproduce and verify successively.

**Condition 1**.:
1. _There exists \(\kappa:\mathbb{R}^{d_{u}}\times\Xi\longrightarrow\mathbb{R}_{+}\) measurable such that each \(\kappa(u,\cdot)\) is \(\zeta\)-integrable, and:_
\[\forall u_{0}\in\mathbb{R}^{d_{u}},\ \exists\varepsilon>0,\ \forall u,u^{\prime}\in B(u_{0},\varepsilon),\ \forall S\in\Xi,\ |f(u,S)-f(u^{\prime},S)|\leq\kappa(u_{0},S)\|u-u^{\prime}\|_{2}.\]
2. _There exists \(u\in\mathbb{R}^{d_{u}}\) such that \(f(u,\cdot)\) is \(\zeta\)-integrable._

Our regularity result on \(f\) (Proposition 2) allows us to verify Condition 1, by letting \(\varepsilon:=1\) and \(\kappa(u_{0},S):=\kappa_{1}(u_{0},X,Y,\theta)\). Condition 1 ii) is immediate since for _all_ \(u\in\mathbb{R}^{d_{u}}\), \((X,Y,\theta)\longmapsto w_{\theta}(T(u,X),Y)\) is continuous in each variable separately, thanks to the regularity of \(T\) provided by Assumption 2, and to the regularities of \(w\) (as implied by (Tanguy et al., 2023), Lemma 2.2.2, for instance). This continuity implies that all \(f(u,\cdot)\) are \(\zeta\)-integrable, since \(\zeta=\mathsf{x}^{\otimes n}\otimes\mathsf{y}^{\otimes n}\otimes\mathsf{o}\) is a compactly supported Radon measure under Assumption 3.

**Condition 2**.: _The function \(\kappa\) of Condition 1 verifies:_
1. _There exists \(c\geq 0\) such that \(\forall u\in\mathbb{R}^{d_{u}},\ \int_{\Xi}\kappa(u,S)\mathrm{d}\zeta(S)\leq c(1+\|u\|_{2})\)._
2. _For every compact \(\mathcal{K}\subset\mathbb{R}^{d_{u}}\), \(\sup_{u\in\mathcal{K}}\ \int_{\Xi}\kappa(u,S)^{2}\mathrm{d}\zeta(S)<+\infty\)._

Condition 2 ii) is verified by \(\kappa\) given its regularity. However, Condition 2 i) requires that \(T(u,x)\) grow slowly as \(\|u\|\) increases, which is more costly.

**Assumption 4**.: _There exists an \(\mathsf{x}\)-integrable function \(g:\mathbb{R}^{d_{x}}\longrightarrow\mathbb{R}_{+}\) such that \(\forall u\in\mathbb{R}^{d_{u}},\ \forall x\in\mathbb{R}^{d_{x}},\ \|T(u,x)\|\leq g(x)(1+\|u\|_{2})\)._

Assumption 4 is satisfied in particular as soon as \(T(\cdot,x)\) is bounded (which is the case for a neural network with bounded activation functions), or if \(T\) is of the form \(T(u,x)=\tilde{T}(u,x)\mathbb{1}_{B(0,R)}(u)\), i.e. limiting the network parameters \(u\) to be bounded. This second case does not yield substantial restrictions in practice, yet vastly simplifies the theory. Under Assumption 4, we have for any \(u\in\mathbb{R}^{d_{u}}\), with \(\kappa\) from Proposition 2 and \(C_{2}\) from Proposition 3,

\[\int_{\mathcal{X}^{n}\times\mathcal{Y}^{n}\times\mathbb{S}^{d_{y}-1}}\kappa_{1}(u,X,Y,\theta)\mathrm{d}\mathsf{x}^{\otimes n}(X)\mathrm{d}\mathsf{y}^{\otimes n}(Y)\mathrm{d}\mathsf{o}(\theta)\leq 2Ln\left(L+(1+\|u\|_{2})\int_{\mathcal{X}^{n}}\max_{k\in\llbracket 1,n\rrbracket}g(x_{k})\,\mathrm{d}\mathsf{x}^{\otimes n}(X)+C_{2}\right)\leq c(1+\|u\|_{2}).\]

As a consequence, Condition 2 holds under our assumptions. We now consider the Markov kernel associated to the SGD schemes:

\[P_{\alpha}:\left\{\begin{array}{rcl}\mathbb{R}^{d_{u}}\times\mathcal{B}(\mathbb{R}^{d_{u}})&\longrightarrow&[0,1]\\ (u,B)&\longmapsto&\int_{\Xi}\mathbb{1}_{B}(u-\alpha\varphi(u,S))\mathrm{d}\zeta(S)\end{array}\right..\]

With \(\lambda_{\mathbb{R}^{d_{u}}}\) denoting the Lebesgue measure on \(\mathbb{R}^{d_{u}}\), let \(\Gamma:=\{\alpha\in\,]0,+\infty[\,\mid\,\forall\rho\ll\lambda_{\mathbb{R}^{d_{u}}},\ \rho P_{\alpha}\ll\lambda_{\mathbb{R}^{d_{u}}}\}\). We will verify the following condition:

**Condition 3**.: _The closure of \(\Gamma\) contains 0._

In order to satisfy Condition 3, we require an additional regularity condition on the neural network \(T\), which we formulate in Assumption 5.

**Assumption 5**.: _There exists a constant \(M>0\) such that (with the notations of Assumption 1 and Assumption 3) \(\forall x\in\mathcal{X},\ \forall j\in J(x),\ \forall u\in\mathcal{U}_{j}(x),\ \forall(i_{1},i_{2},i_{3},i_{4})\in\llbracket 1,d_{u}\rrbracket^{2}\times\llbracket 1,d_{y}\rrbracket^{2}\),_

\[\left|\frac{\partial^{2}}{\partial u_{i_{1}}\partial u_{i_{2}}}\Big{(}[T(u,x)]_{i_{3}}[T(u,x)]_{i_{4}}\Big{)}\right|\leq M,\ \mathrm{and}\ \left\|\frac{\partial^{2}T}{\partial u_{i_{1}}\partial u_{i_{2}}}(u,x)\right\|_{2}\leq M.\]

The upper bounds in this assumption bear strong consequences on the behaviour of \(T\) for \(\|u\|_{2}\gg 1\), and are only practical for networks of the form \(T(u,x)=\tilde{T}(u,x)\mathbb{1}_{B(0,R)}(u)\), similarly to Assumption 4.

**Proposition 4**.: _Under Assumption 1, Assumption 3 and Assumption 5, for the SGD trajectories (4), \(\Gamma\supset\,]0,\alpha_{0}[\), where \(\alpha_{0}:=\frac{1}{(d_{y}^{2}+2R_{y})d_{u}M}\)._

Proof.: Let \(\rho\ll\lambda\) and \(B\in\mathcal{B}(\mathbb{R}^{d_{u}})\) such that \(\lambda(B)=0\).
We have, with \(\alpha^{\prime}:=2\alpha/n\), \(S:=(X,Y,\theta)\), \(\zeta:=\mathsf{x}^{\otimes n}\otimes\mathsf{y}^{\otimes n}\otimes\mathsf{o}\) and \(\Xi:=\mathcal{X}^{n}\times\mathcal{Y}^{n}\times\mathbb{S}^{d_{y}-1}\),

\[\rho P_{\alpha}(B)=\int_{\mathbb{R}^{d_{u}}\times\Xi}\mathbb{1}_{B}\left[u-\alpha^{\prime}\sum_{k=1}^{n}\left(\frac{\partial T}{\partial u}(u,x_{k})\right)^{T}\theta\theta^{T}(T(u,x_{k})-y_{\sigma_{\theta}^{T(u,X),Y}(k)})\right]\mathrm{d}\rho(u)\mathrm{d}\zeta(S)\leq\sum_{\tau\in\mathfrak{S}_{n}}\int_{\Xi}I_{\tau}(S)\mathrm{d}\zeta(S),\]

where \(I_{\tau}(S):=\int_{\mathbb{R}^{d_{u}}}\mathbb{1}_{B}\left(\phi_{\tau,S}(u)\right)\mathrm{d}\rho(u)\), with

\[\phi_{\tau,S}(u):=u-\alpha^{\prime}\underbrace{\sum_{k=1}^{n}\left(\frac{\partial T}{\partial u}(u,x_{k})\right)^{T}\theta\theta^{T}(T(u,x_{k})-y_{\tau(k)})}_{=:\psi_{\tau,S}(u)}.\]

Let \(\tau\in\mathfrak{S}_{n}\) and \(S=(X,Y,\theta)\in\Xi\). Using Assumption 1, separate \(I_{\tau}(S)=\sum_{j\in J(X)}\int_{\mathcal{U}_{j}(X)}\mathbb{1}_{B}\left(u-\alpha^{\prime}\psi_{\tau,S}(u)\right)\mathrm{d}\rho(u)\), where the differentiability structure \((\mathcal{U}_{j}(X))_{j\in J(X)}\) is obtained using the respective differentiability structures: for each \(k\in\llbracket 1,n\rrbracket\), Assumption 1 yields a structure \((\mathcal{U}_{j_{k}}(x_{k}))_{j_{k}\in J_{k}(x_{k})}\) of \(u\longmapsto T(u,x_{k})\), which depends on \(x_{k}\), hence the \(k\) indices. To be precise, define for \(j=(j_{1},\cdots,j_{n})\in J_{1}(x_{1})\times\cdots\times J_{n}(x_{n})\), \(\mathcal{U}_{j}(X):=\bigcap_{k=1}^{n}\mathcal{U}_{j_{k}}(x_{k})\), and \(J(X):=\{(j_{1},\cdots,j_{n})\in J_{1}(x_{1})\times\cdots\times J_{n}(x_{n})\mid\mathcal{U}_{j}(X)\neq\varnothing\}\). In particular, for any \(k\in\llbracket 1,n\rrbracket,\ T(\cdot,x_{k})\) is \(\mathcal{C}^{2}\) on \(\mathcal{U}_{j}(X)\). Notice that the derivatives are not necessarily defined on the border \(\partial\mathcal{U}_{j}(X)\), which is of Lebesgue measure 0 by Assumption 1, thus the values of the derivatives on the border do not change the value of the integrals (the integrals may have the value \(+\infty\), depending on the behaviour of \(\phi_{\tau,S}\), but we shall see that they are all finite when \(\alpha\) is small enough). We drop the \(S,\tau\) indices in the notation, and focus on the properties of \(\phi\) and \(\psi\) as functions of \(u\). Our first objective is to determine a constant \(K>0\), independent of \(u,S,\tau\), such that \(\psi\) is \(K\)-Lipschitz on \(\mathcal{U}_{j}(X)\). First, for \(k\in\llbracket 1,n\rrbracket\), let \(\chi_{k}:u\in\mathcal{U}_{j}(X)\longmapsto\left(\frac{\partial T}{\partial u}(u,x_{k})\right)^{T}\theta\theta^{T}T(u,x_{k})\). \(\chi_{k}\) is of class \(\mathcal{C}^{1}\), therefore we determine its Lipschitz constant by upper-bounding the \(\|\cdot\|_{2}\)-induced operator norm of its differential, denoted by \(\left\|\frac{\partial\chi_{k}}{\partial u}(u)\right\|_{2}\). Notice that \(\chi_{k}(u)=\frac{1}{2}\frac{\partial}{\partial u}(\theta\cdot T(u,x_{k}))^{2}\).
Now \(\left\|\frac{\partial^{2}}{\partial u^{2}}(\theta\cdot T(u,x_{k}))^{2}\right\|_{2}\leq d_{u}\max\limits_{(i_{1},i_{2})\in\llbracket 1,d_{u}\rrbracket^{2}}\left|\frac{\partial^{2}}{\partial u_{i_{1}}\partial u_{i_{2}}}(\theta\cdot T(u,x_{k}))^{2}\right|\), with, by Assumption 5,

\[\left|\frac{\partial^{2}}{\partial u_{i_{1}}\partial u_{i_{2}}}(\theta\cdot T(u,x_{k}))^{2}\right|\leq\sum_{(i_{3},i_{4})\in\llbracket 1,d_{y}\rrbracket^{2}}\left|\theta_{i_{3}}\theta_{i_{4}}\frac{\partial^{2}}{\partial u_{i_{1}}\partial u_{i_{2}}}\Big{(}[T(u,x_{k})]_{i_{3}}[T(u,x_{k})]_{i_{4}}\Big{)}\right|\leq d_{y}^{2}M.\]

We obtain that \(\chi_{k}\) is \(\frac{1}{2}d_{u}d_{y}^{2}M\)-Lipschitz. Second, let \(\omega_{k}:u\in\mathcal{U}_{j}(X)\longmapsto\left(\frac{\partial T}{\partial u}(u,x_{k})\right)^{T}\theta\theta^{T}y_{\tau(k)}\), also of class \(\mathcal{C}^{1}\). We re-write \(\left[\frac{\partial\omega_{k}}{\partial u}(u)\right]_{i_{1},i_{2}}=y_{\tau(k)}^{T}\theta\theta^{T}\frac{\partial^{2}T}{\partial u_{i_{1}}\partial u_{i_{2}}}(u,x_{k})\), and conclude similarly by Assumption 5 that \(\omega_{k}\) is \(\|y_{\tau(k)}\|_{2}d_{u}M\)-Lipschitz. Finally, \(\psi=\sum\limits_{k=1}^{n}(\chi_{k}-\omega_{k})\), and is therefore \(K:=(\frac{1}{2}d_{y}^{2}+R_{y})d_{u}nM\)-Lipschitz, with \(R_{y}\) from Assumption 3. We have proven that \(\left\|\frac{\partial\psi}{\partial u}(u)\right\|_{2}\leq K\) for any \(u\in\mathcal{U}_{j}(X)\), and that \(K\) does not depend on \(X,Y,\theta,j\) or \(u\). We now suppose that \(\alpha^{\prime}<\frac{1}{K}\), which is to say \(\alpha<\frac{n}{2K}\). Under this condition, \(\phi:\mathcal{U}_{j}(X)\longrightarrow\mathbb{R}^{d_{u}}\) is injective. Indeed, if \(\phi(u_{1})=\phi(u_{2})\), then \(\|u_{1}-u_{2}\|_{2}=\alpha^{\prime}\|\psi(u_{1})-\psi(u_{2})\|_{2}\leq\alpha^{\prime}K\|u_{1}-u_{2}\|_{2}\), thus \(u_{1}=u_{2}\). Furthermore, for any \(u\in\mathcal{U}_{j}(X)\), \(\frac{\partial\phi}{\partial u}(u)=\mathrm{Id}_{\mathbb{R}^{d_{u}}}-\alpha^{\prime}\frac{\partial\psi}{\partial u}(u)\), with \(\left\|\alpha^{\prime}\frac{\partial\psi}{\partial u}(u)\right\|_{2}<1\), thus the matrix \(\frac{\partial\phi}{\partial u}(u)\) is invertible (using the Neumann series method). By the global inverse function theorem, \(\phi:\mathcal{U}_{j}(X)\longrightarrow\phi(\mathcal{U}_{j}(X))\) is a \(\mathcal{C}^{1}\)-diffeomorphism. Re-writing \(\int_{\mathcal{U}_{j}(X)}\mathbb{1}_{B}(\phi(u))\mathrm{d}\rho(u)=\phi\#\rho_{|\mathcal{U}_{j}(X)}(B)\), we have now shown that \(\phi\) is a \(\mathcal{C}^{1}\)-diffeomorphism, thus since \(\rho\ll\lambda\), \(\phi\#\rho_{|\mathcal{U}_{j}(X)}\ll\lambda\). It then follows that each integral is \(0\), then \(I_{\tau}(S)=0\), and finally \(\rho P_{\alpha}(B)=0\).

Now that we have verified Condition 1, Condition 2 and Condition 3, we can apply (Bianchi et al., 2022), Theorem 2 to \(F\). Let \(\alpha_{1}<\alpha_{0}\) (see Proposition 4).

**Theorem 1** (Convergence of the interpolated SGD trajectories).: _Consider a neural network \(T\) and measures \(\mathsf{x}\), \(\mathsf{y}\) satisfying Assumption 1, Assumption 2, Assumption 3, Assumption 4 and Assumption 5. Let \((u^{(t)}_{\alpha})\), \(\alpha\in\,]0,\alpha_{1}]\), \(t\in\mathbb{N}\), be a collection of SGD trajectories associated to (4). Consider \((u_{\alpha})\) their associated interpolations._
_For any compact \(\mathcal{K}\subset\mathbb{R}^{d_{u}}\) and any \(\varepsilon>0\), we have:_

\[\lim_{\begin{subarray}{c}\alpha\longrightarrow 0\\ \alpha\in\,]0,\alpha_{1}]\end{subarray}}\nu\otimes\mathsf{x}^{\otimes\mathbb{N}}\otimes\mathsf{y}^{\otimes\mathbb{N}}\otimes\mathsf{o}^{\otimes\mathbb{N}}\left(d_{c}(u_{\alpha},S_{-\partial_{C}F}(\mathcal{K}))>\varepsilon\right)=0. \tag{8}\]

The distance \(d_{c}\) is defined in (7). As the learning rate decreases, the interpolated trajectories approach the trajectory set \(S_{-\partial_{C}F}(\mathcal{K})\), whose elements essentially solve the _gradient flow equation_ \(\dot{u}(s)=-\nabla F(u(s))\) (ignoring the set of non-differentiability, which is \(\lambda_{\mathbb{R}^{d_{u}}}\)-null). To get a tangible idea of the concepts at play: if \(F\) were \(\mathcal{C}^{2}\) and had a finite number of critical points, then one would have the convergence of a solution \(u(s)\) to a critical point of \(F\) as \(s\longrightarrow+\infty\). These results have implicit consequences on the value of the parameters at the "end" of training for low learning rates, which is why we will consider a variant of SGD for which we can state more precise results on the convergence of the parameters.

## 4 Convergence of Noised Projected SGD Schemes on \(F\)

In practice, it is seldom desirable for the parameters of a neural network to reach extremely large values during training. Weight clipping is a common (although contentious) method of enforcing that \(T(u,\cdot)\) stay Lipschitz, which is desirable for theoretical reasons. For instance, the 1-Wasserstein duality in Wasserstein GANs (Arjovsky et al., 2017) requires Lipschitz networks, and similarly, Sliced-Wasserstein GANs (Deshpande et al., 2018) use weight clipping to enforce that their networks be Lipschitz. Given a radius \(r>0\), we consider SGD schemes that are restricted to \(u\in\overline{B}(0,r)=:B_{r}\), by performing _projected_ SGD. At each step \(t\), we also add a noise \(a\varepsilon^{(t+1)}\), where \(\varepsilon^{(t+1)}\) is an additive noise of law \(\eta\ll\lambda_{\mathbb{R}^{d_{u}}}\), which is often taken as standard Gaussian in practice. These additions yield the following SGD scheme:

\[\begin{split} u^{(t+1)}=\pi_{r}\left(u^{(t)}-\alpha\varphi(u^{(t)},X^{(t+1)},Y^{(t+1)},\theta^{(t+1)})+\alpha a\varepsilon^{(t+1)}\right),\\ \left(u^{(0)},(X^{(t)})_{t\in\mathbb{N}},\ (Y^{(t)})_{t\in\mathbb{N}},\ (\theta^{(t)})_{t\in\mathbb{N}},\ (\varepsilon^{(t)})_{t\in\mathbb{N}}\right)\sim\nu\otimes\mathsf{x}^{\otimes\mathbb{N}}\otimes\mathsf{y}^{\otimes\mathbb{N}}\otimes\mathsf{o}^{\otimes\mathbb{N}}\otimes\eta^{\otimes\mathbb{N}},\end{split} \tag{9}\]

where \(\pi_{r}:\mathbb{R}^{d_{u}}\longrightarrow B_{r}\) denotes the orthogonal projection onto the ball \(B_{r}:=\overline{B}(0,r)\).
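A minimal sketch of one iteration of (9), assuming a standard Gaussian noise law \(\eta\) and abstracting the a.e. gradient \(\varphi\) into a generic `grad` callable (both illustrative assumptions):

```python
import numpy as np

def noised_projected_sgd_step(u, grad, alpha, a, r, rng):
    """One iteration of (9): SGD step plus alpha * a * eps additive noise,
    followed by the orthogonal projection pi_r onto the ball B(0, r)."""
    eps = rng.normal(size=u.shape)          # eps^{(t+1)} ~ eta, assumed standard Gaussian
    v = u - alpha * grad(u) + alpha * a * eps
    norm = np.linalg.norm(v)
    return v if norm <= r else (r / norm) * v

# Toy usage on F(u) = ||u||^2 / 2 (an illustrative stand-in for the SW loss).
rng = np.random.default_rng(0)
u = rng.normal(size=5)
for _ in range(100):
    u = noised_projected_sgd_step(u, lambda w: w, alpha=0.1, a=0.01, r=2.0, rng=rng)
```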
Thanks to Condition 1, Condition 2 and the additional noise, we can verify the assumptions of (Bianchi et al., 2022), Theorem 4, yielding the same result as Theorem 1 for the noised projected scheme (9). In fact, under additional assumptions, we shall prove a stronger mode of convergence for the aforementioned trajectories. The natural context in which to perform gradient descent is on functions that admit a chain rule, which is formalised in the case of almost-everywhere differentiability by the notion of _path differentiability_, as studied thoroughly in (Bolte and Pauwels, 2021). We formulate this condition from (Bianchi et al., 2022) before presenting sufficient conditions on \(T\) under which path differentiability shall hold.

**Condition 4**.: _\(F\) is path differentiable, which is to say that for any \(u\in\mathcal{C}_{\mathrm{abs}}(\mathbb{R}_{+},\mathbb{R}^{d_{u}})\), for almost all \(s>0\), \(\forall v\in\partial_{C}F(u(s)),\ v\cdot\dot{u}(s)=(F\circ u)^{\prime}(s)\)._

Note that by (Bolte and Pauwels, 2021), Corollary 2, \(F\) is path differentiable if and only if \(\partial_{C}F\) is a conservative field for \(F\) (in the sense of (Bolte and Pauwels, 2021), Definition 1), if and only if \(F\) admits a chain rule for \(\partial_{C}\) (which is the formulation chosen in Condition 4 by Bianchi et al. (2022)). In order to satisfy Condition 4, we need to make the assumption that the NN input measure \(\mathsf{x}\) and the data measure \(\mathsf{y}\) are discrete measures, which is the case for \(\mathsf{y}\) in the case of generative neural networks, but is less realistic for \(\mathsf{x}\) in practice. We define \(\Sigma_{n}\) as the \(n\)-simplex: its elements are the \(a\in\mathbb{R}^{n}\) such that \(\forall i\in\llbracket 1,n\rrbracket,\ a_{i}\geq 0\) and \(\sum_{i}a_{i}=1\).

**Assumption 6**.: _One may write \(\mathsf{x}=\sum_{k=1}^{n_{x}}a_{k}\delta_{x_{k}}\) and \(\mathsf{y}=\sum_{k=1}^{n_{y}}b_{k}\delta_{y_{k}}\), with the coefficient vectors \(a\in\Sigma_{n_{x}}\), \(b\in\Sigma_{n_{y}}\), \(\mathcal{X}=\{x_{1},\cdots,x_{n_{x}}\}\subset\mathbb{R}^{d_{x}}\) and \(\mathcal{Y}=\{y_{1},\cdots,y_{n_{y}}\}\subset\mathbb{R}^{d_{y}}\)._

There is little practical reason to consider non-uniform measures; however, the generalisation to any discrete measure makes no theoretical difference. Note that Assumption 3 is clearly implied by Assumption 6. In order to show that \(F\) is path differentiable, we require the natural assumption that each \(T(\cdot,x)\) is path differentiable. Since \(T(\cdot,x)\) is a vector-valued function, we need to extend the notion of path-differentiability. Thankfully, Bolte and Pauwels (2021) define _conservative mappings_ for vector-valued locally Lipschitz functions (Definition 4), which allows us to define naturally the path differentiability of a vector-valued function as the path-differentiability of all of its coordinate functions.

**Assumption 7**.: _For any \(x\in\mathbb{R}^{d_{x}}\), \(T(\cdot,x)\) is path differentiable._

Assumption 7 holds as soon as each \(T(\cdot,x)\) is semi-algebraic (i.e. piecewise polynomial, where the pieces are in finite number and can be written through polynomial equations) or, more generally, definable (see (Davis et al., 2020), Definition 5.10), as proven by (Davis et al., 2020), Theorem 5.8. This is the case for iterated compositions of linear maps and definable activation functions (such as the widespread sigmoid and ReLU); see (Davis et al., 2020), Corollary 5.11, as well as (Bolte and Pauwels, 2021), §6.2 for further explanations on suitable NNs.

**Proposition 5**.: _Under Assumption 2, Assumption 6 and Assumption 7, \(F\) is path differentiable._

Proof.: We shall use repeatedly the property that the composition of path differentiable functions remains path differentiable, which is proved in (Bolte and Pauwels, 2021), Lemma 6. Let

\[\mathcal{E}:\left\{\begin{array}{rcl}\mathbb{R}^{n\times d_{y}}\times\mathbb{R}^{n\times d_{y}}&\longrightarrow&\mathbb{R}_{+}\\ (Y,Y^{\prime})&\longmapsto&\mathrm{SW}^{2}_{2}(\gamma_{Y},\gamma_{Y^{\prime}})\end{array}\right..\]

By (Tanguy et al., 2023), Proposition 2.4.3, each \(\mathcal{E}(\cdot,Y)\) is semi-concave and is thus path differentiable (by (Tanguy et al., 2023), Proposition 4.3.3).
Thanks to Assumption 6, \(\mathsf{x}^{\otimes n}\) and \(\mathsf{y}^{\otimes n}\) are discrete measures on \(\mathbb{R}^{n\times d_{x}}\) and \(\mathbb{R}^{n\times d_{y}}\) respectively, allowing one to write \(\mathsf{x}^{\otimes n}=\sum_{k}a_{k}\delta_{X_{k}}\) and \(\mathsf{y}^{\otimes n}=\sum_{l}b_{l}\delta_{Y_{l}}\). Then \(F=u\longmapsto\sum_{k,l}a_{k}b_{l}\mathcal{E}(T(u,X_{k}),Y_{l})\) is path differentiable as a sum ((Bolte and Pauwels, 2021), Corollary 4) of compositions ((Bolte and Pauwels, 2021), Lemma 6) of path differentiable functions.

We have now satisfied all the assumptions needed to apply (Bianchi et al., 2022), Theorem 6, showing that trajectories of (9) converge towards \(\mathcal{Z}_{r}\), the set of _Karush-Kuhn-Tucker_ points related to the differential inclusion tied to the discrete scheme (9):

\[\mathcal{Z}_{r}:=\left\{u\in\mathbb{R}^{d_{u}}\ |\ 0\in-\partial_{C}F(u)-\mathcal{N}_{r}(u)\right\},\quad\mathcal{N}_{r}(u)=\left\{\begin{array}{c}\{0\}\text{ if }\|u\|_{2}<r\\ \{\lambda u\ |\ \lambda\geq 0\}\text{ if }\|u\|_{2}=r\\ \varnothing\text{ if }\|u\|_{2}>r\end{array}\right., \tag{10}\]

where \(\mathcal{N}_{r}(u)\) refers to the _normal cone_ of the ball \(\overline{B}(0,r)\) at \(u\). The term \(\mathcal{N}_{r}(u)\) in (10) only makes a difference in the pathological case \(\|u\|_{2}=r\), which never happens in practice, since the idea behind projecting is to do so on a very large ball, in order to avoid gradient explosion, to limit the Lipschitz constant and to satisfy theoretical assumptions. Omitting the \(\mathcal{N}_{r}(u)\) term, and denoting \(\mathcal{D}\) the set of points where \(F\) is differentiable, (10) simplifies to \(\mathcal{Z}_{r}\cap\mathcal{D}=\{u\in\mathcal{D}\ |\ \nabla F(u)=0\}\), i.e. the critical points of \(F\) for the usual differential. Like in Theorem 1, we let \(\alpha_{1}<\alpha_{0}\), where \(\alpha_{0}\) is defined in Proposition 4.

**Theorem 2** (Bianchi et al. (2022), Theorem 6 applied to (9)).: _Consider a neural network \(T\) and measures \(\mathsf{x}\), \(\mathsf{y}\) satisfying Assumption 1, Assumption 2, Assumption 4, Assumption 5, Assumption 6 and Assumption 7. Let \((u^{(t)}_{\alpha})_{t\in\mathbb{N}}\) be SGD trajectories defined by (9) for \(r>0\) and \(\alpha\in\,]0,\alpha_{1}]\). One has_

\[\forall\varepsilon>0,\ \overline{\lim_{t\longrightarrow+\infty}}\ \nu\otimes\mathsf{x}^{\otimes\mathbb{N}}\otimes\mathsf{y}^{\otimes\mathbb{N}}\otimes\mathsf{o}^{\otimes\mathbb{N}}\otimes\eta^{\otimes\mathbb{N}}\left(d(u^{(t)}_{\alpha},\mathcal{Z}_{r})>\varepsilon\right)\xrightarrow[\begin{subarray}{c}\alpha\longrightarrow 0\\ \alpha\in\,]0,\alpha_{1}]\end{subarray}]{}0.\]

The distance \(d\) above is the usual euclidean distance. Theorem 2 essentially shows that as the learning rate approaches \(0\), the long-run limits of the SGD trajectories approach the set \(\mathcal{Z}_{r}\) in probability. Omitting the points of non-differentiability and the pathological case \(\|u\|_{2}=r\), the general idea is that \(u^{(\infty)}_{\alpha}\xrightarrow[\alpha\longrightarrow 0]{}\{u\ :\ \nabla F(u)=0\}\), which is the convergence that would be achieved by the gradient flow of \(F\) in the simpler case of \(\mathcal{C}^{2}\) smoothness.

## 5 Conclusion and Outlook

Under reasonable assumptions, we have shown that SGD trajectories of parameters of generative NNs with an SW loss converge towards the desired sub-gradient flow solutions, implying in a weak sense the convergence of said trajectories.
## 5 Conclusion and Outlook

Under reasonable assumptions, we have shown that SGD trajectories of parameters of generative NNs with a SW loss converge towards the desired sub-gradient flow solutions, implying in a weak sense the convergence of said trajectories. Under stronger assumptions, we have shown that trajectories of a mildly modified SGD scheme converge towards a set of generalised critical points of the loss, which provides a previously missing convergence result for such optimisation problems. The core limitation of this theoretical work is the assumption that the input data measure \(\mathsf{x}\) is discrete (Assumption 6), which we required in order to prove that the loss \(F\) is path differentiable. In order to generalise to a non-discrete measure, one would need to apply or prove a result on the stability of path differentiability under integration: in our case, we want to show that \(\int_{\mathcal{X}^{n}}\mathcal{E}(T(u,X),Y)\mathrm{d}\mathsf{x}^{\otimes n}(X)\) is path differentiable, knowing that \(u\longmapsto\mathcal{E}(T(u,X),Y)\) is path differentiable by composition (see the proof of Proposition 5 for the justification). Unfortunately, in general, if each \(g(\cdot,x)\) is path differentiable, it is not always the case that \(\int g(\cdot,x)\mathrm{d}x\) is path differentiable (at the very least, there is no theorem stating this, even in the simpler case of tame functions, see (Bianchi et al., 2022), Section 6.1). There is such a theorem for _Clarke regular_ functions (specifically (Clarke, 1990), Theorem 2.7.2 with Remark 2.3.5); sadly, the composition of Clarke regular functions is not always Clarke regular, and it is only known to be so in excessively restrictive cases (see (Clarke, 1990), Theorems 2.3.9 and 2.3.10). As a result, we leave the generalisation to a non-discrete input measure \(\mathsf{x}\) for future work. Another avenue for future study would be to tie the flow approximation result from Theorem 1 to Sliced Wasserstein Flows (Liutkus et al., 2019; Bonet et al., 2022). The difficulty in seeing the differential inclusion (5) as a flow of \(F\) lies in the non-differentiable nature of the functions at play, as well as in the presence of the composition between SW and the neural network \(T\), which interacts poorly with Clarke sub-differentials.

### Acknowledgements

We thank Julie Delon for proof-reading and general feedback, as well as Remi Flamary and Alain Durmus for fruitful discussions.
2304.06738
A Study of Biologically Plausible Neural Network: The Role and Interactions of Brain-Inspired Mechanisms in Continual Learning
Humans excel at continually acquiring, consolidating, and retaining information from an ever-changing environment, whereas artificial neural networks (ANNs) exhibit catastrophic forgetting. There are considerable differences in the complexity of synapses, the processing of information, and the learning mechanisms in biological neural networks and their artificial counterparts, which may explain the mismatch in performance. We consider a biologically plausible framework that comprises separate populations of exclusively excitatory and inhibitory neurons that adhere to Dale's principle, in which the excitatory pyramidal neurons are augmented with dendritic-like structures for context-dependent processing of stimuli. We then conduct a comprehensive study on the role and interactions of different mechanisms inspired by the brain, including sparse non-overlapping representations, Hebbian learning, synaptic consolidation, and replay of past activations that accompanied the learning event. Our study suggests that employing multiple complementary mechanisms in a biologically plausible architecture, similar to the brain, may be effective in enabling continual learning in ANNs.
Fahad Sarfraz, Elahe Arani, Bahram Zonooz
2023-04-13T16:34:12Z
http://arxiv.org/abs/2304.06738v1
# A Study of Biologically Plausible Neural Network: The Role and Interactions of Brain-Inspired Mechanisms in Continual Learning

###### Abstract

Humans excel at continually acquiring, consolidating, and retaining information from an ever-changing environment, whereas artificial neural networks (ANNs) exhibit catastrophic forgetting. There are considerable differences in the complexity of synapses, the processing of information, and the learning mechanisms in biological neural networks and their artificial counterparts, which may explain the mismatch in performance. We consider a biologically plausible framework that comprises separate populations of exclusively excitatory and inhibitory neurons that adhere to Dale's principle, in which the excitatory pyramidal neurons are augmented with dendritic-like structures for context-dependent processing of stimuli. We then conduct a comprehensive study on the role and interactions of different mechanisms inspired by the brain, including sparse non-overlapping representations, Hebbian learning, synaptic consolidation, and replay of past activations that accompanied the learning event. Our study suggests that employing multiple complementary mechanisms in a biologically plausible architecture, similar to the brain, may be effective in enabling continual learning in ANNs.

Footnote 1: Our code is publicly available at [https://github.com/NeurAI-Lab/Bio-ANN](https://github.com/NeurAI-Lab/Bio-ANN).

## 1 Introduction

The human brain excels at continually learning from a dynamically changing environment, whereas standard artificial neural networks (ANNs) are inherently designed for training on stationary i.i.d. data. Sequential learning of tasks in continual learning (CL) violates this strong assumption, resulting in catastrophic forgetting. Although ANNs are inspired by biological neurons (Fukushima, 1980), they omit numerous details of the design principles and learning mechanisms in the brain. These fundamental differences may account for the mismatch in performance and behavior. Biological neural networks are characterized by considerably more complex synapses and dynamic, context-dependent processing of information. In addition, individual neurons have a specific role: each presynaptic neuron has an exclusively excitatory or inhibitory impact on its postsynaptic partners, as postulated by Dale's principle (Strata et al., 1999). Furthermore, distal dendritic segments in pyramidal neurons, which comprise the majority of excitatory cells in the neocortex, receive additional context information and enable context-dependent processing of information. This, in conjunction with inhibition, allows the network to learn task-specific patterns and avoid catastrophic forgetting (Yang et al., 2014; Iyer et al., 2022; Barron et al., 2017). Furthermore, replay of non-overlapping and sparse neural activities of previous experiences in the neocortex and hippocampus is considered to play a critical role in memory formation, consolidation, and retrieval (Walker & Stickgold, 2004; McClelland et al., 1995). To protect information from erasure, the brain employs synaptic consolidation, in which plasticity rates are selectively reduced in proportion to strengthened synapses (Cichon and Gan, 2015). Thus, we study the role and interactions of different brain-inspired mechanisms in a biologically plausible framework in a CL setup.
The underlying model comprises separate populations of exclusively excitatory and inhibitory neurons in each layer, which adheres to Dale's principle (Cornford et al., 2020), and the excitatory neurons (mimicking pyramidal cells) are augmented with dendrite-like structures for context-dependent processing of information (Iyer et al., 2022). Dendritic segments process an additional context signal encoding task information and subsequently modulate the feedforward activity of the excitatory neuron (Figure 1). We then systematically study the effect of controlling the overlap in representations, employing the "fire together, wire together" learning paradigm, and using experience replay and synaptic consolidation. Our study shows that:

1. An ANN architecture equipped with context-dependent processing of information by dendrites and adhering to Dale's principle can learn effectively in a CL setup. Importantly, accounting for the discrepancy in the effect of weight changes in excitatory and inhibitory neurons further reduces forgetting in CL.

2. Enforcing different levels of activation sparsity in the hidden layers using k-winner-take-all activations and employing a complementary dropout mechanism (Heterogeneous Dropout) that encourages the model to use a different set of active neurons for each task can effectively control the overlap in representations, and hence reduce interference while allowing for reusability. This provides us with a more biologically plausible framework within which we can study the role of different brain-inspired mechanisms and provide insights for designing new CL methods.

3. Task similarities need to be considered when enforcing such constraints, to allow for a balance between forward transfer and interference.

4. Mimicking the ubiquitous "fire together, wire together" learning rule in the brain through a Hebbian update step on the connections between the context signal and the dendritic segments further strengthens context gating and facilitates the formation of task-specific subnetworks.

5. We show that employing both synaptic consolidation, with importance measures adjusted to take into account the discrepancy in the effect of weight changes, and a replay mechanism in a context-specific manner is critical for consolidating information across different tasks, especially for challenging CL settings.

Figure 1: Architecture of one hidden layer in the biologically plausible framework. Each layer consists of separate populations of exclusively excitatory pyramidal cells and inhibitory neurons, which adhere to Dale's principle. The shade indicates the strength of weights or activations, with a darker shade indicating a higher value. (a) The pyramidal cells are augmented with dendritic segments, which receive an additional context signal \(c\), and the dendritic segment whose weights are most aligned with the context vector (bottom row) is selected to modulate the output activity of the feedforward neurons for context-dependent processing of information. (b) The Hebbian update step further strengthens the association between the context and the winning dendritic segment with maximum absolute value (indicated with a darker shade in the bottom row). Finally, Heterogeneous dropout keeps the activation count of each pyramidal cell (indicated with the gray shade) and drops the neurons that were most active for the previous task (the darkest shade dropped) to enforce non-overlapping representations. The top-k remaining cells then project to the next layer (increased shade).
Our study suggests that employing multiple complementary mechanisms in a biologically plausible architecture, similar to what is believed to exist in the brain, can be effective in enabling CL in ANNs. To the best of our knowledge, we are the first to provide a comprehensive study of the integration of different brain-inspired mechanisms in a biologically plausible architecture in a CL setting.

## 2 Biologically Plausible Framework for CL

We provide details of the biologically plausible framework within which we conduct our study.

### Dale's Principle

Biological neural networks differ from their artificial counterparts in the complexity of synapses and the role of individual units. In particular, the majority of neurons in the brain adhere to Dale's principle, which posits that presynaptic neurons can only have an exclusively excitatory or inhibitory impact on their postsynaptic partners (Strata et al., 1999). Several studies show that the balanced dynamics (Murphy and Miller, 2009; Van Vreeswijk and Sompolinsky, 1996) of excitatory and inhibitory populations provide functional advantages, including efficient predictive coding (Boerlin et al., 2013) and pattern learning (Ingrosso and Abbott, 2019). Furthermore, inhibition is hypothesized to play a role in alleviating catastrophic forgetting (Barron et al., 2017). Standard ANNs, however, lack adherence to Dale's principle, as neurons contain both positive and negative output weights, and signs can change during learning. Cornford et al. (2020) incorporate Dale's principle into ANNs (referred to as DANNs), which take into account the distinct connectivity patterns of excitatory and inhibitory neurons (Tremblay et al., 2016) and perform comparably to standard ANNs on benchmark object recognition tasks. Each layer \(l\) comprises separate populations of excitatory, \(h_{e}^{l}\in\mathbb{R}_{+}^{n_{e}}\), and inhibitory, \(h_{i}^{l}\in\mathbb{R}_{+}^{n_{i}}\), neurons, where \(n_{e}\gg n_{i}\) and synaptic weights are strictly non-negative. Similar to biological networks, while both populations receive excitatory projections from the previous layer (\(h_{e}^{l-1}\)), only excitatory neurons project between layers, whereas inhibitory neurons inhibit the activity of excitatory units of the same layer. Cornford et al. (2020) characterized these properties by three sets of strictly positive weights: excitatory connections between layers \(W_{ee}^{l}\in\mathbb{R}_{+}^{n_{e}\times n_{e}}\), excitatory projections to inhibitory units \(W_{ie}^{l}\in\mathbb{R}_{+}^{n_{i}\times n_{e}}\), and inhibitory projections within the layer \(W_{ei}^{l}\in\mathbb{R}_{+}^{n_{e}\times n_{i}}\). The output of the excitatory units is impacted by the subtractive inhibition from the inhibitory units:

\[z^{l}=(W_{ee}^{l}-W_{ei}^{l}W_{ie}^{l})h_{e}^{l-1}+b^{l} \tag{1}\]

where \(b^{l}\in\mathbb{R}^{n_{e}}\) is the bias term. Figure 1 shows the interactions and connectivity between excitatory pyramidal cells (triangle symbol) and inhibitory neurons (denoted by \(i\)). We employ DANNs as the feedforward backbone to show that they can also learn in a challenging CL setting with performance comparable to standard ANNs, and to provide a biologically plausible framework for further studying the role of inhibition in alleviating catastrophic forgetting.
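To make Eq. (1) concrete, the following is a minimal PyTorch-style sketch of a Dale-compliant layer. It is an illustration under assumptions rather than the reference DANN implementation: here non-negativity is enforced by a simple clamp at zero, whereas the parameterisation used by Cornford et al. (2020) may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DaleLayer(nn.Module):
    """Sketch of Eq. (1): z = (W_ee - W_ei @ W_ie) h + b, with all three
    weight matrices constrained to be non-negative."""
    def __init__(self, n_in, n_e, n_i):
        super().__init__()
        self.W_ee = nn.Parameter(0.1 * torch.rand(n_e, n_in))  # excitatory -> excitatory
        self.W_ie = nn.Parameter(0.1 * torch.rand(n_i, n_in))  # excitatory -> inhibitory
        self.W_ei = nn.Parameter(0.1 * torch.rand(n_e, n_i))   # inhibitory -> excitatory
        self.b = nn.Parameter(torch.zeros(n_e))

    def forward(self, h):
        # Clamping at zero keeps the effective weights non-negative
        # (one simple way to respect Dale's principle during training).
        W_ee, W_ie, W_ei = (F.relu(W) for W in (self.W_ee, self.W_ie, self.W_ei))
        return h @ (W_ee - W_ei @ W_ie).T + self.b
```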
### Active Dendrites

The brain employs specific structures and mechanisms for context-dependent processing and routing of information. The prefrontal cortex, which plays an important role in cognitive control (Miller and Cohen, 2001), receives sensory inputs as well as contextual information, which allows it to choose the sensory features most relevant for the present task to guide actions (Mante et al., 2013; Fuster, 2015; Siegel et al., 2015; Zeng et al., 2019). Of particular interest are pyramidal cells, which represent the most populous members of the excitatory family of neurons in the brain (Bekkers, 2011). The dendritic spines in pyramidal cells exhibit highly non-linear integrative properties that are considered important for learning task-specific patterns (Yang et al., 2014). Pyramidal cells integrate a range of diverse inputs into multiple independent dendritic segments, allowing contextual inputs in active dendrites to modulate the response of a neuron, making it more likely to fire. Standard ANNs, however, are based on a point neuron model (Lapique, 1907), an oversimplified model of biological computation that lacks the sophisticated non-linear and context-dependent behavior of pyramidal cells. Iyer et al. (2022) model these integrative properties of dendrites by augmenting each neuron with a set of dendritic segments. Multiple dendritic segments receive additional contextual information, which is processed using a separate set of weights. The resultant dendritic output modulates the feedforward activation, which is computed as a linearly weighted sum of the feedforward inputs. This computation results in a neuron whose magnitude of response to a given stimulus is highly context-dependent. To enable task-specific processing of information, the prototype vector for task \(\tau\) is computed as the element-wise mean of the task's samples, \(\mathcal{D}_{\tau}\), at the beginning of the task and is subsequently provided as context during training.

\[c_{\tau}=\frac{1}{\left|\mathcal{D}_{\tau}\right|}\sum_{x\in\mathcal{D}_{\tau }}x \tag{2}\]

During inference, the prototype vector closest to each test sample, \(x^{\prime}\), is selected as the context, using the Euclidean distance over all task prototypes, \(\mathbf{C}\), stored in memory.

\[c^{\prime}=\operatorname*{arg\,min}_{c_{\tau}\in\mathbf{C}}\left\|x^{\prime}-c_{\tau}\right\|_{2} \tag{3}\]

Following Iyer et al. (2022), we augment the excitatory units in each layer with dendritic segments (Figure 1 (a)). The feedforward activity of excitatory units is modulated by the dendritic segments, which receive a context vector. Each dendritic segment \(j\) computes \(u_{j}^{T}c\), where \(u_{j}\in\mathbb{R}^{d}\) is its weight vector and \(c\in\mathbb{R}^{d}\) is the context vector, with \(d\) the dimension of the input image. For excitatory neurons, the dendritic segment with the highest response to the context (maximum absolute value, with the sign retained) is selected to modulate the output activity.

\[h_{e}^{l}=\text{\emph{k-WTA}}(z^{l}\times\sigma(u_{\kappa}^{T}c)),\qquad \text{ where }\kappa=\operatorname*{arg\,max}_{j}|u_{j}^{T}c| \tag{4}\]

where \(\sigma\) is the sigmoid function (Han & Moraga, 1995) and _k-WTA_(.) is the k-Winner-Take-All activation function (Ahmad & Scheinkman, 2019), which propagates only the top \(k\) neurons and sets the rest to zero. This provides us with a biologically plausible framework where, similar to biological networks, the feedforward neurons adhere to Dale's principle and the excitatory neurons mimic the integrative properties of active dendrites for context-dependent processing of stimuli.
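A compact sketch of Eqs. (2)-(4), assuming PyTorch; shapes, function names, and the per-unit gating layout are illustrative rather than taken from the reference code:

```python
import torch

def task_prototype(task_inputs):
    # Eq. (2): the context for a task is the element-wise mean of its samples.
    return task_inputs.mean(dim=0)

def select_context(x, prototypes):
    # Eq. (3): at inference, pick the stored prototype closest to the sample.
    dists = torch.cdist(x.flatten().unsqueeze(0), prototypes)  # [1, n_tasks]
    return prototypes[dists.argmin()]

def active_dendrite_forward(z, U, c, frac_active=0.05):
    """Eq. (4): U has shape [n_units, n_segments, d]; each excitatory unit is
    gated by its maximally responding dendritic segment, then k-WTA is applied."""
    resp = U @ c                                    # u_j^T c, shape [n_units, n_segments]
    kappa = resp.abs().argmax(dim=1)                # winning segment per unit (sign retained)
    gate = torch.sigmoid(resp.gather(1, kappa.unsqueeze(1)).squeeze(1))
    gated = z * gate
    k = max(1, int(frac_active * gated.numel()))
    topk = torch.topk(gated, k)
    out = torch.zeros_like(gated)
    out[topk.indices] = topk.values                 # k-WTA: keep only the top-k units
    return out
```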
## 3 Continual Learning Settings

To study the role of the different brain-inspired components in a biologically plausible NN for CL, and to gauge their effect on the performance and characteristics of the model, we conduct all our experiments under uniform settings. Implementation details and the experimental setup are provided in the Appendix. We evaluate the models in two CL scenarios. **Domain incremental learning (Domain-IL)** refers to the CL scenario in which the classes remain the same in subsequent tasks but the input distribution changes. We consider Rot-MNIST, which involves classifying the 10 digits in each task with each digit rotated by an angle between 0 and 180 degrees, and Perm-MNIST, which applies a fixed random permutation to the pixels for each task. Importantly, there are different variants of Rot-MNIST with varying difficulty. We incrementally rotate the digits by a fixed amount, i.e. {0, 8,..., (N-1)*8} degrees for tasks \(\{\tau_{1},\tau_{2},..,\tau_{N}\}\), which is substantially more challenging than randomly sampling rotations. Importantly, the Rot-MNIST dataset captures the notion of similarity in subsequent tasks, where the similarity between two tasks is defined by the difference in their degrees of rotation, whereas each task in Perm-MNIST is independent. We also consider the challenging **Class incremental learning (Class-IL)** scenario, where new classes are added with each subsequent task and the agent must learn to distinguish not only amongst the classes within the current task but also across all learned tasks. Seq-MNIST divides the MNIST classification into 5 tasks with 2 classes per task.

## 4 Empirical Evaluation

To investigate the impact of the different brain-inspired components, we use the aforementioned biologically plausible framework and study their effect on the performance and characteristics of the model.

### Effect of Inhibitory Neurons

We first study whether feedforward networks with separate populations of excitatory and inhibitory units can work well in the CL setting. Importantly, we note that when learning a sequence of tasks with inhibitory neurons, it is beneficial to take into account the disparities in the degree to which updates to different parameters affect the layer's output distribution (Cornford et al., 2020), and hence forgetting. Specifically, since \(W^{l}_{ie}\) and \(W^{l}_{ei}\) affect the output distribution to a higher degree than \(W^{l}_{ee}\), we reduce the learning rate for these weights after the first task (see Appendix). Table 1 shows that models with feedforward neurons adhering to Dale's principle perform on par with standard neurons and can further mitigate forgetting in conjunction with Active Dendrites when the quality of the context signal is high (as in the case of Perm-MNIST). Note that this gain comes with considerably fewer parameters and context-dependent processing, as we keep the number of neurons in each layer the same and only the excitatory neurons (\(\sim\)90%) are augmented with dendritic segments. For 20 tasks, Active Dendrites with Dale's principle reduces the parameter count from \(\sim\)70M to under \(\sim\)64M. We hypothesize that having separate populations within a layer enables them to play specialized roles. In particular, inhibitory neurons can selectively inhibit certain excitatory neurons based on the stimulus, which can further facilitate the formation of task-specific subnetworks and complement the context-dependent processing of information by dendritic segments.
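The learning-rate reduction for the inhibitory pathways described above can be sketched with standard optimizer parameter groups. This is a sketch under assumptions: the scaling `factor` below is an illustrative value, as the paper defers the exact setting to its Appendix.

```python
import torch

def dale_optimizer(layers, base_lr=0.01):
    # Keep W_ee separate from (W_ie, W_ei): updates to the inhibitory
    # pathways shift a layer's output distribution more strongly.
    excitatory = [l.W_ee for l in layers] + [l.b for l in layers]
    inhibitory = [w for l in layers for w in (l.W_ie, l.W_ei)]
    return torch.optim.SGD([{"params": excitatory, "lr": base_lr},
                            {"params": inhibitory, "lr": base_lr}])

def reduce_inhibitory_lr(optimizer, factor=0.1):
    # Called once after the first task finishes; `factor` is assumed.
    optimizer.param_groups[1]["lr"] *= factor
```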
\begin{table} \begin{tabular}{l|c c c|c c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Rot-MNIST} & \multicolumn{3}{c|}{Perm-MNIST} & \multirow{2}{*}{Seq-MNIST} \\ \cline{2-7} & 5 Tasks & 10 Tasks & 20 Tasks & 5 Tasks & 10 Tasks & 20 Tasks & \\ \hline Joint & 98.27\({}_{\pm 0.07}\) & 98.25\({}_{\pm 0.06}\) & 98.17\({}_{\pm 0.11}\) & 97.64\({}_{\pm 0.19}\) & 97.61\({}_{\pm 0.15}\) & 97.59\({}_{\pm 0.08}\) & 94.53\({}_{\pm 0.50}\) \\ SGD & 93.79\({}_{\pm 0.32}\) & 74.13\({}_{\pm 0.36}\) & 51.89\({}_{\pm 0.36}\) & 77.96\({}_{\pm 3.84}\) & 76.42\({}_{\pm 2.54}\) & 65.18\({}_{\pm 1.21}\) & 19.83\({}_{\pm 0.04}\) \\ \hline Active Dendrites & 92.58\({}_{\pm 0.26}\) & 71.06\({}_{\pm 0.32}\) & 48.18\({}_{\pm 0.52}\) & 95.71\({}_{\pm 0.27}\) & 94.41\({}_{\pm 0.21}\) & 91.74\({}_{\pm 0.34}\) & 19.97\({}_{\pm 0.29}\) \\ + Dale’s Principle & 92.28\({}_{\pm 0.27}\) & 70.78\({}_{\pm 0.23}\) & 48.79\({}_{\pm 0.27}\) & 96.18\({}_{\pm 0.14}\) & 95.20\({}_{\pm 0.11}\) & 92.44\({}_{\pm 0.20}\) & 19.79\({}_{\pm 0.08}\) \\ \hline + Hebbian Update & 92.58\({}_{\pm 0.35}\) & 71.22\({}_{\pm 0.86}\) & 48.88\({}_{\pm 0.90}\) & 95.90\({}_{\pm 0.24}\) & 94.72\({}_{\pm 0.29}\) & 92.55\({}_{\pm 0.47}\) & 19.88\({}_{\pm 0.02}\) \\ + HD & 93.24\({}_{\pm 0.25}\) & 75.50\({}_{\pm 0.74}\) & 51.11\({}_{\pm 0.76}\) & 96.58\({}_{\pm 0.17}\) & 95.94\({}_{\pm 0.24}\) & 93.20\({}_{\pm 0.32}\) & 38.45\({}_{\pm 0.27}\) \\ + SC & 93.34\({}_{\pm 0.57}\) & 75.94\({}_{\pm 1.15}\) & 64.99\({}_{\pm 2.19}\) & 96.54\({}_{\pm 0.29}\) & 96.14\({}_{\pm 0.47}\) & 95.43\({}_{\pm 0.49}\) & 27.31\({}_{\pm 2.20}\) \\ + ER & 95.21\({}_{\pm 0.28}\) & 90.99\({}_{\pm 0.51}\) & 83.45\({}_{\pm 0.44}\) & 96.72\({}_{\pm 0.13}\) & 96.04\({}_{\pm 0.17}\) & 94.26\({}_{\pm 0.78}\) & 86.93\({}_{\pm 0.82}\) \\ + ER + CR & 96.48\({}_{\pm 0.35}\) & 93.87\({}_{\pm 0.25}\) & 89.39\({}_{\pm 0.23}\) & 97.23\({}_{\pm 0.30}\) & 96.93\({}_{\pm 0.31}\) & 96.13\({}_{\pm 0.05}\) & 89.60\({}_{\pm 0.73}\) \\ \hline Bio-ANN & **96.82\({}_{\pm 0.14}\)** & **94.64\({}_{\pm 0.23}\)** & **91.32\({}_{\pm 0.26}\)** & **97.33\({}_{\pm 0.04}\)** & **97.07\({}_{\pm 0.05}\)** & **96.51\({}_{\pm 0.03}\)** & **89.90\({}_{\pm 0.24}\)** \\ \hline \hline \end{tabular} \end{table} Table 1: Effect of each component of the biologically plausible framework on different datasets with varying numbers of tasks. We first show the effect of utilizing feedforward neurons adhering to Dale’s principle in conjunction with _Active Dendrites_ to form the framework within which we evaluate the individual effect of the brain-inspired mechanisms (Hebbian Update, Heterogeneous Dropout (HD), Synaptic Consolidation (SC), Experience Replay (ER) and Consistency Regularization (CR)) before combining them all to forge Bio-ANN. For all experiments, we set the percentage of active neurons at 5%. We provide the average task performance and 1 std over five runs.
We also demonstrate performance gains with the brain-inspired mechanisms on top of standard ANNs in Table 6 in the Appendix.

### Sparse Activations Facilitate the Formation of Subnetworks

Neocortical circuits are characterized by high levels of sparsity in neural activations (Barth & Poulet, 2012; Graham & Field, 2006). There is further evidence suggesting that neuronal coding of natural sensory stimuli should be sparse (Barth & Poulet, 2012; Tolhurst et al., 2009). This is in stark contrast to the dense and highly entangled connectivity in standard ANNs. Particularly for CL, sparsity provides several advantages: sparse non-overlapping representations can reduce interference between tasks (Abbasi et al., 2022; Iyer et al., 2022; Aljundi et al., 2019) and can lead to the natural emergence of task-specific modules (Hadsell et al., 2020). We study the effect of different levels of activation sparsity by varying the ratio of active neurons in the k-winner-take-all (k-WTA) activations (Ahmad & Scheinkman, 2019). Each hidden layer of our model has a constant sparsity in its connections (randomly, 50% of the weights are set to 0 at initialization) and propagates only the top-k activations (the k-WTA layer in Figure 1). Table 2 shows that sparsity plays a critical role in enabling CL in DNNs. Sparsity in activations effectively reduces interference by reducing the overlap in representations. Interestingly, the stark difference in the effect of different levels of sparse activations on Rot-MNIST and Perm-MNIST highlights the importance of considering task similarity in the design of CL methods. As the tasks in Perm-MNIST are independent of each other, having fewer active neurons (5%) enables the network to learn non-overlapping representations for each task, while the high task similarity in Rot-MNIST can benefit from overlapping representations, which allow for the reusability of features across tasks. The number of tasks the agent has to learn also has an effect on the optimal sparsity level. In the Appendix, we show that having different levels of sparsity in different layers can further improve performance. As the earlier layers learn general features, having a higher ratio of active neurons there can enable higher reusability and forward transfer. For the later layers, a smaller ratio of active neurons can reduce interference between task-specific features.

\begin{table} \begin{tabular}{r|c c c c|c c c c} \hline \multirow{2}{*}{\#Tasks} & \multicolumn{4}{c|}{Rot-MNIST} & \multicolumn{4}{c}{Perm-MNIST} \\ \cline{2-9} & 0.05 & 0.10 & 0.20 & 0.50 & 0.05 & 0.10 & 0.20 & 0.50 \\ \hline 5 & \(92.28_{\pm 0.27}\) & \(92.26_{\pm 0.31}\) & \(\mathbf{92.79}_{\pm 0.44}\) & \(92.26_{\pm 0.65}\) & \(95.77_{\pm 0.33}\) & \(\mathbf{96.32}_{\pm 0.20}\) & \(90.29_{\pm 0.67}\) & \(74.51_{\pm 13.55}\) \\ 10 & \(70.78_{\pm 0.23}\) & \(71.95_{\pm 1.54}\) & \(\mathbf{73.32}_{\pm 0.69}\) & \(71.61_{\pm 0.76}\) & \(\mathbf{95.06}_{\pm 0.29}\) & \(93.45_{\pm 0.92}\) & \(72.68_{\pm 12.83}\) & \(41.33_{\pm 0.72}\) \\ 20 & \(\mathbf{48.79}_{\pm 0.27}\) & \(47.96_{\pm 1.84}\) & \(48.65_{\pm 0.91}\) & \(47.71_{\pm 0.91}\) & \(\mathbf{92.40}_{\pm 0.38}\) & \(84.28_{\pm 1.33}\) & \(63.84_{\pm 3.45}\) & \(20.80_{\pm 0.90}\) \\ \hline \end{tabular} \end{table} Table 2: Effect of different levels of sparsity in activations on the performance of the model. Columns show the ratio of active neurons (\(k\) in the k-WTA activation), and rows provide the number of tasks.
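As a concrete reference for this subsection, here is a minimal sketch of a sparsely connected hidden layer with k-WTA activations, assuming PyTorch; the initialization details are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseKWTALayer(nn.Module):
    """A linear layer with a fixed random 50% connection mask, followed by
    k-WTA that propagates only the top-k activations (the rest set to zero)."""
    def __init__(self, n_in, n_out, weight_sparsity=0.5, frac_active=0.05):
        super().__init__()
        self.lin = nn.Linear(n_in, n_out)
        self.register_buffer("mask", (torch.rand(n_out, n_in) > weight_sparsity).float())
        self.k = max(1, int(frac_active * n_out))

    def forward(self, x):
        # Mask out half of the connections, then keep only the top-k outputs.
        z = F.linear(x, self.lin.weight * self.mask, self.lin.bias)
        topk = torch.topk(z, self.k, dim=-1)
        return torch.zeros_like(z).scatter(-1, topk.indices, topk.values)
```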
### Heterogeneous Dropout for Non-overlapping Activations and Subnetworks

Information in the brain is encoded by the strong activation of a relatively small set of neurons that form a sparse code. A different subset of neurons is utilized to represent different types of stimuli (Graham & Field, 2006). Furthermore, there is evidence of non-overlapping representations in the brain. To mimic this, we employ Heterogeneous dropout (Abbasi et al., 2022), which, in conjunction with the context-dependent processing of information, can effectively reduce the overlap of representations, leading to less interference between tasks and, thereby, less forgetting. During training, we track the frequency of activations for each neuron in a layer for a given task, and in the subsequent tasks the probability of a neuron being dropped is inversely proportional to its activation count. This encourages the model to learn the new task using neurons that have been less active for previous tasks. Figure 1 shows that neurons that have been more active (darker shade) are more likely to be dropped before k-WTA is applied. Specifically, let \([a_{t}^{l}]_{j}\) denote the activation counter of neuron \(j\) in layer \(l\) after learning \(t\) tasks. When learning task \(t+1\), the probability that this neuron is retained is given by:

\[[p_{t+1}^{l}]_{j}=\exp\left(-\frac{[a_{t}^{l}]_{j}}{\max_{j}[a_{t}^{l}]_{j}}\,\rho\right) \tag{5}\]

where \(\rho\) controls the strength of the enforcement of non-overlapping representations, with larger values leading to less overlap. This provides us with an efficient mechanism for controlling the degree of overlap between the representations of different tasks and, hence, the degree of forward transfer and interference based on the task similarities. Table 3 shows that employing Heterogeneous dropout can further improve the performance of the model. We also analyze the effect of the \(\rho\) parameter on the activation counts and the overlap in the representations. Figure 2 shows that Heterogeneous dropout can facilitate the formation of task-specific subnetworks, and Figure 3 shows the symmetric KL-divergence between the distributions of activation counts on the test sets of Task 1 and Task 2 for models trained with different \(\rho\) values on Perm-MNIST with two tasks.

Figure 2: Total activation counts for the test set of each task (y-axis) for a random set of 25 units in the second hidden layer of the model. Heterogeneous dropout reduces the overlap in activations and facilitates the formation of task-specific subnetworks.
As we increase the \(\rho\) parameter, the activations in each layer become increasingly dissimilar. Heterogeneous dropout thus provides a simple mechanism for balancing the reusability of features against interference, depending on the similarity of tasks.

### Layerwise Heterogeneous Dropout and Task Similarity

For an effective CL agent, it is important to maintain a balance between forward transfer and interference across tasks. As the earlier layers learn general features, a higher portion of these features can be reused when learning a new task, which can facilitate forward transfer, whereas the later layers learn more task-specific features, which can cause interference. Heterogeneous dropout provides us with an efficient mechanism for controlling the degree of overlap between the activations, and hence the features, of each layer. Here, we investigate whether having different levels of sparsity (controlled with the \(\rho\) parameter) in different layers can further improve performance. As the earlier layers learn general features, a higher overlap (smaller \(\rho\)) between the sets of active neurons can enable higher reusability and forward transfer. For the later layers, less overlap between the activations (higher \(\rho\)) can reduce interference between task-specific features. To study the effect of Heterogeneous dropout in relation to task similarity, we vary the incremental rotation, \(\theta_{inc}\), in each subsequent task for the Rot-MNIST setting with 5 tasks; the rotation of task \(\tau\) is given by \((\tau-1)\theta_{inc}\). Table 4 shows the performance of the model for different layerwise \(\rho\) values. Generally, Heterogeneous dropout consistently improves the performance of the model, especially when the task similarity is low. For \(\theta_{inc}=32\), it provides a \(\sim\)25% improvement. As task similarity decreases (\(\theta_{inc}\) increases), higher values of \(\rho\) are more effective. Furthermore, we see that having different \(\rho\) values for each layer can provide additional gains in performance.

\begin{table} \begin{tabular}{l|c|c|c c c c c} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{\# Tasks} & \multirow{2}{*}{w/o Dropout} & \multicolumn{5}{c}{Dropout parameter (\(\rho\))} \\ \cline{4-8} & & & 0.1 & 0.3 & 0.5 & 0.7 & 1.0 \\ \hline \multirow{3}{*}{Rot-MNIST} & 5 & 92.28\({}_{\pm 0.20}\) & 91.79\({}_{\pm 0.53}\) & 92.53\({}_{\pm 0.11}\) & 92.74\({}_{\pm 0.38}\) & 93.19\({}_{\pm 0.32}\) & **93.42\({}_{\pm 0.25}\)** \\ & 10 & 70.78\({}_{\pm 0.23}\) & 71.53\({}_{\pm 1.07}\) & 72.38\({}_{\pm 1.44}\) & 73.63\({}_{\pm 1.00}\) & 74.20\({}_{\pm 0.78}\) & **75.50\({}_{\pm 0.74}\)** \\ & 20 & 48.79\({}_{\pm 0.27}\) & 48.57\({}_{\pm 0.90}\) & 48.91\({}_{\pm 0.65}\) & 49.84\({}_{\pm 0.59}\) & 51.03\({}_{\pm 0.31}\) & **51.11\({}_{\pm 0.76}\)** \\ \hline \multirow{3}{*}{Perm-MNIST} & 5 & 95.77\({}_{\pm 0.33}\) & 95.70\({}_{\pm 0.29}\) & 95.97\({}_{\pm 0.44}\) & 96.40\({}_{\pm 0.28}\) & **96.58\({}_{\pm 0.17}\)** & 96.48\({}_{\pm 0.26}\) \\ & 10 & 95.06\({}_{\pm 0.29}\) & 95.23\({}_{\pm 0.04}\) & 95.65\({}_{\pm 0.20}\) & 95.54\({}_{\pm 0.26}\) & 95.74\({}_{\pm 0.22}\) & **95.94\({}_{\pm 0.24}\)** \\ & 20 & 92.40\({}_{\pm 0.38}\) & 92.83\({}_{\pm 0.42}\) & **93.20\({}_{\pm 0.32}\)** & 92.82\({}_{\pm 0.06}\) & 93.09\({}_{\pm 0.47}\) & 91.77\({}_{\pm 0.30}\) \\ \hline \end{tabular} \end{table} Table 3: Effect of Heterogeneous dropout with increasing \(\rho\) values on different datasets with a varying number of tasks.
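A minimal sketch of the retention rule in Eq. (5), assuming PyTorch; the guard against a zero maximum before any task has been learned is an added assumption:

```python
import torch

def heterogeneous_dropout_mask(act_counts, rho):
    """Eq. (5): a unit's retention probability decays with how often it was
    active on previous tasks; larger rho enforces less overlap across tasks.
    act_counts: float tensor of per-unit activation counts for one layer."""
    denom = act_counts.max().clamp(min=1.0)      # avoid 0/0 before task 1
    p_keep = torch.exp(-act_counts / denom * rho)
    return torch.bernoulli(p_keep)               # 1 = keep the unit, 0 = drop it
```

During task \(t+1\), the mask is sampled from the counts accumulated up to task \(t\) and applied before the k-WTA step; using a different \(\rho\) per layer reproduces the layerwise variant discussed above.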
### Hebbian Learning Strengthens Context Gating

For a biologically plausible ANN, it is important to incorporate not only the design elements of biological neurons but also the learning mechanisms the brain employs. Lifetime plasticity in the brain generally follows the Hebbian principle: a neuron that consistently contributes to the firing of another neuron will build a stronger connection to that neuron (Hebb, 2005). Therefore, we follow the approach in Flesch et al. (2023) and complement error-based learning with a Hebbian update to strengthen the connections between the contextual information and the dendritic segments (Figure 1(b)). Each supervised parameter update with backpropagation is followed by a Hebbian update step on the dendritic segments to strengthen the connection between the context input and the corresponding dendritic segment that is activated. To constrain the parameters, we use Oja's rule, which adds weight decay to Hebbian learning (Oja, 1982),

\[u_{\kappa}\gets u_{\kappa}+\eta_{h}d(c-du_{\kappa}) \tag{6}\]

where \(\eta_{h}\) is the learning rate, \(\kappa\) is the index of the winning dendrite with weight \(u_{\kappa}\), and \(d=u_{\kappa}^{T}c\) is the modulating signal for the context signal \(c\). Figure 4 shows that the Hebbian update step increases the magnitude of the modulating signal from the dendrites on the feedforward activity, which can further strengthen context-dependent gating and facilitate the formation of task-specific subnetworks. Table 1 shows that this can consequently lead to improved results. Though the gains are not considerable in these settings, Table 5 shows that the gains with Hebbian learning are more pronounced in challenging CL settings with higher task and dataset complexity.
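A minimal sketch of the Oja-rule step in Eq. (6) for a single excitatory unit, assuming PyTorch; the learning-rate value is illustrative:

```python
import torch

@torch.no_grad()
def oja_dendrite_update(U, c, eta_h=0.01):
    """Eq. (6): after the backprop step, pull the winning dendritic segment
    towards the context c, with Oja-style decay constraining its norm.
    U: [n_segments, d] dendritic weights of one unit; c: [d] context."""
    resp = U @ c                   # u_j^T c for each segment of this unit
    kappa = resp.abs().argmax()    # winning segment (sign retained)
    d = resp[kappa]                # modulating signal d = u_k^T c
    U[kappa] += eta_h * d * (c - d * U[kappa])
    return U
```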
### Synaptic Consolidation Further Mitigates Forgetting

In addition to their integrative properties, dendrites also play a key role in retaining information and providing protection against erasure (Cichon and Gan, 2015; Yang et al., 2009). New spines that are formed on different sets of dendritic branches in response to learning different tasks are protected from elimination through the mediation of synaptic plasticity and structural changes that persist when learning a new task (Yang et al., 2009). We employ synaptic consolidation by incorporating _Synaptic Intelligence_ (Zenke et al., 2017) (details in the Appendix), which maintains an importance estimate of each synapse in an online manner during training and subsequently reduces the plasticity of synapses deemed important for learned tasks. In particular, we adjust the importance estimates to account for the disparity in the degree to which updates to different parameters affect the layer's output, due to the inhibitory interneuron architecture of the DANN layers (Cornford et al., 2020). The importance estimates of the excitatory connections to the inhibitory units and of the intra-layer inhibitory connections are upscaled to further penalize changes to these weights. Table 1 shows that employing Synaptic Intelligence (+SC) in this manner further mitigates forgetting. Particularly for Rot-MNIST with 20 tasks, it provides a considerable performance improvement.

### Experience Replay is Essential for Enabling CL in Challenging Scenarios

Replay of past neural activation patterns in the brain is considered to play a critical role in memory formation, consolidation, and retrieval (Walker and Stickgold, 2004; McClelland et al., 1995). The replay mechanism in the hippocampus (Kumaran et al., 2016) has inspired a series of rehearsal-based approaches (Li and Hoiem, 2017; Chaudhry et al., 2019; Lopez-Paz and Ranzato, 2017; Arani et al., 2022) that have proven effective in challenging continual learning scenarios (Farquhar and Gal, 2018; Hadsell et al., 2020). Therefore, to replay samples from previous tasks, we utilize a small episodic memory buffer that is maintained through _Reservoir sampling_ (Vitter, 1985). It attempts to approximately match the distribution of the incoming stream by assigning each new sample an equal probability of being represented in the buffer. During training, samples from the current task, \((x_{b},y_{b})\sim\mathcal{D}_{\tau}\), are interleaved with memory buffer samples, \((x_{m},y_{m})\sim\mathcal{M}\), to approximate the joint distribution of the tasks seen so far. Furthermore, to mimic the replay of the activation patterns that accompanied the learning event in the brain, we also save the output logits, \(z_{m}\), across the training trajectory and enforce a consistency loss when replaying samples from the episodic memory. Concretely, the loss is given by:

\[\mathcal{L}=\mathcal{L}_{cls}(f(x_{b};\theta),y_{b})+\alpha\mathcal{L}_{cls}(f (x_{m};\theta),y_{m})+\beta(f(x_{m};\theta)-z_{m})^{2} \tag{7}\]

where \(f(.;\theta)\) is the model parameterized by \(\theta\), \(\mathcal{L}_{cls}\) is the standard cross-entropy loss, and \(\alpha\) and \(\beta\) control the strength of the interleaved training and of the consistency constraint, respectively. Table 1 shows that experience replay (+ER) complements context-dependent information processing and enables the model to learn the joint distribution well in a variety of challenging settings. In particular, the failure of the model to avoid forgetting in the Class-IL setting (Seq-MNIST) without experience replay suggests that context-dependent processing of information alone does not suffice, and that experience replay might be essential. Adding consistency regularization (+CR) further improves performance, as the model receives additional relational information about the structural similarity of classes, which facilitates the consolidation of information.
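A sketch of the buffer update and of the training objective in Eq. (7), assuming PyTorch; function and variable names are illustrative:

```python
import random
import torch.nn.functional as F

def reservoir_update(buffer, item, capacity, n_seen):
    # Reservoir sampling: every stream sample has an equal probability
    # of being represented in the fixed-size buffer.
    if len(buffer) < capacity:
        buffer.append(item)
    else:
        j = random.randint(0, n_seen)
        if j < capacity:
            buffer[j] = item

def replay_loss(model, xb, yb, xm, ym, zm, alpha, beta):
    # Eq. (7): current-task loss + interleaved replay loss + consistency
    # to the logits z_m stored when the memory samples were first learned.
    out_b, out_m = model(xb), model(xm)
    return (F.cross_entropy(out_b, yb)
            + alpha * F.cross_entropy(out_m, ym)
            + beta * F.mse_loss(out_m, zm))
```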
### Combining the individual components

Having shown the individual effect of each of the brain-inspired components in the biologically plausible framework, we now look at their combined effect. The resultant model is referred to as Bio-ANN. Table 1 shows that the different components complement each other and consistently improve the performance of the model. Our empirical results suggest that employing multiple complementary components and learning mechanisms, similar to the brain, may be an effective approach to enable continual learning in ANNs.

### Additional Results on Challenging CL settings

To further evaluate the versatility of the biological components in more challenging settings, we conducted experiments on Fashion-MNIST and a grayscale version of CIFAR10. We considered both the Class-IL and Domain-IL settings. Seq-FMNIST, Seq-MNIST, and Seq-GCIFAR10 divide the classification into 5 tasks with 2 classes each, while Rot-FMNIST involves 20 tasks, each requiring classification of the 10 classes with the samples rotated in increments of 8 degrees. For brevity, we refer to Active Dendrites + Dale's principle as ActiveDANN. To show the effect of the different components better (ActiveDANN without ER fails in the Class-IL setting), we consider ActiveDANN + ER as the baseline upon which we add the other components. Empirical results in Table 5 show that the findings on the MNIST settings also translate to more challenging datasets, and that each component leads to a performance improvement. In particular, we observe that for more complex datasets, Hebbian learning provides a significant performance improvement. These preliminary results suggest that the effect of the biological mechanisms and architecture might be more pronounced on more complex datasets and CL settings.

\begin{table} \begin{tabular}{l|c|c c c} \hline \hline \multirow{2}{*}{Method} & Domain-IL & \multicolumn{3}{c}{Class-IL} \\ \cline{2-5} & Rot-FMNIST & Seq-FMNIST & Seq-MNIST & Seq-GCIFAR10 \\ \hline Joint & 98.15\({}_{\pm 0.09}\) & 94.33\({}_{\pm 0.51}\) & 94.53\({}_{\pm 0.53}\) & 36.10\({}_{\pm 0.86}\) \\ SGD & 51.89\({}_{\pm 0.27}\) & 19.83\({}_{\pm 0.04}\) & 19.83\({}_{\pm 0.04}\) & 14.86\({}_{\pm 1.06}\) \\ \hline ActiveDANN & 49.42\({}_{\pm 0.83}\) & 21.46\({}_{\pm 1.95}\) & 19.97\({}_{\pm 0.29}\) & 16.12\({}_{\pm 0.31}\) \\ ActiveDANN + ER & 80.99\({}_{\pm 0.53}\) & 77.56\({}_{\pm 0.27}\) & 86.88\({}_{\pm 0.83}\) & 28.74\({}_{\pm 0.42}\) \\ \hline + Hebbian Update & 82.16\({}_{\pm 0.26}\) & 78.02\({}_{\pm 0.38}\) & 88.39\({}_{\pm 0.78}\) & 30.12\({}_{\pm 0.49}\) \\ + SC & 82.55\({}_{\pm 0.37}\) & 78.05\({}_{\pm 0.61}\) & 88.79\({}_{\pm 0.49}\) & 30.45\({}_{\pm 0.61}\) \\ + HD & 83.97\({}_{\pm 0.46}\) & 78.74\({}_{\pm 0.38}\) & 89.60\({}_{\pm 0.73}\) & 30.29\({}_{\pm 0.51}\) \\ \hline Bio-ANN & **89.22\({}_{\pm 0.21}\)** & **79.28\({}_{\pm 0.42}\)** & **89.90\({}_{\pm 0.24}\)** & **30.95\({}_{\pm 0.17}\)** \\ \hline \hline \end{tabular} \end{table} Table 5: Effect of each component of the biologically plausible framework on different Domain-IL and Class-IL settings. For all experiments, we use a memory budget of 500 samples. We provide the average task performance and 1 std over 5 runs.

## 5 Discussion

Continual learning is a hallmark of intelligence, and the human brain constitutes the most efficient learning agent capable of CL. Therefore, incorporating the different components and mechanisms employed in the brain, and studying their interactions, can provide valuable insights for the design of ANNs suitable for CL. While there are several studies that are inspired by the brain, they focus primarily on one aspect. Since the brain employs all of these different components in tandem, it stands to reason that their interactions, or complementary nature, are what enables effective learning, rather than any one component alone. Furthermore, the underlying framework within which these components are employed, and the learning mechanisms, might also be critical. The effort to close the gap between current AI and human intelligence could benefit from our enhanced understanding of the brain and from incorporating similar mechanisms in ANNs. This is the fundamental question we aimed to study and bring to the attention of the research community at large. We conducted a study on the effect of different brain-inspired mechanisms under a biologically plausible framework in the CL setting.
The underlying model incorporates several key components of the design principles and learning mechanisms in the brain: each layer comprises separate populations of exclusively excitatory and inhibitory units, adhering to Dale's principle, and the excitatory pyramidal neurons are augmented with dendritic segments for context-dependent processing of information. We first showed that, equipped with the integrative properties of dendrites, a feedforward network adhering to Dale's principle performs on par with standard ANNs and provides considerable performance gains in cases where the quality of the context signal is high. We then studied the individual role of the different components. We showed that controlling the sparsity in activations using k-WTA activations and the Heterogeneous dropout mechanism, which encourages the model to use a different set of neurons for each task, is an effective approach for maintaining a balance between the reusability of features and interference, which is critical for enabling CL. We further showed that complementing error-based learning with the "fire together, wire together" learning paradigm can further strengthen the association between the context signal and the dendritic segments that process it, and can facilitate context-dependent gating. To further mitigate forgetting, we incorporated synaptic consolidation in conjunction with experience replay and showed their effectiveness in challenging CL settings. Finally, the combined effect of these components suggests that, similar to the brain, employing multiple complementary mechanisms in a biologically plausible architecture is an effective approach to enable CL in ANNs. It also provides a framework for further study of the role of inhibition in mitigating catastrophic forgetting. However, there are several limitations and potential avenues for future research. In particular, as dendritic segments provide an effective mechanism for studying the effect of encoding different information in the context signal, they offer an interesting research avenue as to what information is useful for the sequential learning of tasks and what the effect of different context signals is. Neuroscience studies suggest that multiple brain regions are involved in processing a stimulus and, while there is evidence that active dendritic segments receive contextual information that differs from the input received by the proximal segments, it is unclear what information is encoded in the contextual signal and how it is extracted. Here, we used the context signal as in Iyer et al. (2022), which aims to encode the identity of the task by taking the average input image over all the samples in the task. Although this approach empirically works well in the Perm-MNIST setting, it is important to consider its utility and limitations under different CL settings. Given the specific design of Perm-MNIST, with binary, centered digits and the independent nature of the permutations in each task, the average input image can provide a good approximation of the applied permutation and hence efficiently encode the task identity. However, this is not straightforward for Rot-MNIST, where the task similarities are higher, and it is even more challenging for natural images, where averaging the input images does not provide a meaningful signal. More importantly, it does not seem biologically plausible to encode task information alone as the context signal and ignore the similarity of classes occurring in different tasks.
For instance, it seems more reasonable to process slight rotations of the same digits similarly (as in Rot-MNIST) rather than processing them through different subnetworks. This argument is supported by the performance degradation in the Rot-MNIST setting with active dendrites compared to standard ANNs. Ideally, we would like the context signals for different rotations of a digit to be highly similar. It is, however, quite challenging to design context signals that can capture a wide range of complexities in the sequential learning of tasks. Furthermore, instead of hand-engineering the context signal to bias learning towards certain types of tasks, an effective approach for learning the context signal in an end-to-end training framework is an interesting direction for future research. Another limitation of our framework is the use of backpropagation, which is widely criticized as biologically implausible, as it requires symmetric weight matrices in the feedforward and feedback pathways (Xiao et al.) and requires each neuron to have access to, and contribute towards optimizing, a single global objective function. The synapses in the brain, on the other hand, are unidirectional, and feedforward and feedback connections are physically distinct. The brain is also believed to perform localized learning. An interesting avenue for future research could be to extend our framework with more biologically plausible learning rules and to study how the learning rule affects the interactions between the biologically plausible mechanisms. Future studies could also examine the role of more plausible organizations of neurons than full connectivity, for instance, the incorporation of population coding. In general, our study presents a compelling case for incorporating the design principles and learning machinery of the brain into ANNs, and lends credence to the argument that distilling the details of the learning machinery of the brain can bring us closer to human intelligence (Hassabis et al., 2017; Hayes et al., 2021). Furthermore, deep learning is increasingly being used in neuroscience research to model and analyze brain data (Richards et al., 2019). The utility of a model for such research depends on two critical aspects: the performance of the model and how close its architecture is to the brain (Cornford et al., 2020; Schrimpf et al., 2020). The biologically plausible framework in our study incorporates several design components and learning mechanisms of the brain and performs well in a (continual learning) task that is closer to human learning. Therefore, we believe that this work may also be useful to the neuroscience community in evaluating and guiding computational neuroscience research. Studying the properties of ANNs with higher similarity to the brain may provide insight into the mechanisms of brain functions. We believe that the fields of artificial intelligence and neuroscience are intricately intertwined, and that progress in one can drive the other as well.
2305.04560
Building Neural Networks on Matrix Manifolds: A Gyrovector Space Approach
Matrix manifolds, such as manifolds of Symmetric Positive Definite (SPD) matrices and Grassmann manifolds, appear in many applications. Recently, by applying the theory of gyrogroups and gyrovector spaces that is a powerful framework for studying hyperbolic geometry, some works have attempted to build principled generalizations of Euclidean neural networks on matrix manifolds. However, due to the lack of many concepts in gyrovector spaces for the considered manifolds, e.g., the inner product and gyroangles, techniques and mathematical tools provided by these works are still limited compared to those developed for studying hyperbolic geometry. In this paper, we generalize some notions in gyrovector spaces for SPD and Grassmann manifolds, and propose new models and layers for building neural networks on these manifolds. We show the effectiveness of our approach in two applications, i.e., human action recognition and knowledge graph completion.
Xuan Son Nguyen, Shuo Yang
2023-05-08T09:10:11Z
http://arxiv.org/abs/2305.04560v3
# Building Neural Networks on Matrix Manifolds: A Gyrovector Space Approach

###### Abstract

Matrix manifolds, such as manifolds of Symmetric Positive Definite (SPD) matrices and Grassmann manifolds, appear in many applications. Recently, by applying the theory of gyrogroups and gyrovector spaces that is a powerful framework for studying hyperbolic geometry, some works have attempted to build principled generalizations of Euclidean neural networks on matrix manifolds. However, due to the lack of many concepts in gyrovector spaces for the considered manifolds, e.g., the inner product and gyroangles, techniques and mathematical tools provided by these works are still limited compared to those developed for studying hyperbolic geometry. In this paper, we generalize some notions in gyrovector spaces for SPD and Grassmann manifolds, and propose new models and layers for building neural networks on these manifolds. We show the effectiveness of our approach in two applications, i.e., human action recognition and knowledge graph completion.
Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, 
Variational Learning, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational Learning, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational Learning, Variational, Variational Learning, Variational Learning, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational,, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational, Variational Learning, 
Variational, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational,, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational,, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational Learning, Variational,, Variational Learning, Variational, Variational Learning, Variational,, Variational, Variational Learning, Variational,, Variational Learning, Variational, Variational, Variational Learning, Variational, Variational, Variational Learning, Variational,, Variational Learning, Variational, Variational,, Variational Learning, Variational,, Variational Learning, Variational,, Variational Learning, Variational,, Variational Learning, Variational,, Variational Learning, Variational, Variational,, Variational Learning, Variational,, Variational Learning, Variational, Variational,, Variational Learning, Variational,, Variational Learning, Variational,, Variational Learning, Variational,, Variational Learning, Variational,, Variational,, Variational Learning, Variational,, Variational Learning,, Variational,, Variational Learning,, Variational,, Variational Learning, Variational,, Variational,, Variational Learning, Variational,, Variational,, Variational Learning,, Variational,, Variational Learning, Variational,, Variational Learning, Variational,, Variational Learning, Variational,, Variational Learning, Variational,, Variational,, Variational Learning, Variational,,, Variational Learning,, Variational,, Variational Learning, Variational,,, Variational Learning, Variational,, Variational,,, Variational Learning,,, Variational Learning,,,, Variational Learning
2305.18607
How Effective Are Neural Networks for Fixing Security Vulnerabilities
Security vulnerability repair is a difficult task that is in dire need of automation. Two groups of techniques have shown promise: (1) large code language models (LLMs) that have been pre-trained on source code for tasks such as code completion, and (2) automated program repair (APR) techniques that use deep learning (DL) models to automatically fix software bugs. This paper is the first to study and compare the Java vulnerability repair capabilities of LLMs and DL-based APR models. The contributions include that we (1) apply and evaluate five LLMs (Codex, CodeGen, CodeT5, PLBART and InCoder), four fine-tuned LLMs, and four DL-based APR techniques on two real-world Java vulnerability benchmarks (Vul4J and VJBench), (2) design code transformations to address the training and test data overlapping threat to Codex, (3) create a new Java vulnerability repair benchmark, VJBench, and its transformed version, VJBench-trans, and (4) evaluate LLMs and APR techniques on the transformed vulnerabilities in VJBench-trans. Our findings include that (1) existing LLMs and APR models fix very few Java vulnerabilities. Codex fixes 10.2 (20.4%) vulnerabilities, the most of any technique. (2) Fine-tuning with general APR data improves LLMs' vulnerability-fixing capabilities. (3) Our new VJBench reveals that LLMs and APR models fail to fix many Common Weakness Enumeration (CWE) types, such as CWE-325 Missing cryptographic step and CWE-444 HTTP request smuggling. (4) Codex still fixes 8.3 transformed vulnerabilities, outperforming all the other LLMs and APR models on transformed vulnerabilities. The results call for innovations to enhance automated Java vulnerability repair such as creating larger vulnerability repair training data, tuning LLMs with such data, and applying code simplification transformation to facilitate vulnerability repair.
Yi Wu, Nan Jiang, Hung Viet Pham, Thibaud Lutellier, Jordan Davis, Lin Tan, Petr Babkin, Sameena Shah
2023-05-29T20:50:27Z
http://arxiv.org/abs/2305.18607v2
# How Effective Are Neural Networks for Fixing Security Vulnerabilities

###### Abstract.

Security vulnerability repair is a difficult task that is in dire need of automation. Two groups of techniques have shown promise: (1) large code language models (LLMs) that have been pre-trained on source code for tasks such as code completion, and (2) automated program repair (APR) techniques that use deep learning (DL) models to automatically fix software bugs. This paper is the first to study and compare the Java vulnerability repair capabilities of LLMs and DL-based APR models. The contributions include that we (1) apply and evaluate five LLMs (Codex, CodeGen, CodeT5, PLBART and InCoder), four fine-tuned LLMs, and four DL-based APR techniques on two real-world Java vulnerability benchmarks (Vul4J and VJBench), (2) design code transformations to address the training and test data overlapping threat to Codex, (3) create a new Java vulnerability repair benchmark, VJBench, and its transformed version, VJBench-trans, to better evaluate LLMs and APR techniques, and (4) evaluate LLMs and APR techniques on the transformed vulnerabilities in VJBench-trans. Our findings include that (1) existing LLMs and APR models fix very few Java vulnerabilities. Codex fixes 10.2 (20.4%) vulnerabilities, the most of any technique. Many of the generated patches are uncompilable. (2) Fine-tuning with general APR data improves LLMs' vulnerability-fixing capabilities. (3) Our new VJBench reveals that LLMs and APR models fail to fix many Common Weakness Enumeration (CWE) types, such as CWE-325 Missing cryptographic step and CWE-444 HTTP request smuggling. (4) Codex still fixes 8.3 transformed vulnerabilities, outperforming all the other LLMs and APR models on transformed vulnerabilities. The results call for innovations to enhance automated Java vulnerability repair such as creating larger vulnerability repair training data, tuning LLMs with such data, and applying code simplification transformation to facilitate vulnerability repair.

Automated Program Repair, Large Language Model, Vulnerability, AI and Software Engineering

+ Footnote †: This work was done when Hung Viet Pham and Thibaud Lutellier were at the University of Waterloo.

The fix was released more than one month later. As a result, there is a need for faster vulnerability-fixing solutions. Most vulnerability benchmarks and vulnerability repair solutions focus either on C/C++ (Kumar et al., 2017; Kumar et al., 2018; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019) or binaries (Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019). There is a lack of solutions and benchmarks for Java, despite it being a widely-used programming language (the third most popular language in the open-source community (Kumar et al., 2019)) with _many severe vulnerabilities_. Java has been used to implement important servers, including web servers and services (e.g., Tomcat, Spring, CXF, Log4j), which are especially vulnerable to attackers. Consequently, many of the most critical vulnerabilities are in Java software. For example, Google assessed that the Log4Shell vulnerability in the Log4j package affected 17,000 Maven projects (Maven, 2017), and Microsoft even reported that nation-state attackers exploited the vulnerability (Kumar et al., 2019). Benchmarks and solutions for other programming languages often do not work or work poorly for fixing Java vulnerabilities.
For example, the most common vulnerabilities in C/C++ are buffer overflows (Kumar et al., 2019; Kumar et al., 2019). Java, as a type-safe language, is designed to avoid buffer overflows. Thus, most C/C++ techniques focusing on buffer overflow vulnerabilities are irrelevant to Java. We need new benchmarks and techniques for fixing Java security vulnerabilities.

Instead of building a technique to fix Java vulnerabilities automatically, we study and compare the space and feasibility of applying two types of techniques--learning-based automated program repair and LLMs--to fix Java security vulnerabilities automatically.

First, learning-based program repair has gained popularity (Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019). These encoder-decoder approaches learn from a large number of pairs of bugs and their fixes (in open-source projects) to fix unseen Java software bugs automatically. _It would be interesting to study how effective such learning-based program repair models are in fixing a subset of software bugs, i.e., software vulnerabilities_.

Secondly, LLMs have recently been applied to source code (Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019) and are pre-trained models that have been trained on a tremendous amount of source code (e.g., the entirety of GitHub). Different from APR models, pre-trained LLMs learn from a large corpus of source code (instead of pairs of bugs and their fixes) for various tasks such as identifier tagging and code completion. Despite learning to perform tasks different from repairing, a recent study (Kumar et al., 2019; Kumar et al., 2019) shows that pre-trained LLMs have competitive capabilities for fixing general Java bugs (Kumar et al., 2019; Kumar et al., 2019). _It would be interesting to study how effective such LLMs are for a different task, i.e., fixing software vulnerabilities, when they do not see how bugs are fixed._

Thirdly, it would be interesting to compare deep learning (DL)-based APR techniques' and LLMs' capabilities of fixing Java vulnerabilities. DL-based APR techniques and LLMs represent two angles of applying models to a different task. Applying DL-based APR techniques to fix vulnerabilities is using models learned from a general dataset for a specific subset of that dataset (software vulnerabilities are a type of software bug). Applying LLMs to fix vulnerabilities is using models learned from one format of data (sequences of code) for another format (pairs of buggy and fixed code). Since LLMs do not require pairs of bugs and their fixes, LLMs are typically built from data that is orders of magnitude larger than the training data used to train APR models. _Would more data win or data-format matching win?_

Lastly, pre-trained LLMs are often fine-tuned to adapt to different downstream tasks (Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019; Kumar et al., 2019). A recent study (Kumar et al., 2019) shows that fine-tuning improves LLMs' fixing capabilities by at least 31%. However, given the lack of Java vulnerability data, it is unrealistic to fine-tune LLMs for fixing Java vulnerabilities. Thus, _it would be interesting to study how effective LLMs fine-tuned with general APR data are in fixing software vulnerabilities_.
And when compared with DL-based APR techniques, _would more data plus fine-tuning win or data-format matching win?_

### Our Approach

We conduct the first study to evaluate and compare APR techniques' and LLMs' abilities to fix Java vulnerabilities. We evaluate five LLMs (Codex (Codex, 2017), CodeT5 (Kumar et al., 2019), CodeGen (Kumar et al., 2019), PLBART (Maven, 2017) and InCoder (Kumar et al., 2019)), four LLMs that are fine-tuned with general APR data, and four APR techniques (CURE (Kumar et al., 2019), Recoder (Kumar et al., 2019), RewardRepair (Kumar et al., 2019), and KNOD (Kumar et al., 2019)) on two Java vulnerability benchmarks (Vul4J and a new VJBench that we create). There are two main challenges.

First, there are few benchmarks available for evaluating Java vulnerability repair tools. While Vul4J (Kumar et al., 2019) contains 79 reproducible Java vulnerabilities, they belong to only 25 CWEs, i.e., types of vulnerabilities. In addition, 60% of the CWEs in the dataset (15 types of vulnerabilities) are covered by only a single reproducible vulnerability. To address this challenge, we develop new benchmarks. We analyze the entire National Vulnerability Database (NVD) (Bartos et al., 2017) to identify reproducible real-world Java vulnerabilities that are suitable for vulnerability repair evaluation, and use these to create our VJBench benchmark. These vulnerabilities cover an additional twelve CWE types not included in the Vul4J dataset and add more vulnerabilities to four CWE types for which Vul4J has only one associated vulnerability. The new benchmark can facilitate the evaluation of future Java vulnerability repair techniques.

The second challenge arises from the fact that Codex was trained on a substantial code corpus collected from GitHub (Kumar et al., 2019) and the training dataset is unreleased. Since the projects in Vul4J and VJBench are public repositories on GitHub, one cannot be certain that the vulnerabilities in Vul4J and VJBench are not in Codex's training data. This is a major known threat to the validity of evaluation (Kumar et al., 2019; Kumar et al., 2019). While the dataset HumanEval (Kumar et al., 2019) is not in Codex's training data, it is for Python code completion and does not contain Java vulnerabilities. Creating new real-world benchmarks is not only expensive (Kumar et al., 2019; Kumar et al., 2019), but might also be impracticable if LLMs have been trained on all public datasets. Our best-effort solution to mitigate this challenge is to transform the vulnerable code in existing benchmarks. We use two types of code transformation: identifier renaming and code structure change. These transformations generate new equivalent programs that still retain the vulnerabilities but are not included in any open-source dataset that Codex and other LLMs may have seen. As a result, we create VJBench-trans, a benchmark of transformed vulnerabilities, by applying two transformation strategies to vulnerabilities from Vul4J and VJBench.

### Contributions

Our paper makes the following contributions:

* We conduct the first study that evaluates the fixing capabilities of five LLMs, four fine-tuned LLMs, and four APR techniques on real-world Java vulnerabilities from two benchmarks, Vul4J and our new VJBench. Our findings include:
  * Existing LLMs and APR techniques fix very few Java vulnerabilities. Codex fixes 10.2 (20.4%) vulnerabilities on average, exhibiting the best fixing capability. (Section 6.1)
  * Fine-tuning with general APR data improves LLMs' vulnerability-fixing capabilities. Fine-tuned InCoder fixes 9 vulnerabilities, exhibiting competitive fixing capability relative to Codex's. (Section 6.1)
  * Codex has the highest compilation rate of 79.7%. Other LLMs (fine-tuned or not) and APR techniques have low compilation rates (the lowest being 6.4% with CodeT5, and the rest between 24.5% and 65.2%), showing a lack of syntax domain knowledge. (Section 6.1)
  * LLMs and APR models, except Codex, only fix vulnerabilities that require simple changes, such as a single deletion or a variable/method replacement. (Section 6.2)
  * Our new VJBench reveals that LLMs and APR models fail to fix many CWE types, including CWE-172 Encoding error, CWE-325 Missing cryptographic step, CWE-444 HTTP request smuggling, CWE-668 Exposure of resource to wrong sphere, and CWE-1295 Debug messages revealing unnecessary information. (Section 6.2)
* We create two Java vulnerability benchmarks for automated program repair: (1) _VJBench_, which contains 42 reproducible real-world Java vulnerabilities that cover twelve new CWE types, and (2) _VJBench-trans_, which contains 150 transformed Java vulnerabilities.
* We use code transformations to mitigate the threat that LLMs and the black-box Codex may have seen the evaluated benchmarks.
* We evaluate LLMs' and APR techniques' fixing capabilities on transformed vulnerabilities (VJBench-trans).
  * Code transformations make LLMs and APR techniques fix fewer vulnerabilities. Some models, such as Codex and fine-tuned PLBART, are more robust to code transformations. On the other hand, some transformations make the vulnerabilities easier to fix. (Section 6.3)
* We provide implications and suggestions for future directions (Section 6).

## 2. New Benchmark of Java Vulnerabilities

A Java APR benchmark must contain reproducible Java vulnerabilities with test cases exposing the vulnerabilities. While there is an abundance of such benchmarks for Java bugs, including Defects4J (Velick et al., 2018), QuixBugs (QuixBugs, 2018), Bugs.jar (Searns et al., 2018), and Bears (Bears, 2018), the only Java vulnerability benchmark for APR is Vul4J (Searns et al., 2018). Vul4J contains 79 vulnerabilities from 51 projects covering 25 CWE types. Despite being a valuable first step, Vul4J offers limited coverage of CWE categories, as explained in the Introduction. In addition, only 35 of these vulnerabilities are applicable for evaluating state-of-the-art learning-based APR systems (Zhao et al., 2018; Wang et al., 2018; Wang et al., 2018), since these APR models only fix single-hunk bugs. Specifically, 39 of the 79 vulnerabilities are single-hunk. We can only reproduce 35 of the 39 vulnerabilities, as two bugs fail to compile and two bugs are not reproducible with the Docker container provided by the Vul4J authors.

To extend this benchmark, we collect Java vulnerabilities following prior work (Velick et al., 2018): i) the vulnerability should only be related to Java source code, ii) the fixing commit should contain at least one test case that passes on \(V_{fix}\) but fails on \(V_{bug}\), iii) the fixing patch should only include changes that fix the vulnerability and should not introduce unrelated changes such as features or refactoring, and iv) the vulnerability is not already in Vul4J.

We download all available vulnerability data in JSON format on May 13, 2022 from the NVD. We parse this data and obtain a list of 7,116 GitHub projects by collecting the reference URLs of these vulnerabilities.
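This collection step can be sketched roughly as below. The feed layout (CVE_Items with cve.references.reference_data) follows the pre-2023 NVD JSON 1.1 feeds; the file name and filtering details are illustrative assumptions, not the authors' actual script:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;
import java.util.HashSet;
import java.util.Set;

public class CollectGitHubRefs {
    public static void main(String[] args) throws Exception {
        // Parse one yearly NVD JSON 1.1 feed file (assumed name).
        JsonNode root = new ObjectMapper().readTree(new File("nvdcve-1.1-2022.json"));
        Set<String> projects = new HashSet<>();
        for (JsonNode item : root.get("CVE_Items")) {
            for (JsonNode ref : item.at("/cve/references/reference_data")) {
                JsonNode urlNode = ref.get("url");
                if (urlNode == null) continue;
                String url = urlNode.asText();
                // Keep only GitHub repositories: https://github.com/<owner>/<repo>/...
                if (url.startsWith("https://github.com/")) {
                    String[] parts = url.split("/");
                    if (parts.length >= 5) {
                        projects.add(parts[3] + "/" + parts[4]);
                    }
                }
            }
        }
        System.out.println(projects.size() + " candidate GitHub projects");
    }
}
```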
We exclude projects that have less than 50% of their code in Java, resulting in 400 Java projects containing 933 unique vulnerabilities. We then try to identify the fixing commits for each of the 933 vulnerabilities by manually checking the reference links provided in the vulnerability report, or by searching for the vulnerability ID in the GitHub repository if no link is provided. We find vulnerability-fixing commits for 698 vulnerabilities. Then we manually filter out 185 vulnerabilities whose fixing commits contain non-Java changes and 314 vulnerabilities that do not have test cases in their fixing commits. We now have 199 vulnerabilities, each with test cases and a corresponding Java-only fixing commit. We then successfully reproduce 42 Java vulnerabilities that are not included in Vul4J, using build tools such as Maven or Gradle. We end up with a dataset of **42 new reproducible real-world Java vulnerabilities** from thirty open-source projects. In detail, our dataset consists of _27 multi-hunk vulnerabilities_ from twenty-two projects and _15 single-hunk vulnerabilities_ from eleven projects.

As Figure 1 shows, these 42 vulnerabilities cover a total of 23 CWE types. Furthermore, our dataset introduces **12 new CWE types** (denoted by * in Figure 1) not included in Vul4J and supplements four CWE types (CWE-78, CWE-200, CWE-310, CWE-863) for which Vul4J only has one example. Table 1 describes the 15 new single-hunk vulnerabilities of twelve CWE types in our _VJBench_ benchmark. There are six new unique CWE types of vulnerabilities not present in Vul4J. As a result, there are 15 vulnerabilities from VJBench and 35 vulnerabilities from Vul4J, a total of **50 vulnerabilities** that we use in our study.

Figure 1. CWE Type Distribution of VJBench (* denotes the new CWE types not included in Vul4J).

## 3. Large Language Models and APR Techniques

### Large Language Models

We select five LLMs, i.e., Codex, PLBART, CodeT5, CodeGen and InCoder, because they are (1) state-of-the-art, (2) capable of performing code generation tasks without any modifications to the models or additional components (e.g., CodeBERT (Zhu et al., 2019) and GraphCodeBERT (Zhu et al., 2019) are excluded), and (3) trained with enough source code that they can understand code to some extent (e.g., we exclude T5 (Zhu et al., 2019), GPT-2 (Zhu et al., 2019), GPT-Neo (Zhu et al., 2019) and GPT-J (Zhu et al., 2019), whose training data is over 90% text). In this work, we study the LLMs in two settings: as is, and fine-tuned with general APR data.

#### 3.1.1. Large Language Models As Is

In this section, we introduce the details of the studied LLMs and how to use them for fixing vulnerabilities. Table 3 provides the model sizes and their training data information.

**Codex (Zhu et al., 2019):** Codex is a GPT-3-based (Zhu et al., 2019; Zhu et al., 2019) language model with 12B parameters trained on both natural language and source code. We use the davinci-002 model (as of July 2022), which is supposed to be the most accurate Codex model (Boward et al., 2018). We focus on Codex's insertion mode, as it provided the best results in our preliminary study among the three main modes: completion, insertion, and edit.

**CodeT5 (Zhu et al., 2019):** CodeT5 is an encoder-decoder transformer model (Zhu et al., 2019) pre-trained with an identifier-aware denoising objective and with bimodal dual generation tasks.
It is trained on a corpus of 5.2 million code functions and 8.3 million natural language sentences from open-source repositories in six programming languages, including Java. In this work, we use the largest CodeT5 model released, which has 770M parameters.

**CodeGen (Zhu et al., 2019):** CodeGen models are a series of autoregressive decoder-only transformers trained for conversational program synthesis. Their training data consists of 354.7B natural language tokens from the Pile dataset and 150.8B programming language tokens extracted from a subset of the Google BigQuery database. In this work, we apply the CodeGen model that contains 6B parameters (the larger model with 16B parameters is not used due to the limitations of our machine).

**PLBART (Zhu et al., 2019):** PLBART uses an encoder-decoder transformer architecture with an additional normalization layer on the encoder and decoder. It is pre-trained on functions extracted from Java and Python GitHub repositories via denoising autoencoding. Two PLBART models of different sizes are available, and we use the larger model containing 400M parameters.

**InCoder (Zhu et al., 2019):** InCoder models follow XGLM (Zhu et al., 2019)'s decoder-only architecture and are pre-trained on the masked span prediction task. Their pre-training data comes from open-source projects on GitHub and GitLab, and StackOverflow posts. There are two InCoder models of different sizes released, and we use the larger one, which contains 6B parameters.

**Input Formats:** Table 2 illustrates the input format we use for each model. For Codex, we adopt an input format similar to the one used in prior work (Zhu et al., 2019). The prompt includes the commented buggy code with the hint words "BUG:" and "FIXED:" to signify the location of the bug and to guide Codex towards generating a fixed version of the code. If the number of input tokens exceeds the maximum number for a model, we truncate the code and input the code around the buggy lines. Since it is unclear how the commented buggy-line prompts affect the models' fixing capabilities, we experiment with inputs with and without commented buggy lines for each model. Figure 2 shows an example of the input and expected output of Codex with the buggy lines commented by /* BUG ... FIXED */.
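Since Figure 2 is not reproduced here, the following sketch illustrates the insertion-mode prompt format on an invented buggy method (not taken from the benchmarks); the line that Codex is expected to insert is shown inline:

```java
public class PromptSketch {
    // Prefix prompt: from the start of the method through the commented buggy line.
    // Suffix prompt: everything after the comment. Codex fills in the gap between them.
    public static boolean isAdmin(String role) {
        /* BUG: if (role == "admin") {
           FIXED: */
        if ("admin".equals(role)) { // <- the patched line the model should generate
            return true;
        }
        return false;
    }
}
```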
#### 3.1.2. Fine-tuned Large Language Models

We also study the fixing capabilities of fine-tuned LLMs, since fine-tuning is a common technique to adapt a pre-trained LLM to a specific downstream task, such as code summarization or code translation (Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019).

Table 1. List of the 15 new single-hunk vulnerabilities categorized by their corresponding CWE. The vulnerability IDs are composed of the project name and a bug index. * denotes the six new CWE types that our benchmark adds compared to Vul4J. Jenkins-1 and Flow-2 both belong to two CWE categories.

| CWE | Description | Vulnerability IDs |
| --- | --- | --- |
| 20 | Improper input validation | Pulsar-1 |
| 22 | Improper limitation of a pathname to a restricted directory | Halo-1 |
| 74 | Improper neutralization of elements in output (injection) | Ratpack-1 |
| 79 | Cross-site scripting | Json-sanitizer-1 |
| 172* | Encoding error | Flow-1 |
| 200 | Exposure of sensitive information | Jenkins-1, Jenkins-2, Jenkins-3 |
| 325* | Missing cryptographic step | Jenkins-1 |
| 347* | Improper verification of cryptographic signature | BC-Java-1 |
| 444* | HTTP request smuggling | Netty-1, Netty-2 |
| 611 | Improper restriction of XML external entity reference | Quartz-1, Retrofit-1 |
| 668* | Exposure of resource to wrong sphere | Flow-2 |
| 1295* | Debug messages revealing unnecessary information | Flow-2 |
| unk | No specific CWE category | Jinjava-1 |

Table 2. Input Formats of Large Language Models

| Model | Input Format |
| --- | --- |
| Codex | Comment buggy lines (BL) with the hints "BUG:" and "FIXED:". Prefix prompt: beginning of the buggy function up to the BL comment. Suffix prompt: line after the BL comment to the end of the buggy function |
| CodeT5 | Mask buggy lines with `<extra_id_0>` and input the buggy function |
| CodeGen | Input the beginning of the buggy method up to the line before the buggy lines |
| PLBART | Mask buggy lines with `<mask>` and input the buggy function |
| InCoder | Mask buggy lines with `<mask>` and input the buggy function |
| Tuned LLMs | Comment buggy lines and input the buggy function |

Figure 2. An example input to Codex and its expected output

However, due to the lack of vulnerabilities as fine-tuning data, we use the LLMs fine-tuned with general APR data shared by existing work (Xiong et al., 2018). Prior work (Xiong et al., 2018) fine-tuned LLMs with a training dataset containing 143,666 instances collected from open-source GitHub Java projects (Xiong et al., 2018). Each data instance is a pair of buggy code and fixed code. In detail, (Xiong et al., 2018) used the Adam optimizer with a learning rate of \(1e^{-5}\), set the batch size to one, and fine-tuned for one epoch. The fine-tuned LLMs are expected to be adapted to the vulnerability-fixing task to some extent, due to the similarity between vulnerability fixing and general bug fixing. We perform a search and confirm that none of the vulnerabilities we study in this work is present in the APR training data used to fine-tune the LLMs. We cannot fine-tune Codex, since it does not offer any fine-tuning API and there is also no fine-tuned Codex available. The last row of Table 2 describes the input format for using fine-tuned LLMs, where the buggy lines are given as commented lines, and the entire function is input into the fine-tuned LLMs to generate the patched lines (Xiong et al., 2018).

### APR Techniques

We select four state-of-the-art learning-based APR techniques trained for Java bugs.
These APR techniques need to be open-sourced so that we can run them on our new vulnerability benchmarks.

**CURE** (Xiong et al., 2018) applies a small language model (pre-trained with 4.04M code instances) to CoCoNuT's (Xiong et al., 2018) encoder-decoder architecture to learn code syntax, and proposes a new code-aware strategy that removes invalid identifiers and increases the compilation rate during inference. CURE is trained with 2.72M APR instances.

**Recoder** (Xiong et al., 2018) uses a tree-based deep learning network that is trained on 82.87K APR training instances. It focuses on generating edits that modify buggy ASTs to form the patched ASTs.

**RewardRepair** (Xiong et al., 2018) includes compilation in the calculation of the model's loss function to increase the number of compilable (and correct) patches. This is different from CURE, as the loss function increases the number of compilable patches during training. Overall, RewardRepair is trained with 3.51M APR training instances.

**KNOD** (Xiong et al., 2018) proposes a novel three-stage tree decoder to generate the patched ASTs, and also uses domain-knowledge distillation to modify the loss function to let the models learn code syntax and semantics. KNOD is trained with 576K APR training instances and is the state-of-the-art DL-based APR technique.

## 4. Code Transformation

To address the challenge of training-testing data overlap, we need to create vulnerabilities and their fixes that have not been seen by existing LLMs or APR techniques. We generate unseen vulnerabilities by transforming existing vulnerabilities into semantically equivalent forms. None of the APR models and LLMs, including Codex, has seen the transformed buggy code or the corresponding fixes in its training set. We apply two categories of transformations to Vul4J and VJBench, which are described below:

**(1) Identifier Renaming:** To prevent LLMs and APR models from simply memorizing the exact correct patches associated with identifier names, we rename identifiers in the buggy code and the corresponding fixed code. All variables, functions, and classes defined in the project are renamed using synonyms of the original identifier names, in accordance with Java specifications. We use synonyms to keep the word meaning of the original identifiers. We do not rename identifiers from external libraries or the default Java class libraries, since one often cannot modify external libraries. Figure 3 shows an example of identifier renaming for Halo-1.

Figure 4. Function chaining for Vul4J-30

Figure 5. Function-argument passing for Halo-1

We first use the tool src2abs (Cordes et al., 2016) to extract all variable, function, and class names in the buggy function, and filter out those identifiers from Java or third-party libraries. We tokenize each identifier based on camel-case or snake-case conventions, then use NLTK WordNet (Chen et al., 2017) to generate synonyms for each word. After that, we reassemble these synonyms to form a complete identifier. We manually review and adjust the synonyms to ensure they fit the code context.
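In the spirit of Figure 3 (not reproduced here), the following hypothetical before/after pair illustrates the renaming; the identifiers are invented, not taken from Halo-1. Project-defined names change, while library identifiers such as String and length() stay fixed:

```java
// Before renaming: identifiers as originally written by the developers.
class Original {
    boolean checkFileName(String fileName) {
        return fileName != null && fileName.length() > 0;
    }
}

// After synonym-based renaming: check -> verify, file -> document, name -> title.
class Renamed {
    boolean verifyDocumentTitle(String documentTitle) {
        return documentTitle != null && documentTitle.length() > 0;
    }
}
```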
Table 3. Model size (number of parameters) and training data size of the five LLMs we apply and report in this work.

| | Codex | CodeT5 | CodeGen | PLBART | InCoder |
| --- | --- | --- | --- | --- | --- |
| #Parameters | 12B | 770M | 6B | 400M | 6B |
| Training data raw size, NL | 45.0TB | - | 1.1TB | 79.0GB | 57.0GB |
| Training data raw size, PL | 159.0GB | - | 436.3GB | 576.0GB | 159.0GB |
| Training data #tokens, NL | 499.0B | - | 354.7B | 6.7B | - |
| Training data #tokens, PL | 100.0B | - | 150.8B | 64.4B | - |
| Training data #instances, NL | - | 5.2M | - | 47.0M | - |
| Training data #instances, PL | - | 8.3M | - | 680.0M | - |

Since some APR techniques need to extract identifiers from the whole project, we rename the identifiers used in the buggy function across the entire project.

**(2) Code Structure Change:** We define six transformation rules to change code structures (a combined sketch of several of these rules appears at the end of this section):

* **If-condition flipping:** negates an if-condition and swaps the code blocks in the if and else branches.
* **Loop transformation:** converts a for loop to a while loop and vice versa.
* **Conditional-statement transformation:** turns a ternary expression (var = cond ? exprTrue : exprFalse;) into an if-else statement (if (cond) {var = exprTrue;} else {var = exprFalse;}), transforms a switch statement into multiple if and else-if statements, and vice versa.
* **Function chaining:** merges multiple function invocations into one call chain, or conversely splits a function call chain into separate function invocations. Figure 4 shows an example where value.getClass().equals(...); is split into Class value_class = value.getClass(); and value_class.equals(...);.
* **Function-argument passing:** if a locally defined variable or object is only used as a function argument, we replace the function argument with its definition statement, or we extract the function call that is passed as a function argument into a separate variable/object definition. Figure 5 shows an example where the argument parentPath.normalize() is extracted and declared as a local object normalizedParentPath.
* **Code-order change:** alters the order of statements if changing the order does not affect the execution results. For example, funcA(); int n = 0; can be transformed into int n = 0; funcA(); as invoking funcA() and declaring int n do not affect each other.

For code structure change, we manually transform the buggy function. For each buggy function, we apply all applicable transformations at once. We further confirm the equivalence of each transformed bug by reproducing it using the same test set and applying semantically equivalent patches to pass the tests.

**A new benchmark (VJBench-trans):** In summary, to create bugs and patches that LLMs have not seen in their training set, we apply three sets of transformations (identifier renaming only, code structure change only, and both at the same time) to VJBench and Vul4J, and create _VJBench-trans_, which contains 3 x 50 = 150 transformed Java vulnerabilities. We search GitHub and Google for the transformed code, and find no public code that is the same as the transformed buggy functions.

**Recover patches for evaluation:** The transformed code is still realistic and human-readable. However, for ease of evaluating the correctness of plausible patches, we maintain a dictionary that stores the mapping between the renamed identifiers and their original names. For each vulnerability, we also write a patched program for its code-structure-transformed version, providing a reference for future dataset users.
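As promised above, here is a combined sketch of three of the six rules applied to an invented snippet (not a benchmark vulnerability); the two methods are semantically equivalent:

```java
class StructureChangeSketch {
    // Before: a ternary expression and a function call chain.
    int before(String s, int fallback) {
        int n = (s != null) ? s.trim().length() : fallback;
        return n;
    }

    // After: conditional-statement transformation (ternary -> if-else),
    // if-condition flipping (negated condition, swapped branches),
    // and the call chain split into separate invocations.
    int after(String s, int fallback) {
        int n;
        if (!(s != null)) {
            n = fallback;
        } else {
            String trimmed = s.trim();
            n = trimmed.length();
        }
        return n;
    }
}
```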
## 5. Experiment Setup

Figure 6 provides an overview of our study.

Figure 6. Overview of our study

First, we build a new dataset of vulnerabilities, VJBench, that contains 42 new vulnerabilities. We use this new dataset and the original dataset (Vul4J) to benchmark the vulnerability-fixing capabilities of DL-based APR techniques, LLMs and fine-tuned LLMs. Each language model generates 10 patches for each bug through inference. For each APR model, we use its default beam search size and validate its top 10 patches. The generated patches are then validated using test cases, together with manual verification of all the patches that pass the test cases. Then, we apply code transformations to Vul4J and VJBench to generate VJBench-trans. Finally, we evaluate the impact of code transformations on the vulnerability-repair capabilities of all the LLMs, fine-tuned LLMs and APR techniques.

### Dataset

In this work, we focus on fixing single-hunk Java vulnerabilities, as state-of-the-art DL-based APR models are designed to fix single-hunk bugs. We filter and obtain 35 single-hunk bugs from the Vul4J dataset. Along with the 15 single-hunk vulnerabilities from VJBench, we have a total of 50 Java vulnerabilities. We use perfect fault localization for these Java vulnerabilities, that is, we use the code lines that are modified in the developers' patches as the buggy lines.

### Large Language Model Setups

We evaluate each LLM with two input setups: (1) with the buggy lines commented as part of the input, and (2) without the buggy lines. We observe that InCoder fixes more vulnerabilities when the input contains buggy-line comments, while the other LLMs perform better without buggy lines. We report the best-performing setup for each model in the rest of this paper. For fine-tuned LLMs, we follow the input format with buggy-line comments used in (Zhu et al., 2018), which is described in Table 2. We configure each model to generate 10 patches for each vulnerability. For CodeT5, CodeGen, PLBART and InCoder, we set their beam search size to 10. For Codex, we set its parameter \(n\), the number of candidates to generate, to 10. Considering the inherent randomness of the sampling method adopted by Codex, we run it twenty-five times for each vulnerability and report the average results. Running twenty-five times keeps the margin of error small (\(\leq\)0.3) at the 95% confidence level. We set the sampling temperature of Codex to 0.6, which was shown to have the best performance when sampling ten candidates in prior work (Cordex et al., 2017). We set the maximum number of newly generated tokens to 400 for Codex, due to its request rate limit, and to 512 for all other LLMs.

### Patch Validation

Codex's insertion mode generates code to be inserted between the prefix prompt and the suffix prompt. Since we use the code before and including the buggy-line comment as its prefix prompt and the code after the buggy-line comment as its suffix prompt, we replace the original buggy code with the code that Codex generates. Similarly, CodeT5 generates code to replace the masked label in its input. PLBART generates the entire patched function, which replaces the whole buggy function. CodeGen and InCoder are completion models that generate code to complete the given prefix prompt. We take the first complete function CodeGen and InCoder generate to replace the original buggy function. The fine-tuned CodeT5, CodeGen, PLBART and InCoder all directly generate the patched code to replace the buggy code.
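The per-model splicing and validation workflow can be summarized in the sketch below; the helper methods are hypothetical stand-ins for the build system and test runner, since the paper does not show its harness code:

```java
import java.util.List;

class PatchValidationSketch {
    // buggySource: the original file contents; buggyHunk: the perfectly localized buggy lines.
    static void validate(String buggySource, String buggyHunk, List<String> candidates) {
        for (String candidate : candidates) {                  // top-10 patches per bug
            String patched = buggySource.replace(buggyHunk, candidate);
            if (!compiles(patched)) {
                continue;                                      // uncompilable patches cannot be correct
            }
            if (passesAllTests(patched)) {
                recordPlausible(candidate);                    // "plausible" = passes all test cases;
            }                                                  // correctness is judged manually afterwards
        }
    }

    // Hypothetical helpers standing in for Maven/Gradle builds and the project's test suite.
    static boolean compiles(String source) { return true; }
    static boolean passesAllTests(String source) { return true; }
    static void recordPlausible(String patch) { }
}
```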
For each LLM and APR technique, we first validate the top-10 patches they generate using the test cases from the project. Following prior work (Zhu et al., 2017; Zhu et al., 2017; Zhu et al., 2018; Zhu et al., 2019), _plausible patches_ are patches that pass all test cases, _correct patches_ are semantically equivalent to the developer patches, and _over-fitted patches_ are patches that pass all test cases but are incorrect. We manually inspect each plausible patch to identify whether it is a correct patch.

## 6. Results and Findings

We evaluate the vulnerability-fixing capabilities of five LLMs, four fine-tuned LLMs and four DL-based APR techniques on two real-world Java vulnerability benchmarks.

### RQ1: Vulnerability Fixing Capabilities

We run Codex twenty-five times and report the average number of fixed vulnerabilities with the margin of error, because Codex's patch generation is non-deterministic. For the other LLMs, we only run them once, since their patch generation is deterministic (Section 5). Table 4 shows the fixing capabilities, i.e., the number of vulnerabilities that each approach fixes correctly, of the five LLMs, four fine-tuned LLMs and four APR models. We consider the top ten patches, since a recent study shows that almost all developers are only willing to examine at most ten patches (Zhu et al., 2017). Results in Table 4 are reported as X/Y, where X is the number of vulnerabilities correctly fixed by each technique and Y is the number of vulnerabilities that are plausibly fixed. A vulnerability is plausibly fixed by a model if the model generates a plausible patch (definition in Section 5.3).

#### 6.1.1. LLMs vs. APR Techniques

We first compare using LLMs as is with APR techniques. Here, _LLMs as is_ means that we apply Codex and the other LLMs under zero-shot learning, without fine-tuning. Our results show that Codex exhibits the best fixing capability. Out of a total of 50 vulnerabilities in Vul4J and VJBench, Codex fixes an average of 10.2 vulnerabilities with a margin of error of 0.3 (at 95% confidence). InCoder demonstrates the second-best capability, fixing 5 vulnerabilities. The other LLMs and DL-based APR techniques only fix very few vulnerabilities. Overall, LLMs and APR techniques show very limited vulnerability-fixing capabilities.

Our finding that Codex performs the best on fixing Java vulnerabilities is consistent with Codex's superior performance in repairing general bugs (Zhu et al., 2017) and in other domains (Brandt et al., 2016; Zhu et al., 2018; Zhu et al., 2019; Zhu et al., 2019), possibly due to its significantly larger model size and training data size, as indicated in Table 3. Our result is also consistent with recent work (Zhu et al., 2019) in showing that LLMs without fine-tuning have competitive fixing capabilities - InCoder fixes three more vulnerabilities than the best APR technique (RewardRepair). However, while (Zhu et al., 2019) shows that CodeGen, PLBART and InCoder as is can fix 18%-23% of the general bugs in Java APR benchmarks, our result shows that they fix only 4% (2/50)-10% (5/50) of the vulnerabilities in Vul4J and VJBench. In the real world, only about 1-7% of bugs are vulnerabilities, resulting in little such data for models to learn from. This means that, for neural networks, fixing vulnerabilities is more difficult than fixing general bugs and requires more domain-specific knowledge.
**Finding 1**: Existing large language models and APR techniques fix very few Java vulnerabilities. Codex fixes 10.2 (20.4%) vulnerabilities on average, exhibiting the best fixing capability.

#### 6.1.2. LLMs Fine-tuned with APR Data

We applied the LLMs fine-tuned with general APR data by (Zhu et al., 2019) to the vulnerability benchmarks. We cannot fine-tune Codex, as OpenAI does not provide a public API for fine-tuning it. Table 4 shows that all the fine-tuned LLMs fix more vulnerabilities than their original models. In detail, fine-tuned InCoder fixes 9 vulnerabilities, 4 more than its original model. The second-best model is fine-tuned CodeGen, which fixes 8 vulnerabilities, 6 more than its original model. Fine-tuned CodeT5 and fine-tuned PLBART fix 3 and 2 more vulnerabilities, respectively.

Overall, fine-tuning with general APR data can improve the fixing capabilities of LLMs for vulnerabilities. First, fine-tuning can better adapt LLMs to APR tasks, making LLMs generate patches instead of open-ended code or text. Second, though vulnerabilities have special characteristics (root causes) compared to general bugs, some vulnerabilities still share similar repair patterns with general bugs, such as replacing a function argument with another variable, which can be well learned during fine-tuning. Given the scarcity of real-world vulnerability data, our results suggest that fine-tuning LLMs with general APR data can be beneficial.

**Finding 2**: Fine-tuning with general APR data improves all four LLMs' vulnerability-fixing capabilities. Fine-tuned InCoder fixes 9 vulnerabilities, exhibiting competitive fixing capability compared to Codex's.

We also evaluate the compilation rates (i.e., the portion of generated patches that compile) to study the quality of the patches. Uncompilable patches cannot be correct patches. Codex, the best model overall, has a compilation rate of 79.7%, which is significantly higher than that of the best fine-tuned LLM, fine-tuned InCoder (55.2%), and the best APR model, Recoder (57.6%). Fine-tuning notably improves CodeT5's and CodeGen's compilation rates, from 6.4% to 46.8% and from 35.8% to 47.2%, respectively. On the other hand, the compilation rate of fine-tuned PLBART is 45.2%, slightly lower than the original PLBART's compilation rate of 47.8%. Despite the higher 65.2% compilation rate of InCoder compared to its fine-tuned model, it generates 82.0% duplicate patches, whereas the fine-tuned InCoder generates patches with more diverse modifications that result in more correct fixes. Overall, compared with the compilation rates when repairing general bugs (Zhu et al., 2019), these compilation rates when fixing vulnerabilities are lower. When repairing general bugs, PLBART, CodeGen and InCoder without fine-tuning show average compilation rates of 65%-73% (Wang et al., 2018), higher than both their original and fine-tuned versions achieve when repairing vulnerabilities.

Figure 7(a) shows an example of an uncompilable patch for Vul4J-12: the function signature declares t to be final, so t's value is not allowed to change. However, Codex fails to capture this constraint, even though the function signature is only two lines above the buggy line. As a result, it generates the code t-- to decrease t's value, which makes the patch uncompilable. Similarly, RewardRepair ignores the fact that v and vt are both of type int, and invokes the invalid function equals on them.
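To see why such a patch fails to compile, consider a minimal sketch of the constraint (a hypothetical signature, not the actual Vul4J-12 code):

```java
class FinalParamSketch {
    static int countDown(final int t) {
        // t--;  // javac rejects this: "cannot assign a value to final variable t"
        return t - 1; // a compilable alternative leaves t itself unmodified
    }
}
```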
Figure 7(b) shows another example of an uncompilable patch, for Vul4J-1: parseArray is a method defined in another class in the project that accepts only two or three arguments. All four fine-tuned LLMs generate the same uncompilable patch, in which they pass null as a fourth argument, because they do not have the information that parseArray does not accept four arguments.

These results suggest that LLMs' abilities to learn code syntax could be improved. Recent work (Wang et al., 2018; Wang et al., 2018) takes steps in the right direction by adding domain knowledge to models to help them learn code syntax and semantics. Another direction is prompt engineering, such as providing method signatures or type information in the prompt to specify the constraints. This would enable LLMs to utilize syntax information from across the entire project, rather than being limited to the code within the buggy function.

### RQ2: What kinds of vulnerabilities do LLMs and learning-based APR techniques fix?

Table 5 shows the vulnerabilities that are correctly fixed by the LLMs, fine-tuned LLMs, and APR techniques. In total, 16 vulnerabilities (belonging to ten CWE categories, as shown in column _CWE_ with their descriptions in column _Description_) from both benchmarks are fixed by at least one of the models. The IDs of these vulnerabilities are listed under column _Vul. ID_. Some vulnerabilities belong to no specific CWE category and are listed as _unk_.

Vul4J-47 is a vulnerability that only Codex can fix. Figure 8(a) shows the developer patch for Vul4J-47 of type CWE-611 (Improper Restriction of XML External Entity Reference). The correct fix requires inserting a statement xmlIn.setProperty(XMLInputFactory.SUPPORT_DTD, Boolean.FALSE) to disable support for Document Type Definitions (DTDs), because DTD processing can be used to perform XML External Entity (XXE) attacks. The original buggy code only disables support for external entities by setting the IS_SUPPORTING_EXTERNAL_ENTITIES property to false, which is not enough to prevent the attack (a configuration sketch follows at the end of this subsection). Figure 8(b) shows an incorrect patch generated by fine-tuned CodeGen, which merely replaces the Boolean.FALSE with Boolean.TRUE.

In general, except for Codex, the LLMs and fine-tuned LLMs only fix vulnerabilities that require simple modifications, such as deleting statements or replacing variable/method names. On the other hand, Codex fixes 15 out of the 16 vulnerabilities (the union of all bugs for which Codex generates at least one correct patch in twenty-five runs). The one vulnerability fixed by other LLMs but not Codex is Vul4J-39 of type CWE-200 (Exposure of Sensitive Information to an Unauthorized Actor). This vulnerability can be fixed by simply deleting the entire buggy code. However, for Vul4J-39, Codex generates patches by applying different modifications to the buggy code, rather than deleting it.
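Returning to Vul4J-47, the hardening described above can be sketched as follows; the surrounding code is invented, but the two property constants are part of the standard javax.xml.stream API:

```java
import javax.xml.stream.XMLInputFactory;

class XxeHardeningSketch {
    static XMLInputFactory newSafeFactory() {
        XMLInputFactory xmlIn = XMLInputFactory.newInstance();
        // Buggy version: only external entities are disabled; DTD processing
        // is still allowed, so the parser remains open to XXE via DTDs.
        xmlIn.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, Boolean.FALSE);
        // Developer fix: additionally disable DTD support altogether.
        xmlIn.setProperty(XMLInputFactory.SUPPORT_DTD, Boolean.FALSE);
        return xmlIn;
    }
}
```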
Table 4. Comparison of LLMs and APR models on fixing Java vulnerabilities. For x/y in a cell, x denotes the number of correctly fixed bugs, and y the number of plausibly fixed bugs (with at least one patch that passes the test cases). RewardR is RewardRepair; (ft) marks fine-tuned models.

| | Codex | CodeT5 | CodeGen | PLBART | InCoder | CodeT5 (ft) | CodeGen (ft) | PLBART (ft) | InCoder (ft) | CURE | Recoder | RewardR | KNOD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VJBench (15) | **4.0**/4.0 | 0/0 | 1/2 | 2/3 | 2/2 | 3/4 | 3/4 | 2/3 | 3/4 | 0/1 | 1/2 | 2/3 | 0/0 |
| Vul4J (35) | **6.2**/10.9 | 2/2 | 1/6 | 0/4 | 3/4 | 2/7 | 5/8 | 2/6 | 6/9 | 1/4 | 0/4 | 0/2 | 1/1 |
| **Total (50)** | **10.2**/14.9 | 2/2 | 2/8 | 2/7 | 5/6 | 5/11 | 8/12 | 4/9 | 9/13 | 1/5 | 1/6 | 2/5 | 1/1 |
| Compilation rate (%) | 79.7 | 6.4 | 35.8 | 47.8 | 65.2 | 46.8 | 47.2 | 45.2 | 55.2 | 24.5 | 57.6 | 37.7 | 37.3 |

Figure 7. Vul4J-12's and Vul4J-1's developer patches and uncompilable patches

Figure 8. Java vulnerability Vul4J-47 and its patches

Ratpack-1, Vul4J-12, Vul4J-39 and Jenkins-2 are the four vulnerabilities fixed by the largest number (6-7 out of 13) of models. Ratpack-1 (Figure 9), when initializing DefaultHttpHeaders, sets the constructor argument to false, which disables the validation of user-supplied header values. The correct patch simply removes false or changes it to true to enable the validation. The fix for Vul4J-12 (Figure 7(a)) is to change the keyword while to if, and the fix for both Jenkins-2 and Vul4J-39 is to simply delete an if statement that exposes sensitive information to unauthorized actors. The simplicity of these patches is evident from the number of models that can fix them.

### RQ3: Fixing Capabilities on Transformed Vulnerabilities

To mitigate the training-testing data overlapping threat, we apply code transformations to the benchmarks to study the generalization abilities of Codex and the other LLMs on unseen data (Section 4). Table 6 shows the number of vulnerabilities that LLMs as is, fine-tuned LLMs, and APR techniques can fix in four settings: (1) _No transformation_--the original vulnerability dataset, (2) _Rename only_--only identifier renaming is applied, (3) _Code structure change only_--only code structure change is applied, and (4) _Rename + code structure change_--both transformations are applied.

Overall, code transformations make LLMs (fine-tuned or not) and APR techniques fix fewer vulnerabilities. For example, fine-tuned InCoder fixes nine vulnerabilities in Vul4J and VJBench (no transformation), but only five fully transformed vulnerabilities (Rename + code structure change). The impact of transformation is smaller on some models, e.g., Codex and fine-tuned CodeT5, demonstrating these models' robustness against code transformations and their generalized learning capabilities. This result, to some extent, addresses the threat of Codex's non-public training data and reveals Codex's strong learning and vulnerability-fixing capability. Many models only fix two or fewer vulnerabilities without transformations, so the impact of transformations cannot be big for these models. However, we see a general trend across almost all models that these code transformations make models fix fewer vulnerabilities.
Figure 12(a) shows an example, Halo-1, whose correct fix is to call normalize() on pathToCheck to remove any redundant elements in the file path. This bug can be correctly fixed by Codex, fine-tuned CodeGen, and fine-tuned InCoder. Yet, after applying both transformations, only Codex can fix it (Figure 12(b)). For fine-tuned LLMs, different transformations have different effects, but each transformation significantly affects at least one LLM. For example, although identifier renaming has a small effect on CodeT5 and CodeGen, it decreases the number of vulnerabilities that InCoder fixes by four. The result shows that our code transformations effectively test the generalization ability of LLMs on unseen data. One interesting observation is that some models fix transformed vulnerabilities that they cannot fix in the original dataset. This is a reasonable phenomenon because our transformations may convert a code snippet into a simpler form for the models to fix. For example, Vul4J-30 is a bug that none of the models fixes in its original form, but its transformed version is fixed by all four fine-tuned LLMs when the code structure transformation is applied. Figure 13 shows that the fix of Vul4J-30 is to call trim() on String.valueOf(value). The original vulnerability is hard to fix as String.valueOf(value) is part of a complex if-condition. Yet, after code transformation, String.valueOf(value) stands out as a single statement, which is easier for LLMs to repair. This phenomenon suggests that equivalent code transformation could be a promising direction to simplify vulnerable code and enhance the effectiveness of fixing vulnerabilities.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & \multicolumn{5}{c}{LLMs} & \multicolumn{4}{c}{Fine-tuned LLMs} & \multicolumn{4}{c}{APR Techniques} \\ \cline{2-14} & Codex & CodeT5 & CodeGen & PLBART & InCoder & CodeT5 & CodeGen & PLBART & InCoder & CURE & Recoder & RewardRepair & KNOD \\ \hline No transformation & **10.2 \(\pm\) 0.3** & 2 & 2 & 2 & 5 & 5 & 8 & 4 & 9 & 1 & 1 & 2 & 1 \\ Rename only & **8.1 \(\pm\) 0.3** & 0 & 1 & 0 & 2 & 4 & 6 & 1 & 5 & 0 & 1 & 1 & 1 \\ Code structure change only & **9.9 \(\pm\) 0.3** & 0 & 2 & 2 & 1 & 4 & 6 & 4 & 5 & 0 & 1 & 1 & 2 \\ Rename + code structure change & **8.3 \(\pm\) 0.4** & 0 & 1 & 1 & 1 & 3 & 4 & 3 & 5 & 0 & 1 & 1 & 0 \\ \hline \hline \end{tabular} \end{table} Table 6. Impact of code transformation on LLMs' and APR models' vulnerability repair capabilities. For Codex, x \(\pm\) y: x denotes the average number of correctly fixed bugs, and y denotes the margin of error (95% confidence).

Figure 12. Halo-1 before and after transformation

Figure 13. Vul4J-30 before and after code structure change

## 7. Threats to Validity

Java vulnerabilities are diverse. It is hard for benchmarks to represent all of them. Thus, our findings might not generalize to all Java vulnerabilities. We address this threat by expanding the existing Java vulnerability benchmark with a new dataset of vulnerabilities. We rely on developers' patches to assess whether a vulnerability is fixed. Developers may make mistakes in fixing vulnerabilities; therefore, our ground truth might be incorrect. We mitigate this threat by only looking at vulnerabilities that are publicly disclosed in the NVD dataset, are reproducible, and include test cases indicating that the fixed version is no longer exploitable.
Another threat is that Codex (and other LLMs) may have been trained on the vulnerability patches in the Vul4J and VJBench datasets. To mitigate this problem, we apply code transformations to create semantically equivalent vulnerabilities that are not included in their training data. We then apply Codex to repair these transformed programs to show that Codex is indeed able to repair new vulnerabilities that it has not seen.

## 8. Related Work

### DL-based Vulnerability Fixing Techniques

Much work uses DL to fix vulnerabilities. Encoder-decoder approaches have been proposed for repairing C vulnerabilities: (Kumar et al., 2017) fine-tuned a CodeT5 model with C vulnerability repair data; (Kumar et al., 2017) trained a transformer model on a large bug-fixing dataset and then tuned it on a small vulnerability-fixing dataset, but they use sequence accuracy as the evaluation metric rather than practical APR settings. Previous work (Kumar et al., 2017) applied both CodeBERT and GraphCodeBERT to fix vulnerabilities, but they only evaluated on a _synthetic_ vulnerability database, the Juliet 1.1 C/C++ test suite (Kumar et al., 2017), which is a benchmark for evaluating static analyzers only. As a result, the vulnerabilities in that dataset are isolated and simplified to fit within a few lines and are not representative of code vulnerabilities in production. Our work is different since we use a dataset of _real-world_ vulnerabilities for our evaluation, making our results closer to what researchers and developers can expect of the quality of LLM vulnerability repair in real-world production code. Prior work (Zhao et al., 2018) applied LLMs with zero-shot learning to repair seven hand-crafted C/Python vulnerabilities and 12 real-world C vulnerabilities. They explored the effectiveness of different prompt templates and used the static analysis tool CodeQL or C sanitizers to detect the vulnerabilities and incorporate the obtained error messages into the input prompts. Our work differs from (Zhao et al., 2018) in several aspects. First, we study not only LLMs but also DL-based APR tools and LLMs fine-tuned with general APR data. Second, we evaluate our approach on a larger dataset of 50 real-world Java vulnerabilities. Third, we apply code transformations to mitigate the data leakage problem and suggest a new direction of using transformations to simplify the repair for some vulnerabilities. Most vulnerabilities in Vul4J and VJBench cannot be detected by state-of-the-art Java security analysis tools, so we cannot incorporate error messages in the input prompts as (Zhao et al., 2018) did.

### Vulnerability Benchmarks

Previous work proposed benchmarks and datasets to help evaluate vulnerability fixing approaches. Maestro (Maestro, 2018) proposes a platform for benchmarking tools on Java and C++ vulnerabilities. As Maestro does not support running LLMs and APR models, we directly use the same Java vulnerability dataset, Vul4J (Maestro, 2018), with our new dataset VJBench. Other benchmarks and datasets of real-world vulnerabilities have been proposed (Kumar et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018). However, these datasets only contain code snippets from the fixing commits and do not have test cases. Therefore, such datasets can only support code matching when evaluating the correctness of patches and cannot be used for automated program repair in practice.
### LLMs for Repair and Other Tasks

Researchers use LLMs to improve many software engineering tasks such as automated program repair (Zhao et al., 2018; Li et al., 2018; Li et al., 2018), auto-complete suggestions (Li et al., 2018), and pair-programming (Zhao et al., 2018). Much work also discusses the implications of LLMs for software developers (Li et al., 2018; Li et al., 2018; Li et al., 2018) and the current limitations of LLMs (Li et al., 2018; Li et al., 2018; Li et al., 2018). Our work explores a different application domain of LLMs, with its own challenges (vulnerabilities are notoriously difficult to fix (Li et al., 2018)) that have not been well explored yet.

## 9. Conclusion

This work is the first to investigate LLMs' and DL-based APR models' capabilities in repairing Java vulnerabilities. We evaluate five LLMs, four fine-tuned LLMs, and four DL-based APR techniques on two real-world Java vulnerability benchmarks, including a new one that we create. We use code transformations to address the training and testing data overlapping threat of LLMs, and we create a new Java vulnerability repair benchmark, VJBench, along with its transformed version, VJBench-trans. We find that existing LLMs and APR models fix very few Java vulnerabilities, and we call for new research innovations to improve automated Java vulnerability repair, such as creating larger vulnerability repair training datasets, fine-tuning LLMs with such data, exploring few-shot learning, and leveraging simplifying transformations to improve program repair. **Replication package**: Our benchmark and artifacts are available at (Li et al., 2018).

## Acknowledgement

We thank the reviewers for their insightful comments and suggestions. This work was funded in part by NSF 1901242, NSF 2006688, J.P. Morgan AI Faculty Research Awards, and Meta/Facebook Research Awards. Any opinions, findings, and conclusions in this paper are those of the authors only and do not necessarily reflect the views of our sponsors.
2307.10436
A Matrix Ensemble Kalman Filter-based Multi-arm Neural Network to Adequately Approximate Deep Neural Networks
Deep Learners (DLs) are the state-of-the-art predictive mechanism with applications in many fields requiring complex high dimensional data processing. Although conventional DLs get trained via gradient descent with back-propagation, Kalman Filter (KF)-based techniques that do not need gradient computation have been developed to approximate DLs. We propose a multi-arm extension of a KF-based DL approximator that can mimic DL when the sample size is too small to train a multi-arm DL. The proposed Matrix Ensemble Kalman Filter-based multi-arm ANN (MEnKF-ANN) also performs explicit model stacking that becomes relevant when the training sample has an unequal-size feature set. Our proposed technique can approximate Long Short-term Memory (LSTM) Networks and attach uncertainty to the predictions obtained from these LSTMs with desirable coverage. We demonstrate how MEnKF-ANN can "adequately" approximate an LSTM network trained to classify what carbohydrate substrates are digested and utilized by a microbiome sample whose genomic sequences consist of polysaccharide utilization loci (PULs) and their encoded genes.
Ved Piyush, Yuchen Yan, Yuzhen Zhou, Yanbin Yin, Souparno Ghosh
2023-07-19T20:00:00Z
http://arxiv.org/abs/2307.10436v1
A Matrix Ensemble Kalman Filter-based Multi-arm Neural Network to Adequately Approximate Deep Neural Networks ###### Abstract Deep Learners (DLs) are the state-of-the-art predictive mechanism with applications in many fields requiring complex high dimensional data processing. Although conventional DLs get trained via gradient descent with back-propagation, Kalman Filter (KF)-based techniques that do not need gradient computation have been developed to approximate DLs. We propose a multi-arm extension of a KF-based DL approximator that can mimic DL when the sample size is too small to train a multi-arm DL. The proposed Matrix Ensemble Kalman Filter-based multi-arm ANN (MEnKF-ANN) also performs explicit model stacking that becomes relevant when the training sample has an unequal-size feature set. Our proposed technique can approximate Long Short-term Memory (LSTM) Networks and attach uncertainty to the predictions obtained from these LSTMs with desirable coverage. We demonstrate how MEnKF-ANN can "adequately" approximate an LSTM network trained to classify what carbohydrate substrates are digested and utilized by a microbiome sample whose genomic sequences consist of polysaccharide utilization loci (PULs) and their encoded genes. The scripts to reproduce the results in this paper are available at [https://github.com/Ved-Piyush/MEnKF-ANN-PUL](https://github.com/Ved-Piyush/MEnKF-ANN-PUL).

## 1 Introduction

Deep Learners (DLs) have achieved state-of-the-art status in empirical predictive modeling in a wide array of fields. The ability of DLs to synthesize vast amounts of complex data, ranging from high dimensional vectors to functions and images, to produce accurate predictions has made them the go-to models in several areas where predictive accuracy is of paramount interest. Bioinformatics has also seen a steep increase in articles developing or deploying DL techniques in recent years [Min et al., 2017]. However, conventional DLs trained via gradient descent with back-propagation require tuning a large number of hyperparameters. Additionally, given the vast number of weights that DLs estimate, they are prone to overfitting when the training sample size is relatively small. Since the gradient descent algorithms compute the weights deterministically, DLs in their vanilla form do not yield any uncertainty estimate. Several techniques have been proposed to alleviate the foregoing issues in DLs. For instance, the Bayesian Neural Network (BNN) [Kononenko, 1989, Neal, 2012] was explicitly devised to incorporate epistemic and aleatoric uncertainty into the parameter estimation process by assigning suitable priors to the weights. The Bayesian mechanism can process these priors and generate uncertainty associated with DL predictions. Additionally, with a judicious choice of priors, BNNs can be made less prone to overfitting [Fortuin et al., 2021, Srivastava et al., 2014]. Variational inference is another popular technique for uncertainty quantification in DLs. In particular, Hinton and Van Camp [1993] showed that a posterior distribution for the model weights can be obtained by minimizing the Kullback-Leibler distance between a variational approximation of the posterior and the true posterior of the weights. Bayes by Backprop [Blundell et al., 2015] is another technique that uses a variational formulation to extract uncertainty associated with the weights in DLs.
However, the Monte Carlo dropout technique [Srivastava et al., 2014], wherein each neuron (and all of its connections) is randomly dropped with some probability during model training, has arguably turned out to be the most popular method to regularize DLs and extract predictive uncertainty. In addition to the conceptual simplicity of the dropout technique, it was shown that models trained using dropout are an approximation to Gaussian processes and are theoretically equivalent to variational inference [Gal and Ghahramani, 2016]. Regardless of its conceptual simplicity and theoretical underpinning, the dropout method requires gradient computation. Hence, it can be quite computationally intensive in DLs with millions of parameters. Another suite of methods for approximating DLs uses Kalman Filters (KFs), or their variants, to obtain approximate estimates of the DL parameters [Yegenoglu et al., 2020, Rivals and Personnaz, 1998, Wan and Van Der Merwe, 2000, Julier and Uhlmann, 2004, Chen et al., 2019]. In particular, the Ensemble Kalman Filter (EnKF) technique offers a computationally fast approximation technique for DLs. For instance, Chen et al. [2019] train a single-hidden-layer neural network using the EnKF updating equations outlined in Iglesias et al. [2013] and show how, using the augmented state variable, one can estimate the measurement error variance. In the DL setting, Chen et al. [2018] demonstrate the utility of EnKF in approximating a Long Short Term Memory (LSTM) model. Yegenoglu et al. [2020] use the EnKF to train a Convolutional Neural Network directly using the Kalman filtering equations derived in Iglesias et al. [2013]. All these methods approximate single-arm DLs and, therefore, cannot be used in situations where input features can be represented in multiple ways. Our target is to develop a multi-arm approximator to the DL. In principle, we could train a different DL approximator for each representation and perform post-training model averaging. However, that would increase the computation cost substantially. We argue that since multiple different feature representations do not necessarily offer complementary information, developing a multi-arm approximator that performs model averaging while training would perform "adequately". To that end, we develop a Matrix Ensemble Kalman Filter (MEnKF)-based multi-arm ANN that approximates a deep learner and simultaneously performs model averaging. We apply our method to approximate an LSTM model trained to classify what carbohydrate substrates are digested and utilized by a microbiome sample characterized by genomic sequences consisting of polysaccharide utilization loci (PULs) [Bjursell et al., 2006] and their encoded genes. We use two different representations of the genomic sequences containing the PULs in the two arms of our MEnKF-ANN approximator and demonstrate that our approximator closely follows the predicted probabilities obtained from the trained LSTM. We also generate prediction intervals around the LSTM-predicted probabilities. Our results show that the average width of the prediction interval obtained from the MEnKF-ANN approximator is lower than that obtained from the original LSTM trained with MC dropout. We also perform extensive simulations, mimicking the focal dataset, to demonstrate that our method has desirable coverage for test samples compared to the MC dropout technique.
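As a concrete illustration of the MC dropout procedure referenced above, here is a minimal PyTorch sketch (the framework choice is our assumption; the cited works are framework-agnostic): keeping the dropout layer active at inference time yields stochastic forward passes from which an empirical predictive interval can be read off.

```python
# Minimal MC-dropout sketch: T stochastic forward passes -> empirical interval.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5),
                    nn.Linear(64, 1))

def mc_dropout_predict(net, x, T=200):
    net.train()                       # keeps nn.Dropout stochastic at prediction time
    with torch.no_grad():
        draws = torch.stack([net(x) for _ in range(T)])   # (T, n, 1)
    lo = draws.quantile(0.025, dim=0)                     # empirical 95% fences
    hi = draws.quantile(0.975, dim=0)
    return draws.mean(dim=0), lo, hi

mean, lo, hi = mc_dropout_predict(net, torch.randn(8, 16))
```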
Finally, we emphasize that even though the original problem is binary classification, our MEnKF-ANN approximator is designed to emulate the probabilities obtained from the original LSTM model and quantify the uncertainties in the LSTM-predicted probabilities. The remainder of the article is organized as follows. In section 2, we describe the aspects of an experimentally obtained microbiome dataset that motivated us to design this approximator. In section 3, for the sake of completeness, we offer a brief review of KF, EnKF, and how these techniques have been used to train DLs. Section 4 details the construction of our MEnKF-ANN method. In section 5, we offer extensive simulation results under different scenarios and follow them up with the application to real data in section 6. Finally, section 7 offers concluding remarks and future research directions.

## 2 Motivating Problem

The human gut, especially the colon, is a carbohydrate-rich environment (Kaoutari et al., 2013). However, most of the non-starch polysaccharides (for example, xylan, pectin, resistant glycans) reach the colon undegraded (Pudlo et al., 2022) because the human digestive system does not produce the enzymes required to degrade these polysaccharides (Flint et al., 2012). Instead, humans have developed a complex symbiotic relationship with gut microbiota, with the latter providing a large set of enzymes for degrading the aforementioned non-digestible dietary components (Valdes et al., 2018). Consequently, an essential task in studying the human gut microbiome is to predict what carbohydrate substrates a microbiome sample can digest from the genetic characterization of the said microbiome (Koropatkin et al., 2012). In order to generate a focused genetic characterization of the microbes that relates to their carbohydrate utilization property, one often investigates the genes encoding the Carbohydrate Active Enzymes (CAZymes) and other proteins that target glycosidic linkages and act to degrade, synthesize, or modify carbohydrates (Lombard et al., 2014; Zhang et al., 2018). This set of genes tends to form physically linked gene clusters in the genome known as polysaccharide utilization loci (PULs) (Bjursell et al., 2006). Consequently, the gene sequences associated with the PULs of microbes can be used as a predictor to ascertain the carbohydrate substrate the microbe can efficiently degrade. However, these gene sequences are string-valued quantities (Huang et al., 2018; Stewart et al., 2018), and hence their naive quantitative representations (for instance, one-hot encoding or count vectorization) often do not produce classifiers with acceptable accuracy (Badjatiya et al., 2017). Instead, we can use an LSTM to process the entire sequence of string-valued features and then implement a classifier with a categorical loss function. The trained LSTM, by default, produces an embedding of the gene sequences in a vector space. Alternatively, we can also use a Doc2Vec embedding of the entire sequence associated with the PUL, or an arithmetic average of the Word2Vec embeddings of each gene in the sequence, and train a shallow learner (an ANN, for example). Since various representations of the predictors are available, we can train a multi-arm DL that takes different representations of features in different arms and performs concatenation/late integration of the embeddings before the prediction layer (Liu et al., 2020; Sharifi-Noghabi et al., 2019).
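For concreteness, the sketch below (assuming the gensim library) builds the two alternative sequence representations just described: a Doc2Vec vector for a whole PUL gene sequence, and an arithmetic average of per-gene Word2Vec vectors. The toy gene tokens are hypothetical stand-ins for the string-valued gene identifiers.

```python
# Two representations of a PUL gene sequence: Doc2Vec vs. averaged Word2Vec.
import numpy as np
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

puls = [["GH10", "TBDT", "HTCS", "GH43"], ["PL1", "CE8", "GH28"]]  # toy sequences

d2v = Doc2Vec([TaggedDocument(genes, [i]) for i, genes in enumerate(puls)],
              vector_size=32, min_count=1, epochs=50)
w2v = Word2Vec(puls, vector_size=32, min_count=1, epochs=50)

doc_embedding = d2v.infer_vector(puls[0])                      # whole-sequence vector
avg_embedding = np.mean([w2v.wv[g] for g in puls[0]], axis=0)  # averaged Word2Vec
```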
However, such multi-arm DLs require a relatively large number of training samples - typically tens of thousands. Since the experimental characterization of new PULs for carbohydrate utilization is an expensive process [Ausland et al., 2021], we do not have large enough labeled samples to train complex multi-arm DLs. This predicament motivates us to develop a multi-arm approximator to a DL with the following capabilities: (a) it must "adequately" approximate the focal single-arm DL, (b) it should be able to ingest different feature representations in different arms and perform model averaging, and (c) it should be able to detect if the set of representations supplied to it is substantially different from the representations used to train the original DL, i.e., be sensitive to major misspecification. Since the original DL is trained on a single representation of features and the approximator ingests multiple representations, the latter is misspecified in a strict sense. However, this misspecification is deliberately introduced to circumvent training a multi-arm DL and to assess, via model averaging, whether there is any benefit in using multiple representations of the same feature set. We extract the dataset from the dbCAN-PUL database [Ausland et al., 2021], which contains experimentally verified PULs and the corresponding GenBank sequences of these PULs along with known target carbohydrate substrates. Figure 1 shows an example of a gene sequence associated with a PUL for the substrate Pectin. We have approximately 411 data points in total. Figure 2 shows the frequency distribution of the various target substrates in the dataset. We do not have sufficient samples to train a complex DL to classify all the available substrates. Hence, we propose to classify the two most frequently occurring target substrates - Xylan and Pectin - and train an LSTM binary classifier. Seventy-four samples belong to these two classes of substrates in a reasonably balanced way. One way to attach uncertainty to the probabilities estimated by the LSTM architecture is to activate the dropout layers during the prediction phase. This will generate multiple copies of the prediction depending on the architecture of the dropout layers. However, we need to decide how many dropout layers to include and where to place them. Often the prediction intervals are quite sensitive to the number and placement of dropout layer(s). For instance, the top left and bottom left panels of Figure 3 show the prediction intervals associated with eight held-out test samples obtained when two dropout layers were included - one inside the LSTM and one just before the final prediction layer. In contrast, the top right and bottom right panels of Figure 3 show the prediction intervals associated with the same test samples obtained when the second dropout layer was removed from the foregoing LSTM architecture. Observe how the number and placement of dropout layers influence the variability in the width of these intervals. If we wish to control the width variability, the placement of the dropout layer becomes a tuning parameter and further increases the hyperparameter search space. We empirically show that our MEnKF-ANN approximator, trained on the logit-transformed LSTM-estimated probabilities as the response and the embeddings of the sequences obtained from the LSTM and Doc2Vec operations as two types of features, produces more stable prediction intervals regardless of the location of the dropout layer in the original LSTM.
Figure 1: Pectin PUL

Figure 2: Frequency distribution for the various substrates

Figure 3: Boxplots showing the predictions superimposed with the ground truth values from the heavy- and low-dropout LSTMs

## 3 Background

This section offers a brief overview of Kalman Filters and Ensemble Kalman Filters and discusses how these methods have been used to train NNs and approximate DLs. For an extensive discussion of the KF and EnKF techniques, we direct the audience to Katzfuss et al. (2016) and Evensen (2003), respectively.

### Linear State Space Model & Kalman Filter

Consider a linear Gaussian state-space model given by \[y_{t}=H_{t}x_{t}+\epsilon_{t},\ \ \epsilon_{t}\sim\mathcal{N}_{m_{t}}(0,R_{t}) \tag{1}\] \[x_{t}=M_{t}x_{t-1}+\eta_{t},\ \ \eta_{t}\sim\mathcal{N}_{n}(0,Q_{t}) \tag{2}\] where \(y_{t}\) is the \(m_{t}\) dimensional observation vector at time step \(t\), \(x_{t}\) is the \(n\) dimensional state variable at that time, and \(H_{t}\) and \(M_{t}\) denote the observation and the state transition matrices. Assume that the filtering distribution of the state vector at \(t-1\) is given by \[x_{t-1}|y_{1:t-1}\sim\mathcal{N}(\hat{\mu}_{t-1},\hat{\Sigma}_{t-1}). \tag{3}\] KF computes the forecast distribution at \(t\) using (2) as \[x_{t}|y_{1:t-1}\sim\mathcal{N}(\tilde{\mu}_{t},\tilde{\Sigma}_{t}),\qquad\tilde{\mu}_{t}:=M_{t}\hat{\mu}_{t-1},\qquad\tilde{\Sigma}_{t}:=M_{t}\hat{\Sigma}_{t-1}M_{t}^{{}^{\prime}}+Q_{t}. \tag{4}\] Once the measurement at time step \(t\) becomes available, the joint distribution of \((x_{t},y_{t})\) is given by \[\begin{pmatrix}x_{t}\\ y_{t}\end{pmatrix}\bigg{|}y_{1:t-1}\sim\mathcal{N}\left(\begin{pmatrix}\tilde{\mu}_{t}\\ H_{t}\tilde{\mu}_{t}\end{pmatrix},\begin{pmatrix}\tilde{\Sigma}_{t}&\tilde{\Sigma}_{t}H_{t}^{{}^{\prime}}\\ H_{t}\tilde{\Sigma}_{t}&H_{t}\tilde{\Sigma}_{t}H_{t}^{{}^{\prime}}+R_{t}\end{pmatrix}\right) \tag{5}\] Then the updated filtering distribution is \(x_{t}|y_{1:t}\sim\mathcal{N}(\hat{\mu}_{t},\hat{\Sigma}_{t})\), where \[\hat{\mu}_{t}:=\tilde{\mu}_{t}+K_{t}(y_{t}-H_{t}\tilde{\mu}_{t}),\qquad\hat{\Sigma}_{t}:=(I_{n}-K_{t}H_{t})\tilde{\Sigma}_{t}, \tag{6}\] with \(K_{t}:=\tilde{\Sigma}_{t}H_{t}^{{}^{\prime}}(H_{t}\tilde{\Sigma}_{t}H_{t}^{{}^{\prime}}+R_{t})^{-1}\) being the Kalman gain matrix. For large \(n\) and \(m_{t}\), computing the matrices in (6) is computationally expensive and often leads to numerical instability.

### Ensemble Kalman Filter

The idea of EnKF is to take an ensemble of size \(N\) from the filtering distribution at \(t-1\). This ensemble is denoted as \(\hat{x}_{t-1}^{(1)},\hat{x}_{t-1}^{(2)},\ldots,\hat{x}_{t-1}^{(N)}\sim\mathcal{N}_{n}\left(\hat{\mu}_{t-1},\hat{\Sigma}_{t-1}\right)\). In the forecast step of EnKF, (2) is applied to the ensemble members to obtain their evolution from \(t-1\) to \(t\). That is, \[\tilde{x}_{t}^{(i)}=M_{t}\hat{x}_{t-1}^{(i)}+\eta_{t}^{(i)},\ \ \eta_{t}^{(i)}\sim\mathcal{N}(0,Q_{t}),\ \ \ \ i=1,\ldots,N. \tag{7}\] It can also be shown that \(\tilde{x}_{t}^{(i)}\sim\mathcal{N}(\tilde{\mu}_{t},\tilde{\Sigma}_{t})\). Similar to the update step of the Kalman Filter, all the members of this ensemble must be updated when the measurement at time step \(t\) becomes available. To update these ensemble members, first, a sample of the measurement errors is drawn, that is, \(\epsilon_{t}^{(1)},\epsilon_{t}^{(2)},\ldots,\epsilon_{t}^{(N)}\sim\mathcal{N}_{m_{t}}(0,R_{t})\).
Then, using these simulated measurement errors, \(N\) perturbed observations \(\tilde{y}_{t}^{(1)},\tilde{y}_{t}^{(2)},\ldots,\tilde{y}_{t}^{(N)}\) are obtained using \(\tilde{y}_{t}^{(i)}=H_{t}\tilde{x}_{t}^{(i)}+\epsilon_{t}^{(i)}\). Since the joint distribution of \((\tilde{x}_{t}^{(i)},\tilde{y}_{t}^{(i)})\) is the same as in (5), the updating equations are obtained by shifting the forecasted ensemble in (7) as follows: \[\hat{x}_{t}^{(i)}=\tilde{x}_{t}^{(i)}+K_{t}(y_{t}-\tilde{y}_{t}^{(i)}),\ \ \ \ i=1,\ldots,N. \tag{8}\] It can be easily shown that \(\hat{x}_{t}^{(i)}\sim\mathcal{N}_{n}(\hat{\mu}_{t},\hat{\Sigma}_{t})\). The computational benefit comes from the fact that, instead of computing the Kalman gain matrix in (8) explicitly, the sample covariance matrix of the forecasted ensemble (\(\tilde{S}_{t}\), say) is used to estimate the Kalman gain matrix as \(\hat{K}_{t}:=\tilde{S}_{t}H_{t}^{{}^{\prime}}(H_{t}\tilde{S}_{t}H_{t}^{{}^{\prime}}+R_{t})^{-1}\).

### KF and EnKF for Deep Learners

Although the conventional KF is only suitable for estimating parameters in linear state-space models, several extensions have been proposed to generalize KF to nonlinear settings. For instance, Rivals and Personnaz (1998) used the extended KF to train feed-forward NNs. Wan and Van Der Merwe (2000) introduced the unscented KF, which better approximates nonlinear systems while remaining amenable to the KF framework. Anderson (2001) used the concept of state augmentation, which offered a generic method to handle nonlinearity in state-space models via the KF framework. Iglesias et al. (2013) utilized this state augmentation technique to develop a generic method to train ANNs. They derived the state-augmented KF's forecast and update equations for ANNs, thereby providing the algebraic framework to train DLs using the Ensemble Kalman Filter approach. These equations were subsequently used by Yegenoglu et al. (2020) to train a Convolutional Neural Network using EnKF. Furthermore, Chen et al. (2019) also used the updating equations in Iglesias et al. (2013) to train a single-hidden-layer ANN and demonstrated how, using state augmentation, one can estimate the measurement error variance. The state-augmented EnKF formulation has also been used to estimate parameters in LSTMs (Chen et al., 2018). All the foregoing models offer techniques to estimate parameters of complex nonlinear DLs using the EnKF framework. However, they are unsuitable when we have multiple feature representations. We want to approximate a DL with a multi-arm ANN trained via EnKF, as discussed in section 2.

## 4 Methodology

First, we offer a generic construction of the proposed MEnKF-ANN procedure and describe how this method can be deployed to solve the problem in section 2. We will use the following notation. \(Y\in\mathcal{R}\) is our target response. We have a total of \(m=\sum_{t=1}^{T}m_{t}\) training instances, with \(m_{t}\) being the number of training data points in the \(t^{th}\) batch. \(v_{t}^{f}\in\mathcal{R}^{p}\) and \(v_{t}^{g}\in\mathcal{R}^{q}\) denote two different representations of the features (possibly of different dimensions) for the \(t^{th}\) batch of data. Consider two ANNs, denoted by \(f\) and \(g\), with \(n_{f}\) and \(n_{g}\) learnable parameters, respectively. For illustrative purposes, we will assume \(n_{f}=n_{g}\). If \(n_{f}\neq n_{g}\), we can use suitable padding when updating the weights.
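Before specializing this machinery, here is a compact numpy sketch of the generic stochastic EnKF update in (7)-(8), with the Kalman gain estimated from the forecast-ensemble sample covariance; all dimensions and values below are illustrative only.

```python
# Stochastic EnKF update: perturbed observations + sample-covariance gain.
import numpy as np

def enkf_update(X_fore, y, H, R, rng):
    """X_fore: (n, N) forecast ensemble; y: (m,) observation vector."""
    S = np.cov(X_fore)                                   # (n, n) forecast sample covariance
    K = S @ H.T @ np.linalg.inv(H @ S @ H.T + R)         # estimated Kalman gain, (n, m)
    eps = rng.multivariate_normal(np.zeros_like(y), R, size=X_fore.shape[1]).T
    Y_pert = H @ X_fore + eps                            # perturbed observations, (m, N)
    return X_fore + K @ (y[:, None] - Y_pert)            # shifted (updated) ensemble

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 100))      # n = 5 state components, N = 100 members
H = np.eye(2, 5)                   # observe the first two components
R = 0.01 * np.eye(2)
X_updated = enkf_update(X, np.array([1.0, -1.0]), H, R, rng)
```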
In the \(t^{th}\) batch of data, we assign the feature sets \(v^{f}_{t}\) and \(v^{g}_{t}\) to networks \(f\) and \(g\), respectively. We denote \(w^{f}_{t}\) and \(w^{g}_{t}\) to be the updated weights for the neural networks \(f\) and \(g\), respectively.

### Matrix Kalman Filter based Multi-arm ANN

Consider the state matrix, \(X_{t}\), associated with the \(t^{th}\) batch of data, given by \[X_{t}^{(m_{t}+n_{g}+1)\times 2}=\begin{bmatrix}f(v^{f}_{t},w^{f}_{t})&g(v^{g}_{t},w^{g}_{t})\\ w^{f}_{t}&w^{g}_{t}\\ 0&a_{t}\end{bmatrix} \tag{9}\] where \(a_{t}\) is a real-valued scalar parameter (a second scalar, \(b_{t}\), enters the augmented state in (18)). Define \(H_{t}^{m_{t}\times(m_{t}+n_{g}+1)}=[I_{m_{t}},0_{m_{t}\times(n_{g}+1)}]\) and \(G_{t}^{2\times 1}=[1-\sigma(a_{t}),\ \ \sigma(a_{t})]^{T}\), where \(\sigma(.):\mathcal{R}\rightarrow[0,1]\), with the sigmoid function being a popular choice of \(\sigma(.)\). Additionally, define \(\Theta_{t-1}=I_{m_{t}+n_{g}+1}\) and \(\psi_{t-1}=I_{2}\). We are now in a position to define the Matrix Kalman Filter. The measurement equation is given by \[Y_{t}=H_{t}X_{t}G_{t}+\epsilon_{t}, \tag{10}\] with the state evolution equation being \[X_{t}=\Theta_{t-1}X_{t-1}\psi_{t-1}+\eta_{t}. \tag{11}\] Writing (11) in \(vec\) format gives \[x_{t}=vec(X_{t})=(\psi^{T}_{t-1}\otimes\Theta_{t-1})vec(X_{t-1})+vec(\eta_{t}). \tag{12}\] Now, letting \(\phi_{t-1}=\psi^{T}_{t-1}\otimes\Theta_{t-1}\) and \(\tilde{\eta}_{t}=vec(\eta_{t})\), we get from (12) \[x_{t}=\phi_{t-1}x_{t-1}+\tilde{\eta}_{t}. \tag{13}\] Equation (10) can similarly be compactified as \[y_{t}=\mathcal{H}_{t}x_{t}+\epsilon_{t}, \tag{14}\] where \(\mathcal{H}_{t}=G_{t}^{T}\otimes H_{t}\). Observe that (14) and (13) have the same form as the standard representation of the linear state space model described in (1) and (2). Therefore, we can get the matrix state space model's solution by converting it to the vector state space model and then using EnKF to approximate the updating equations. We direct the audience to [Choukroun et al., 2006] for more details on Matrix Kalman Filters.

### Interpreting MEnKF-ANN and a Reparametrization

The above construction of \(X_{t}\), \(H_{t}\), and \(G_{t}\) performs automatic model averaging while training. First, consider the matrix product \(H_{t}X_{t}\) in (10). This is an \(m_{t}\times 2\) matrix in which the first column is the prediction, for the \(t^{th}\) batch, from the neural network \(f\) and the second column is the prediction from the neural network \(g\). Post-multiplication by \(G_{t}\) takes the weighted average of each row of \(H_{t}X_{t}\), where the weights are defined inside the \(G_{t}\) matrix. Now consider the matrix product \(H_{t}X_{t}G_{t}\) in (10): \[H_{t}X_{t}G_{t}=\left[f(v_{t}^{f},w_{t}^{f}),\ \ g(v_{t}^{g},w_{t}^{g})\right]\begin{bmatrix}1-\sigma(a_{t})\\ \sigma(a_{t})\end{bmatrix}=\left[(1-\sigma(a_{t}))f(v_{t}^{f},w_{t}^{f})+\sigma(a_{t})g(v_{t}^{g},w_{t}^{g})\right] \tag{15}\] Equation (15) clearly demonstrates how our construction explicitly performs model averaging across the batches, with \(1-\sigma(a_{t})\) and \(\sigma(a_{t})\) being the convex weights allocated to the ANNs \(f\) and \(g\), respectively. Although the foregoing construction connects the Matrix KF formulation with a multi-arm ANN and performs explicit model averaging, it suffers from a computational bottleneck.
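The vectorization step in (12) rests on the standard identity \(vec(\Theta X\psi)=(\psi^{T}\otimes\Theta)\,vec(X)\), with \(vec(.)\) denoting column stacking. It can be checked numerically in a couple of lines (the matrix sizes below are arbitrary):

```python
# Numerical check of vec(Theta @ X @ Psi) == kron(Psi.T, Theta) @ vec(X).
import numpy as np

rng = np.random.default_rng(1)
Theta = rng.normal(size=(4, 4))
X = rng.normal(size=(4, 2))
Psi = rng.normal(size=(2, 2))

vec = lambda M: M.reshape(-1, order="F")   # column-stacking vec(.)
lhs = vec(Theta @ X @ Psi)
rhs = np.kron(Psi.T, Theta) @ vec(X)
assert np.allclose(lhs, rhs)
```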
Using (13) and (14), the estimated Kalman gain matrix would be \(K_{t}=\widetilde{S}_{t}\mathcal{H}_{t}^{T}(\mathcal{H}_{t}\widetilde{S}_{t}\mathcal{H}_{t}^{T}+\sigma_{y}^{2}I_{m_{t}})^{-1}\). However, in the above parameterization we have \(G_{t}=[1-\sigma(a_{t}),\ \ \sigma(a_{t})]^{T}\) and \(\mathcal{H}_{t}=G_{t}^{T}\otimes H_{t}\). This would require computing the estimated Kalman gain matrix for each member of the EnKF ensemble since, at any given iteration of our MEnKF-ANN, we have an \(a_{t}\) for each member of the ensemble. Thus, the computational complexity associated with the Kalman gain computation increases linearly with the size of the ensemble in the above parametrization of the MEnKF-ANN. To alleviate this computational bottleneck, consider the following parametrization: \[X_{t}=\begin{bmatrix}(1-\sigma(a_{t}))f(v_{t}^{f},w_{t}^{f})&\sigma(a_{t})g(v_{t}^{g},w_{t}^{g})\\ w_{t}^{f}&w_{t}^{g}\\ 0&a_{t}\end{bmatrix} \tag{16}\] and \(G_{t}=[1,\ \ 1]^{T}\). We still have explicit model averaging in the measurement equation, i.e., \[H_{t}X_{t}G_{t}=\left[(1-\sigma(a_{t}))f(v_{t}^{f},w_{t}^{f})+\sigma(a_{t})g(v_{t}^{g},w_{t}^{g})\right], \tag{17}\] but \(\mathcal{H}_{t}\) does not depend on \(a_{t}\). Therefore, the matrix products for the Kalman gain computation can now be computed once for each batch. Turning to the variance parameter in the measurement equation (14), assume \(\epsilon_{t}\sim\mathcal{N}_{m_{t}}(0,\nu_{y}^{2}I_{m_{t}})\). To estimate \(\nu_{y}^{2}\), we augment the state vector as follows: \[X_{t}^{(m_{t}+n_{g}+2)\times 2}=\begin{bmatrix}(1-\sigma(a_{t}))f(v_{t}^{f},w_{t}^{f})&\sigma(a_{t})g(v_{t}^{g},w_{t}^{g})\\ w_{t}^{f}&w_{t}^{g}\\ 0&a_{t}\\ 0&b_{t}\end{bmatrix} \tag{18}\] where \(\nu_{y}^{2}=\log(1+e^{b_{t}})\), and \(H_{t}\) in (10) now becomes \([I_{m_{t}},0_{m_{t}\times(n_{g}+2)}]\). We parameterize \(\nu_{y}^{2}\) through a softplus transformation of \(b_{t}\), instead of the usual log transformation, for computational stability.

### Connecting MEnKF-ANN with DL

Recall that our dataset consists of string-valued gene sequences associated with experimentally determined PULs, with the response being the carbohydrate substrates utilized by the said microbe. Since we consider only two categories of PULs, we have a binary classification problem. An LSTM trained with a binary cross-entropy loss is the approximand DL in our case. Suppose \(p\) is the probability of observing a sample of a particular category. Then the trained LSTM produces \(\hat{p}\) for each training instance, along with an embedding of the associated gene sequences. Our MEnKF-ANN approximator uses \(logit(\hat{p})\) as the target response. The LSTM embedding of the gene sequences is fed into one arm of the approximator, while the other arm ingests the Doc2Vec encoding of the gene sequences. Thus, our MEnKF-ANN approximates the probabilities estimated by an LSTM. The convex weight \(\sigma(a)\) ascertains which embedding has more predictive power. Clearly, MEnKF-ANN operates as a model stacker, and the predictive uncertainty interval that it produces, by default, around its target approximand quantifies how well simpler ANNs, fitted without backpropagation, can approximate a deep learner.
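To fix ideas, here is a small sketch of how one ensemble member's scalars \(a_{t}\) and \(b_{t}\) enter the reparametrized construction: the convex averaging in (17) and the softplus map that keeps \(\nu_{y}^{2}\) positive. The arm outputs below are stand-in numbers for \(f(v^{f},w^{f})\) and \(g(v^{g},w^{g})\) already evaluated at the member's weights.

```python
# Model-averaged prediction (17) and softplus measurement variance for one member.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def averaged_prediction(f_pred, g_pred, a_t):
    w = sigmoid(a_t)                           # convex weight on arm g
    return (1.0 - w) * f_pred + w * g_pred     # equation (17)

def measurement_variance(b_t):
    return np.log1p(np.exp(b_t))               # softplus keeps nu_y^2 > 0

y_hat = averaged_prediction(np.array([0.2, -0.4]), np.array([0.1, -0.6]), a_t=0.0)
nu2 = measurement_variance(b_t=-1.0)
```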
To initialize the ensemble in the approximator, we draw the members of the state vector (18) from \(\mathcal{N}_{2(m_{t}+n_{g}+2)}(\mathbf{0},\nu_{x}^{2}I)\), where \(\nu_{x}^{2}\) is a tuning parameter that plays a key role in controlling the spread of the ensemble members, and the dimension of \(I\) matches the dimension of the normal distribution. Following Chen et al. (2018, 2019), we assume the state transition is deterministic, i.e., \(x_{t}=\phi_{t-1}x_{t-1}\), and hence we do not have the variance parameter corresponding to \(\tilde{\eta}\) in the augmented state vector. When we reach the \(t^{th}\) batch of data, for the \(i^{th}\) member in the ensemble (\(i=1,2,...,N\)), we update each element of the augmented state vector \(w_{t}^{f,(i)}\), \(w_{t}^{g,(i)}\), \(a_{t}^{(i)}\), \(b_{t}^{(i)}\) using the updating equation (8), suitably modified to handle the deterministic state transition.

## 5 Simulations

We conducted extensive simulations to assess how well our MEnKF-ANN can approximate an LSTM binary classifier. This simulation exercise aims to demonstrate that our MEnKF-ANN is not only "adequate" in approximating the probabilities produced by the LSTM but can also capture the "true" probabilities that generate the binary labels. We compute the coverage and width of the prediction intervals of the target probabilities in the test set to assess the "adequacy" of the approximator. Then, we compare this coverage and width with those computed directly via an LSTM trained with MC dropout. Admittedly, the prediction intervals obtained from the latter are different from those computed from MEnKF-ANN. However, if the ground truth probabilities are known, an adequate approximator should be able to achieve near-nominal coverage when the approximand is not misspecified. Our simulation strategy mimics the focal dataset and uses the gene sequences associated with the original PULs to generate labels. As mentioned above, we extracted \(\hat{p}\) from the LSTM trained on the original dbCAN-PUL data. We call this LSTM the _true LSTM_. We consider \(\hat{p}\) the true probabilities for synthetic data generation. We then use noisy copies of \(\hat{p}\) to generate synthetic labels in the following way: generate \(logit(\tilde{p}_{i}^{(j)})=logit(\hat{p}_{i})+\epsilon_{i}^{*(j)},\ i=1,2,...,m,\ j=1,2,...,J\), where \(J\) is the number of simulated datasets, \(m\) is the number of data points in each simulated set, and the perturbations \(\epsilon_{i}^{*(j)}\) are iid Normal\((0,0.01^{2})\). We generate synthetic labels \(\tilde{Y}\) by thresholding \(\tilde{p}_{i}^{(j)}\) at 0.5, i.e., \(\tilde{Y}_{i}^{(j)}=I(\tilde{p}_{i}^{(j)}>0.5)\). Then the simulated dataset consists of \(D^{(j)}=\{\mathbf{F},\tilde{Y}^{(j)}\},\ j=1,2,...,J\), where \(\mathbf{F}\) is the set of original gene sequences from dbCAN-PUL. Now, on each \(D^{(j)}\), we train a second LSTM (with two dropout layers) and extract \(\tilde{\tilde{p}}_{i}^{(j)},\ i=1,2,...,m\), and the embedding of the gene sequences. We call these LSTMs, trained on \(D^{(j)}\), the _fitted LSTMs_. Note that the embeddings from the _fitted LSTMs_ could potentially be different from those obtained from the _true LSTM_. We denote the embedding from the _fitted LSTMs_ by \(v_{i}^{(j),f},\ j=1,2,...,J\). Our MEnKF-ANN is constructed to approximate the _fitted LSTMs_. To that end, the approximator uses \(logit(\tilde{\tilde{p}}_{i}^{(j)})\) as the target response.
\(v_{i}^{(j),f}\) are supplied as features to one arm of the ANN; the other arm ingests \(v_{i}^{(j),g}\) - the Doc2Vec embedding of \(\mathbf{F}\). Once the MEnKF-ANN is trained, we use a hold-out set in each simulated dataset to generate predictive probabilities from the forecast distribution for each member in the KF ensemble and compute the empirical 95% predictive interval at the \(logit^{-1}\) scale. To measure the adequacy of MEnKF-ANN, we compute the proportion of times the foregoing predictive interval contains \(\hat{p}\) in the held-out test data. We expect this coverage to be close to the nominal 95%, and the average width of these intervals should not be greater than 0.5. Additionally, observe that the data-generating model uses the LSTM embedding of \(\mathbf{F}\); hence, using the Doc2Vec embedding as input is a misspecification. Consequently, we expect the average model weight associated with \(v^{f}\) to be larger than that of \(v^{g}\). Table 1 shows the performance of MEnKF-ANN in terms of coverage, the average width of the prediction intervals, and the average LSTM weight under two specifications of ensemble size (\(N\)) and initial ensemble variance (\(\nu_{x}^{2}\)). To compare these results, we report in Table 2 the coverage and average width of the prediction intervals when both dropout layers are activated in the _fitted LSTM_ during the prediction phase. Observe how MEnKF-ANN recovered the _true probabilities_ even better than the correctly specified LSTM with dropout. The average interval widths obtained from MEnKF-ANN are also lower than those from the _fitted LSTM_. These results demonstrate the adequacy of MEnKF-ANN in approximating the target DL. Additionally, we observe that the average LSTM model weight is \(\approx 1\), indicating the ability of our approximator to identify the correctly specified data-generating model. Figure 4 shows the histogram of the predictive samples obtained from the ensemble members for eight test samples in a randomly chosen replicate. The red vertical line denotes the true logit, and the green vertical lines show the fences of the 95% prediction interval. Now, to demonstrate a situation where MEnKF-ANN is "inadequate", we supply the approximator with a completely different feature set representation. Instead of using the LSTM embedding \(v^{f}\), we use the word2vec embedding of each gene in the predictor string and take the arithmetic average of these word2vec embeddings to represent the entire sequence. We denote this feature set by \(\tilde{v}^{f}\) and then train the MEnKF-ANN using \(\tilde{v}^{f}\) and \(v^{g}\) as the features and \(logit(\tilde{\tilde{p}}^{(j)})\) as the target response. Evidently, MEnKF-ANN is now highly misspecified. Table 3 reports the coverage and average width of the prediction intervals obtained from this model. The huge width of these intervals essentially invalidates the point predictions. Such a large width indicates that MEnKF-ANN may fail to approximate the target DL. Therefore, we caution against using the coverage and width metrics to assess the "adequacy" of the _fitted LSTM_ itself.
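The bookkeeping of this simulation design can be summarized in a short sketch: perturb true logits into synthetic labels, then score a matrix of ensemble predictive logits by empirical 95% coverage and average width on the probability scale. Here `pred_logits` stands in for draws from the trained MEnKF-ANN forecast distribution, and the stand-in numbers are illustrative only.

```python
# Synthetic-label generation and coverage/width scoring for the simulation study.
import numpy as np

rng = np.random.default_rng(2)
logit = lambda p: np.log(p / (1 - p))
expit = lambda z: 1 / (1 + np.exp(-z))

p_true = rng.uniform(0.05, 0.95, size=74)          # stand-in for the LSTM's p-hat
y_synth = (expit(logit(p_true) + rng.normal(0, 0.01, size=74)) > 0.5).astype(int)

def coverage_and_width(pred_logits, p_true):
    """pred_logits: (N_ens, n_test) ensemble draws on the logit scale."""
    probs = expit(pred_logits)                     # back-transform to probabilities
    lo, hi = np.quantile(probs, [0.025, 0.975], axis=0)
    return np.mean((p_true >= lo) & (p_true <= hi)), np.mean(hi - lo)

cov, width = coverage_and_width(rng.normal(0, 1, size=(216, 8)), p_true[:8])
```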
\begin{table} \begin{tabular}{l l l l l} \hline \(N\) & \(\nu_{x}^{2}\) & Coverage & Width & LSTM weight \\ \hline 216 & 16 & 90.25\% & 0.33 & 0.9997 \\ 216 & 32 & 89.25\% & 0.32 & 0.9999 \\ \hline \end{tabular} \end{table} Table 1: Performance of MEnKF-ANN using LSTM embedding and Doc2Vec

\begin{table} \begin{tabular}{l l l l} \hline Rate & Reps & Coverage & Width \\ \hline 0.5 & 50 & 81.25\% & 0.53 \\ 0.5 & 200 & 84.50\% & 0.56 \\ \hline \end{tabular} \end{table} Table 2: Coverage and width of prediction intervals obtained from the _fitted LSTM_ with two dropout layers

Figure 4: True logits superimposed on predicted logits from MEnKF-ANN using LSTM and Doc2Vec embeddings

\begin{table} \begin{tabular}{l l l l l} \hline \(N\) & \(\nu_{x}^{2}\) & Coverage & Width & Word2Vec weight \\ \hline 216 & 16 & 96.25\% & 0.83 & 0.9155 \\ 216 & 32 & 94.25\% & 0.84 & 0.9787 \\ \hline \end{tabular} \end{table} Table 3: Performance of MEnKF-ANN using Word2Vec and Doc2Vec

## 6 Application

Recall that our focal dataset consists of \(n=74\) samples belonging to Xylan and Pectin. However, training an LSTM on such a small sample size would require aggressive regularization, even with this reduced label space. Therefore, we draw on an extensive collection of unlabelled data containing gene sequences associated with CAZyme gene clusters (CGCs) computationally predicted from genomic data [20, 17]. Although this unlabelled dataset contains approximately 250K CGC gene sequences, unlike experimentally characterized PULs, these sequences do not have known carbohydrate substrate information and hence cannot be directly used for classification purposes. We, therefore, use this unlabelled dataset to learn the word2vec embeddings of each gene appearing in it. These embeddings are then used to initialize the embedding layer of the target LSTM classifier. Turning to the labeled dataset, instead of performing full cross-validation, we resort to a subsampling procedure [13]. We take a subsample of sixty-six instances for training and hold out eight instances for testing purposes. The subsample size (\(b\)) is chosen such that \(b(n)/n\approx 8\sqrt{n}/n\to 0\) as \(n\rightarrow\infty\). Although the subsampling theory requires generating \(\binom{n}{b}\) replicates, the computational cost of generating \(\approx 10^{10}\) replicates is, in our case, prohibitive. Instead, we generate 50 independently subsampled replicates comprising training and testing sets of sizes 66 and 8, respectively. In each replication, an LSTM with the same architecture is trained on the foregoing 66 training instances. Under this scheme, the probability that the \(i^{th}\) instance in our dataset appears at least once in a test set is \(\approx 99.6\%\). The LSTM-estimated probabilities of observing a _Pectin_ substrate are extracted from each replicate. These probabilities are logit transformed and used as the target response for our MEnKF-ANN approximator. We feed the LSTM embedding and the Doc2Vec embedding of the gene sequences into the two arms of the approximator, along with the foregoing logit-transformed estimated probabilities. We then generate predictions on the held-out test data points in each replicate. Finally, we compare the LSTM-predicted probabilities with those generated by MEnKF-ANN.
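As a quick check of the \(\approx 99.6\%\) figure quoted above, the chance that a given instance appears in at least one of the 50 independent test sets of size 8 (out of \(n=74\)) is:

```python
# Inclusion probability under 50 independent test draws of size 8 from n = 74.
p_at_least_once = 1 - (1 - 8 / 74) ** 50
print(round(p_at_least_once, 4))   # 0.9967
```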
The average MAE and the proportion of times a 95% prediction interval contains the LSTM-generated predictions in the held-out data set, under two different MEnKF-ANN hyperparameter choices, are shown in Table 4, indicating that our approximator can adequately be used to generate the predictions. We do not report the LSTM weights estimated by MEnKF-ANN because, as we observed in the simulation (Table 1), the approximator overwhelmingly prefers the LSTM embeddings. Figure 5 shows the scatter plot of MEnKF-ANN-predicted and LSTM-predicted probabilities for the held-out data across the 50 replicates. Figure 6 shows the boxplots associated with the MEnKF-ANN predictions for the same set of test samples for which the LSTM-generated prediction boxplots were shown in the left column of Figure 3. Evidently, MEnKF-ANN can adequately approximate the target LSTM. Turning to the stability of the prediction intervals, Table 5 shows the average width of the 95% prediction intervals obtained under two configurations of the LSTM and their respective MEnKF-ANN approximators. LSTM\({}_{1}\) has two dropout layers (one in the LSTM layer and one before the final prediction layer) with a 50% dropout rate and 200 replicates. LSTM\({}_{2}\) has one dropout layer (in the LSTM layer) with a 50% dropout rate and 200 replicates. MEnKF-ANN\({}_{11}\) approximates LSTM\({}_{1}\) with 216 ensemble members and \(\nu_{x}^{2}=16\); MEnKF-ANN\({}_{12}\) also approximates LSTM\({}_{1}\), but with 216 ensemble members and \(\nu_{x}^{2}=32\). Similarly, MEnKF-ANN\({}_{21}\) and MEnKF-ANN\({}_{22}\) approximate LSTM\({}_{2}\) with 216 ensemble members and \(\nu_{x}^{2}=16\) and \(\nu_{x}^{2}=32\), respectively. Observe that the variation in the average width between LSTM\({}_{1}\) and LSTM\({}_{2}\) is considerably higher than the variation between MEnKF-ANN\({}_{11}\) and MEnKF-ANN\({}_{21}\) or between MEnKF-ANN\({}_{12}\) and MEnKF-ANN\({}_{22}\). This indicates that the approximator produces more stable prediction intervals than obtaining predictions by activating the dropout layer during prediction. Finally, we demonstrate how MEnKF-ANN can naturally handle two predictive models with potentially different feature sets. This situation is relevant because, owing to the small sample size, we can train a shallow learner (an ANN with backpropagation, for instance) that takes the Doc2Vec representation of gene sequences as predictors to estimate the probabilities of observing the _Pectin_ substrate. Now, we can average the probabilities estimated by the LSTM (\(\hat{p}_{LSTM}\), say) and the ANN (\(\hat{p}_{ANN}\), say) to produce a model-averaged estimated probability of observing _Pectin_ (\(\hat{\bar{p}}\), say). However, how would we attach uncertainty to \(\hat{\bar{p}}\)? The multi-arm construction of MEnKF-ANN provides a natural solution in this situation. We supply, as described in the foregoing sections, LSTM embeddings and Doc2Vec embeddings to the two arms of MEnKF-ANN but use \(logit(\hat{\bar{p}})\) as the target response here. Thus, MEnKF-ANN is now approximating the average output of two primary models. These primary models are trained on the same response variable but use two different representations of the features. Table 6 shows the performance of MEnKF-ANN in this situation for some combinations of \(N\) and \(\nu_{x}^{2}\). The coverage is measured with respect to \(\hat{\bar{p}}\) on the test sets.
Although the average width and MAE are larger than those reported in Table 4, we observe that the LSTM weights are \(\approx 0.5\), which is what we would expect because MEnKF-ANN is _seeing_ equally weighted outputs from the LSTM and the ANN.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline \(N\) & \(\nu_{x}^{2}\) & Coverage & Width & MAE & CPU Time \\ \hline 216 & 16 & 90.50\% & 0.1024 & 0.0200 & 2.39 mins \\ 216 & 32 & 85.50\% & 0.0850 & 0.0161 & 3.67 mins \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of MEnKF-ANN using LSTM embedding and Doc2Vec for dbCAN-PUL data

Figure 5: Scatterplot of first-LSTM-predicted probabilities vs. EnKF-predicted probabilities

\begin{table} \begin{tabular}{l l l l} \hline \hline Target model & Average Width & Approximator & Average Width \\ \hline LSTM\({}_{1}\) & 0.492 & MEnKF-ANN\({}_{11}\) & 0.102 \\ & & MEnKF-ANN\({}_{12}\) & 0.085 \\ LSTM\({}_{2}\) & 0.371 & MEnKF-ANN\({}_{21}\) & 0.119 \\ & & MEnKF-ANN\({}_{22}\) & 0.108 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of the average width of prediction intervals from LSTM + MC dropout and the MEnKF-ANN approximator for each LSTM

Table 6: Performance of MEnKF-ANN trained on the averaged probability of LSTM and shallow ANN

## 7 Discussion

State-augmented Kalman Filters and their variants provide a gradient-free method that can be extended to approximate popular neural-network-based deep learners for regression and classification tasks. In this article, we have developed a Matrix Ensemble Kalman Filter-based multi-arm neural network to approximate an LSTM. We have demonstrated that this technique adequately approximates the target DL in terms of coverage and the average width of the prediction interval. We have also demonstrated how the in-built model averaging capability can be leveraged to attach uncertainty to the averaged predictions generated by two different models. Our simulations suggest that, by using an explicit model averaging construction, our approximator can also identify its target approximand. We have also observed that the prediction intervals generated by the approximator are less sensitive to the location of dropout layers and hence provide more stable prediction intervals than obtaining predictions by activating the dropout layers within the DL itself. Admittedly, our procedure requires an additional round of training, but its fast computation time (see Table 4), along with its ability to emulate the approximand, adequately compensates for that. We have also deployed our approximator on a highly accessed database, dbCAN-PUL, to attach uncertainty to the predicted probabilities produced by (a) the primary LSTM model and (b) an ensemble of LSTM and ANN models. The primary LSTM and ANN models were trained to classify two carbohydrate substrates using the gene sequences characterized by the PULs of the gut microbiome. We anticipate this technique will be helpful to domain experts in assessing the reliability of predictions generated by deep learners or an ensemble of learners. In the future, we propose to expand our model to handle more than two classes.
This would enable us to utilize the information in the dbCAN-PUL database better. Another possible direction is to develop an analog of MEnKF-ANN that can directly handle binary data. Although the KF technique crucially requires the Gaussianity assumption, Fasano et al. (2021) recently developed an extension of the KF method that can handle binary responses. We are actively investigating how this technique can be adapted to our MEnKF-ANN framework.

Figure 6: Boxplots showing the MEnKF-ANN predictions superimposed with the ground truth values for heavy and low dropout

## 8 Competing interests

No competing interest is declared.

## 9 Author contributions statement

V.P., Y.Z., and S.G. conceived the models and experiment(s), and V.P. conducted the experiment(s). V.P. and S.G. analyzed the results. Y.Yan and Y.Yin contributed to the training data. V.P. drafted the manuscript. All authors reviewed the manuscript. Y.Yin secured the funding.

## 10 Acknowledgments

The authors thank the anonymous reviewers for their valuable suggestions. This work is supported in part by funds from the National Institute of Health (NIH: R01GM140370, R21AI171952) and the National Science Foundation (NSF: CCF2007418, DBI-1933521). In addition, we thank the lab members for their helpful discussions. This work was partially completed utilizing the Holland Computing Center of the University of Nebraska-Lincoln, which receives support from the Nebraska Research Initiative.
2307.14650
Spatial Upsampling of Head-Related Transfer Functions Using a Physics-Informed Neural Network
Head-related transfer functions (HRTFs) capture the information that a person uses to localize sound sources in space, and thus are crucial for creating personalized virtual acoustic experiences. However, practical HRTF measurement systems may only measure a person's HRTFs sparsely, and this necessitates HRTF upsampling. This paper proposes a physics-informed neural network (PINN) method for HRTF upsampling. The PINN exploits the Helmholtz equation, the governing equation of acoustic wave propagation, for regularizing the upsampling process. This helps the generation of physically valid upsamplings which generalize beyond the measured HRTFs. Furthermore, the size (width and depth) of the PINN is set according to the Helmholtz equation and its solutions, the spherical harmonics (SHs). This gives the PINN an appropriate level of expressive power, and thus it does not suffer from the over-fitting problem. Since the PINN is designed independently of any specific HRTF dataset, it offers more generalizability compared to purely data-driven methods. Numerical experiments confirm the better performance of the PINN method for HRTF upsampling in both interpolation and extrapolation scenarios in comparison with the SH method and the HRTF field method.
Fei Ma, Thushara D. Abhayapala, Prasanga N. Samarasinghe, Xingyu Chen
2023-07-27T06:55:10Z
http://arxiv.org/abs/2307.14650v2
# Physics Informed Neural Network for Head-Related Transfer Function Upsampling ###### Abstract Head-related transfer functions (HRTFs) capture the spatial and spectral features that a person uses to localize sound sources in space, and thus are vital for creating authentic virtual acoustic experiences. However, practical HRTF measurement systems can only provide an incomplete measurement of a person's HRTFs, and this necessitates HRTF upsampling. This paper proposes a physics-informed neural network (PINN) method for HRTF upsampling. Unlike other upsampling methods which are based on the measured HRTFs only, the PINN method exploits the Helmholtz equation as additional information for constraining the upsampling process. This helps the PINN method to generate physically valid upsamplings which generalize beyond the measured HRTFs. Furthermore, the width and the depth of the PINN are set according to the dimensionality of HRTFs under spherical harmonic (SH) decomposition and the Helmholtz equation. This gives the PINN an appropriate level of expressiveness, so it does not suffer from under-fitting and over-fitting problems. Numerical experiments confirm the superior performance of the PINN method for HRTF upsampling in both interpolation and extrapolation scenarios over several datasets in comparison with the SH methods. Head-related transfer function, physics-informed neural network, spherical harmonics, spatial audio, virtual acoustics. ## I Introduction Head-related transfer functions (HRTFs) denote the free-field acoustic transfer functions between a point source and a position inside of a person's ear [1]. HRTFs characterize the filtering effect of a person's torso, head, and ears with respect to the direction of sound [1], and contain the spatial and spectral features that a person uses to localize sound sources in space. Spatial audio and virtual acoustic systems rely on the knowledge of HRTFs to reproduce artificial acoustic experiences [2]. However, the dependence of HRTFs on a person's anatomy makes HRTFs highly individualized, and thus accurate measurements of HRTFs over a large number of directions are desired for creating authentic acoustic experiences [1]. Nonetheless, the complete measurement of HRTFs is both time-consuming and expensive, which holds most people back from having their HRTFs measured. Practical HRTF measurement systems usually have to measure HRTFs over a limited number of directions due to the inconvenience of arranging loudspeakers over a whole sphere [1] or due to the time constraint on the measurement process, resulting in an incomplete HRTF dataset. The incompleteness of HRTF datasets motivates researchers to upsample them. HRTF upsampling consists of two scenarios: interpolation and extrapolation. (Note that we focus on direction-related HRTF upsampling. Distance-related HRTF upsampling [3, 4, 5, 6] is not covered by this paper.) For the interpolation scenario, the HRTFs are measured over a limited number of directions due to time constraints, and the aim is to estimate the unknown HRTFs whose directions are between those of the measured HRTFs. Early works on interpolation are mainly based on the expansion of HRTFs on linear functions, such as the spherical harmonics (SHs) [7, 8, 9], the principal components [10, 11, 12], the spline functions [13], and the wavelet functions [14].
Recent works on HRTF interpolation, on the other hand, are mainly based on non-linear modeling with neural networks (NNs) such as the auto-encoder [15], the generative adversarial networks [16], and the feature-wise linear modulation [17]. For the extrapolation scenario, HRTFs are measured over a limited polar angle range due to the inconvenience of arranging loudspeakers deep below and high above a person, and the aim is to estimate the unknown HRTFs beyond the range. The missing information over a range makes the extrapolation much more challenging than the interpolation. To date, there are only a few works on HRTF extrapolation, and the majority of them are based on the SH decomposition. Zhang _et al._ developed an iterative algorithm which successively fills and estimates the unknown HRTFs [18, 19], and successfully recovered a low-order HRTF over a full sphere with one quarter of the data missing. Zotkin _et al._ proposed a regularized least-squares (LS) fit method which estimates the unknown HRTFs at the expense of reduced accuracy for representing the measured HRTFs [20]. Ahrens _et al._ proposed a non-regularized LS fit method which estimates the unknown HRTFs based on a low-order LS fit to the measured HRTFs and estimations of the unknown HRTFs [21]. One problem shared by most of the above-mentioned HRTF upsampling methods is that they estimate the unknown HRTFs based on the measured HRTFs only. Their estimations are essentially transformations of the information that is contained in the measured HRTFs. They have not effectively used additional information to further improve the accuracy of their estimations. This fact prompts us to take a different approach to HRTF upsampling: the physics-informed neural network (PINN). The PINN is a special kind of NN which incorporates physical knowledge, i.e., the governing partial differential equation (PDE) of a physical phenomenon, into its architecture [23, 24, 25, 26]. The physics knowledge is additional information that can help a PINN to model the physical phenomenon besides the measured physical quantities. Since the seminal works of Raissi and his colleagues [23, 24], PINN has been successfully applied to many areas such as earthquake modeling [27, 28], propeller noise prediction [29], room acoustics [30], and sound field estimation [31]. The HRTFs can be regarded as the sound field around the human head, and sound fields in space obey the Helmholtz equation, the governing equation of acoustic wave propagation [32]. This inspires us to develop a PINN method for HRTF upsampling. We inform the training and the design of the PINN with physics knowledge from two aspects. First, we use a modified form of the Helmholtz equation as part of the loss function. This helps the PINN to generate physically valid upsamplings which generalize beyond the training data, and relieves the burden of balancing the PDE loss and the data loss with additional parameters. Second, we set the size of the PINN according to SH decomposition and the Helmholtz equation. Specifically, we set the width of the PINN as half of the dimensionality of HRTFs under SH decomposition [32, 33] and the depth as three (the order of the Helmholtz equation plus one). This sets the proposed PINN method apart from PINN methods in other works which suffer from under-fitting or over-fitting problems due to inappropriate design of the network [34, 35, 36].
The superior performance of the PINN method for upsampling HRTFs is confirmed by numerical experiments on several datasets, and is compared with that of the SH method, the most widely used HRTF upsampling framework. The rest of this paper is organized as follows. We introduce the problem in Sec. II. We review the SH method in Sec. III and propose the PINN method in Sec. IV. The performance of the PINN method and the SH method are compared by extrapolation and interpolation experiments in Sec. V. Section VI concludes this paper and points out future directions of improvement. ## II Problem formulation We present the layout of a typical HRTF measurement system in Fig. 1, where we set up a Cartesian coordinate system and a spherical coordinate system with respect to the center of a person's head, point \(O\). We denote the Cartesian coordinates and the spherical coordinates as \((x,y,z)\) and \((r,\theta,\phi)\), respectively. The system measures the HRTFs between the loudspeakers which are placed on a sphere \(\mathbb{S}_{2}\) and the microphones which are placed inside of the person's ears. We denote HRTFs as \(P(\omega,r,\theta,\phi)\) in spherical coordinates or as \(P(\omega,x,y,z)\) in Cartesian coordinates [22], where \(\omega=2\pi f\) is the angular frequency and \(f\) is the frequency. Hereafter, we evaluate HRTFs at a single frequency and on a single sphere, and thus we skip the frequency \(\omega\) and the sphere radius \(r\) when representing acoustic quantities for notation simplicity. Due to the obstruction of the person's body and the inconvenience of arranging the loudspeakers high above the person, the measurement system may only be able to measure the HRTFs over a polar range \((\theta_{\text{Low}},\theta_{\text{High}})\), where \(0<\theta_{\text{Low}}<\theta_{\text{High}}<\pi\). Due to the time constraint, the measurement system may only be able to measure the HRTFs over a limited number of directions. Both of these scenarios will result in an incomplete HRTF dataset. In this paper, we aim to upsample an incomplete HRTF dataset \(\{P(\theta_{q},\phi_{q})\}_{q=1}^{Q}\) or equivalently \(\{P(x_{q},y_{q},z_{q})\}_{q=1}^{Q}\) into a full or dense dataset. ## III Spherical harmonic methods In this section, we first briefly present the SH decomposition of HRTFs, and then review the regularized SH method for HRTF upsampling [20]. We express HRTFs in spherical coordinates for the ease of SH decomposition. HRTFs can be decomposed onto SHs and their coefficients as [37] \[\mathbf{P}\approx\mathbf{Y}\mathbf{A}, \tag{1}\] where \(\mathbf{P}=[P(\theta_{1},\phi_{1}),P(\theta_{2},\phi_{2}),...,P(\theta_{Q},\phi_{Q})]^{\intercal}\) denotes the measured HRTFs at \((\theta_{q},\phi_{q})_{q=1}^{Q}\) (\((\cdot)^{\intercal}\) is the transpose operation), \(\mathbf{A}=[A_{0,0},A_{1,-1},A_{1,0},A_{1,1},...,A_{U,U}]^{\intercal}\) denotes the SH coefficients, and \[\mathbf{Y}=\left[\begin{array}{cccc}Y_{0}^{0}(\theta_{1},\phi_{1})&Y_{1}^{-1}(\theta_{1},\phi_{1})&...&Y_{U}^{U}(\theta_{1},\phi_{1})\\ Y_{0}^{0}(\theta_{2},\phi_{2})&Y_{1}^{-1}(\theta_{2},\phi_{2})&...&Y_{U}^{U}(\theta_{2},\phi_{2})\\...&...&...&...\\ Y_{0}^{0}(\theta_{Q},\phi_{Q})&Y_{1}^{-1}(\theta_{Q},\phi_{Q})&...&Y_{U}^{U}(\theta_{Q},\phi_{Q})\end{array}\right], \tag{2}\] denotes a \(Q\times(U+1)^{2}\) matrix whose entries are the order \(u\) and degree \(v\) SHs \(Y_{u}^{v}(\cdot,\cdot)\) evaluated at \((\theta_{q},\phi_{q})_{q=1}^{Q}\).
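To make (1) and (2) concrete, the following is a minimal Python sketch (our illustration, not the authors' code) that assembles the SH matrix \(\mathbf{Y}\) and fits the coefficients \(\mathbf{A}\) by a least-squares solve; the optional `gamma` term previews the Tikhonov-type regularization of the SH method reviewed below in (6). It assumes `scipy.special.sph_harm` is available, and the grid, order, and stand-in data are illustrative values only.

```python
# Minimal sketch (illustrative): assemble the SH matrix Y of (2) and fit the
# coefficients A of (1) by a (optionally regularized) least-squares solve.
# Note: scipy's convention is sph_harm(v, u, azimuth, polar), so the paper's
# polar angle theta goes in the last slot and the azimuth phi in the third.
import numpy as np
from scipy.special import sph_harm

def sh_matrix(theta, phi, U):
    """Columns are Y_0^0, Y_1^{-1}, Y_1^0, Y_1^1, ..., Y_U^U at (theta_q, phi_q)."""
    cols = [sph_harm(v, u, phi, theta) for u in range(U + 1) for v in range(-u, u + 1)]
    return np.stack(cols, axis=1)                      # shape (Q, (U+1)^2)

def fit_sh_coefficients(P, theta, phi, U, gamma=0.0):
    """Solve (Y^H Y + gamma * H) A = Y^H P; gamma = 0 gives the plain LS fit."""
    Y = sh_matrix(theta, phi, U)
    h = np.array([1.0 + u * (u + 1) for u in range(U + 1) for _ in range(2 * u + 1)])
    A = np.linalg.solve(Y.conj().T @ Y + gamma * np.diag(h), Y.conj().T @ P)
    return A, Y

# Toy usage on a random direction grid with stand-in "measured HRTFs".
rng = np.random.default_rng(0)
Q, U = 400, 8
theta, phi = np.arccos(rng.uniform(-1, 1, Q)), rng.uniform(0, 2 * np.pi, Q)
P = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)
A, Y = fit_sh_coefficients(P, theta, phi, U, gamma=0.02)
print(np.linalg.norm(Y @ A - P) / np.linalg.norm(P))   # relative fit residual
```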
In (1) and (2), \(U\) is the dimensionality of HRTFs [32, 33] under SH decomposition and is normally chosen as \[U=\lceil 2\pi fr_{\text{h}}/c\rceil, \tag{3}\] where \(\lceil\cdot\rceil\) is the ceiling operation, \(c=343\) m/s is the speed of sound propagation, and \[r_{\text{h}}=\begin{cases}0.2\ \text{m},&f\leq 3\text{kHz},\\ 0.09\ \text{m},&f>3\text{kHz},\end{cases} \tag{4}\] is the radius of the human head (including the head and torso scattering effect) [32, 33]. Fig. 1: Layout of a typical HRTF measurement system, which measures the HRTFs between the loudspeakers which are placed on a sphere \(\mathbb{S}_{2}\) and the microphones which are placed inside of the person's ears. In this paper, for simplicity, we choose \[U=\lceil 2\pi fr_{\rm h}/c\rceil\approx\begin{cases}\lceil f/250\rceil,&f\leq 3\rm kHz,\\ \lceil f/500\rceil,&f>3\rm kHz,\end{cases} \tag{5}\] and show the dimensionality \(U\) of HRTFs as a function of frequency in Fig. 2 for reference. In Fig. 2, we choose \(U=\max\{\lceil f/250\rceil,\lceil f/500\rceil\}\) for \(3\rm\ kHz<f<6\rm\ kHz\) [32]. Note that the sizes of human heads vary, and thus (3), (5) and Fig. 2 should be regarded as a rule of thumb and are not supposed to be followed exactly. The regularized SH method first estimates the SH coefficients through [20] \[\hat{\bf A}=({\bf Y}^{\sf T}{\bf Y}+\gamma{\bf H})^{-1}{\bf Y}^{\sf T}{\bf P}, \tag{6}\] where \({\bf H}\) is a \((U+1)^{2}\times(U+1)^{2}\) diagonal matrix whose diagonal entries are \(h_{l,l}=1+u(u+1)\) (\(u\) is the order of the corresponding SH), and \(\gamma\) is the regularization parameter. To accurately estimate the SH coefficients up to order \(U\), we need the number of measured HRTFs to be sufficiently large, \(Q>(U+1)^{2}\) or \(U<\sqrt{Q}-1\) [32, 33]. The regularized SH method then estimates the HRTF from an arbitrary direction \((\theta_{e},\phi_{e})\) as \[\hat{P}_{\rm SH}(\theta_{e},\phi_{e}){\approx}\!\!\sum_{u=0}^{U}\sum_{v=-u}^{u}\hat{A}_{u,v}Y_{u}^{v}(\theta_{e},\phi_{e}). \tag{7}\] The regularization in (6) prevents the estimated HRTF (7) from taking exceptionally large values by constraining the amplitudes of the estimated SH coefficients \(\hat{\bf A}=[\hat{A}_{0,0},\hat{A}_{1,-1},\hat{A}_{1,0},\hat{A}_{1,1},...,\hat{A}_{U,U}]^{\sf T}\) [20], especially the high-order coefficients. ## IV PINN method In this section, we first briefly introduce the PINN, and then propose a PINN method for HRTF upsampling. We express HRTFs in Cartesian coordinates to simplify the calculation of the Laplacian by the PINN. We normally build a PINN as a multi-layer fully connected feed-forward neural network [23, 24, 25, 26]. The functionality of one layer is \[\mathfrak{P}({\bf x})=\sigma\left({\bf x}^{T}{\bf w}+b\right), \tag{8}\] where \({\bf x}\) is the input variable vector, \({\bf w}\) is the weight vector, \(b\) is the bias and \(\sigma\) is the activation function. The overall functionality of the PINN is the composition of \(L\) layers \[\Phi({\bf x};\zeta)=\left(\mathfrak{P}_{L}\circ\cdots\circ\mathfrak{P}_{2}\circ\mathfrak{P}_{1}\right)({\bf x}), \tag{9}\] where \(\zeta\) represents the set of all trainable parameters.
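As a quick reference for (8) and (9), a minimal numpy sketch of the layer map and its \(L\)-fold composition might look as follows (our illustration; taking the final layer to be linear, as is usual for regression outputs, is an assumption):

```python
# Minimal numpy sketch (illustrative) of the layer map (8) and the L-fold
# composition (9) with tanh activations and a linear output layer.
import numpy as np

def layer(x, W, b):
    return np.tanh(x @ W + b)                # one layer: sigma(x^T w + b)

def pinn_forward(x, params):
    """params = [(W_1, b_1), ..., (W_out, b_out)]; zeta is this list of arrays."""
    for W, b in params[:-1]:
        x = layer(x, W, b)
    W_out, b_out = params[-1]
    return x @ W_out + b_out                 # one scalar HRTF estimate per input

rng = np.random.default_rng(0)
sizes = [3, 8, 8, 8, 1]                      # Cartesian inputs, L = 3, W = 8
params = [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
print(pinn_forward(rng.standard_normal((5, 3)), params).shape)   # (5, 1)
```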
We adjust the parameters \(\zeta\) by minimizing a cost function \[\mathfrak{L}=\frac{1}{Q}\sum_{q=1}^{Q}\left(p_{q}-\Phi({\bf x}_{q};\zeta)\right)^{2}+\lambda\mathfrak{L}_{\rm PDE}({\bf x};\zeta), \tag{10}\] where \(\{{\bf x}_{q},p_{q}\}_{q=1}^{Q}\) are input-output training data pairs which are obtained by testing and measuring a physical system, \(\mathfrak{L}_{\rm PDE}({\bf x};\zeta)\) corresponds to the residual of the governing PDE, and \(\lambda\) is a regularization parameter. For HRTF upsampling, we design a PINN whose structure is shown in Fig. 3, where there are \(L\) hidden layers with \(W\) neurons on each hidden layer, with the inputs being the Cartesian coordinates, the activation function being \(\tanh\), and the output being the HRTF estimation \(\hat{P}_{\rm PI}(x,y,z)\). We adjust the trainable parameters by minimizing the following cost function \[\mathfrak{L} =\underbrace{\frac{1}{Q}\sum_{q=1}^{Q}\|P(x_{q},y_{q},z_{q})-\hat{P}_{\rm PI}(x_{q},y_{q},z_{q})\|_{2}^{2}}_{\mathfrak{L}_{\rm data}}\] \[+\underbrace{\frac{1}{D}\sum_{d=1}^{D}\|\frac{1}{(\omega/c)^{2}}\nabla^{2}\hat{P}_{\rm PI}(x_{d},y_{d},z_{d})+\hat{P}_{\rm PI}(x_{d},y_{d},z_{d})\|_{2}^{2}}_{\mathfrak{L}_{\rm PDE}}, \tag{11}\] where \(\|\cdot\|_{2}\) is the 2-norm, \(\nabla^{2}\equiv\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}}\) is the Laplacian operator [37], \(\{x_{q},y_{q},z_{q}\}_{q=1}^{Q}\) are the Cartesian coordinates of the measured HRTFs, \(\{x_{d},y_{d},z_{d}\}_{d=1}^{D}\) is a super set of \(\{(x_{q},y_{q},z_{q})\}_{q=1}^{Q}\), and \(\mathfrak{L}_{\rm data}\) and \(\mathfrak{L}_{\rm PDE}\) denote the data loss and the PDE loss, respectively. Fig. 3: Structure of the PINN: the inputs are the Cartesian coordinates, the outputs are the HRTF estimations \(\hat{P}_{\rm PI}\), there are \(L\) hidden layers with \(W\) neurons on each hidden layer, and we calculate the data loss and the PDE loss with respect to the HRTF estimations \(\hat{P}_{\rm PI}\) and their Laplacian \(\nabla^{2}\). Fig. 2: Dimensionality \(U\) of HRTFs under SH decomposition and the PINN width \(W\) as functions of frequency \(f\). Once trained, the PINN can estimate the HRTF from an arbitrary direction \((\theta_{e},\phi_{e})\) as \(\hat{P}_{\text{PI}}(x_{e},y_{e},z_{e})\). Note that we regard the HRTFs as the sound field around the human head, and thus the Cartesian coordinates in (11) correspond to \((r_{\text{h}},\theta,\phi)\). Below we explain the design and training of the PINN in detail: 1) **Loss**: The loss function (11) consists of two parts, the data loss \(\mathfrak{L}_{\text{data}}\) and the PDE loss \(\mathfrak{L}_{\text{PDE}}\). The data loss \(\mathfrak{L}_{\text{data}}\) makes the PINN output fit the measured HRTFs, or \(\hat{P}_{\text{PI}}(x_{q},y_{q},z_{q})\approx P(x_{q},y_{q},z_{q})\) for \(q\in[1,Q]\). The PDE loss \(\mathfrak{L}_{\text{PDE}}\) regularizes the PINN output to conform with the Helmholtz equation, the governing equation of acoustic wave propagation, at \(\{(x_{d},y_{d},z_{d})\}_{d=1}^{D}\). This helps the PINN to generate physically valid output at and beyond the training data. The regularization in (6), on the other hand, does not necessarily make the SH method generate physically valid output, as shown in the experiment section. 2) **Helmholtz equation:** As shown in (10), PINNs pose multi-objective optimization problems, and one would normally balance the different loss terms with additional parameters such as \(\lambda\).
Although tuning the additional parameter may improve the performance of the PINN, we decide not to do so because the tuning process can be tedious [40]. Instead, we use a special form of the Helmholtz equation as the PDE loss in (11), where the \((\omega/c)^{2}\) term is used as the denominator for the Laplacian \(\nabla^{2}\) rather than as a multiplier for the PINN output \(\hat{P}_{\text{PI}}\). The modification makes the magnitude of the PDE loss comparable with that of the data loss, and more importantly reveals a different point of view of the Helmholtz equation. That is, the Helmholtz equation can be regarded as a fitting of the HRTFs with the Laplacian. With the magnitudes of the PDE loss and the data loss comparable with each other and without any apparent reason to prefer the data fitting or the Laplacian fitting, we simply add the two losses together without balancing them with additional parameters. In this way, the training of the PINN is greatly simplified and still results in good HRTF upsampling performance. 3) **Training 1:** In Fig. 4 (a), Fig. 5 (a), and Fig. 6 (a), we present the normalized amplitudes of HRTFs at three different frequencies. These figures show that the amplitudes of the left parts where \(\phi<\pi\) tend to be different from the amplitudes of the right parts where \(\phi>\pi\). This is typical for HRTFs due to the head shadowing effect, and makes the PINN unable to estimate the two parts with the same accuracy because of their different levels of contribution to the loss (11). This fact motivates us to train the PINN for the left part and the right part separately, and merge the results afterwards. 4) **Training 2:** The HRTFs are complex-valued, which is difficult to model with scalar activation functions. To simplify the training process, we train the PINN on the real part and the imaginary part of HRTFs separately, and merge the results afterwards. 5) **PINN size:** We provide guidance on the size of the PINN, specifically its width \(W\) (the number of neurons on each hidden layer) and depth \(L\) (the number of hidden layers). First, we set the width \(W\) based on the dimensionality \(U\) of HRTFs under SH decomposition. For the PINN method, the width \(W\) is the number of components that are needed to model the HRTFs [41]. For the SH method, the dimensionality \(U\) is the maximum complexity of HRTFs under SH decomposition. The similar roles of the width \(W\) in the PINN method and the dimensionality \(U\) in the SH method, and the fact that the HRTF is fundamentally a characteristic of the human subject irrespective of the basis function, inspire us to set the width \(W\) according to \(U\) [41]. Based on our experience with SH analysis, we find two possible values for the width \(W\): \((U+1)^{2}\) and \(2U+1\). \((U+1)^{2}\) is the total number of SHs up to order \(U\), and \(2U+1\) is the number of SHs of order \(U\) [33]. The SHs form an orthogonal function set where different orders of SHs represent different levels of complexity of HRTFs [32, 33]. The \(\tanh\) functions with different arguments are not orthogonal with respect to each other, and thus a smaller number of \(\tanh\) functions together with their arguments may be sufficient to model HRTFs. The difference between SHs and the \(\tanh\) function prompts us to choose the smaller value of the two as the width \(W=2U+1\).
Further considering that **Training 1** and **Training 2** each halves the complexity of HRTFs, we arrive at the final choice for the width \(W\) [32, 33] \[W =\frac{2U+1}{2\times 2}\approx U/2\approx\begin{cases}\lceil f/500\rceil,&f\leq 3\text{kHz},\\ \lceil f/1000\rceil,&f>3\text{kHz}.\end{cases} \tag{12}\] We present the width \(W\) as a function of frequency in Fig. 2 for reference, where we let \(W=\max\{\lceil f/500\rceil,\lceil f/1000\rceil\}\) for 3 kHz \(<f<\) 6 kHz [32]. Note that similar to (3) and (5), the width (12) should be regarded as a rule of thumb and is not supposed to be followed exactly. Indeed, for HRTF upsampling, we are facing a data-insufficient condition, and to avoid the over-fitting problem we may need to slightly reduce the width \(W\). Second, with the width of the PINN set to \(W=U/2\), we find that a depth of \(L=3\) is sufficient and necessary for HRTF modeling. This may be because the Helmholtz equation is a second-order PDE which requires three conditions to specify a unique solution. Further theoretical investigations are necessary to justify these choices and provide better guidance for the PINN design. This will be one of our future works. ## V Numerical experiments In this section, we compare the PINN method performance with that of the SH method, which has provided valuable inspirations for the design of the PINN and has been widely used for both extrapolation and interpolation scenarios. Let \((\theta,\phi)\) be the direction of an HRTF. We transfer the spherical coordinates \((r_{\text{h}},\theta,\phi)\) into Cartesian coordinates. We use these Cartesian coordinates as the inputs to the PINN. We normalize the amplitudes of HRTFs to be within \([-1,1]\). We conduct upsampling on the amplitude of HRTFs rather than on the magnitude to avoid potential impacts on perceptual features of HRTFs. We evaluate the performance of all methods by the upsampling error \[\mathcal{E}{=}10\log_{10}\frac{\sum_{e=1}^{E}\|P(\theta_{e},\phi_{e})-\hat{P}(\theta_{e},\phi_{e})\|_{2}^{2}}{\sum_{e=1}^{E}\|P(\theta_{e},\phi_{e})\|_{2}^{2}}, \tag{13}\] where \(P(\theta_{e},\phi_{e})\) and \(\hat{P}(\theta_{e},\phi_{e})\) are the unknown HRTFs and their estimations at \(\{(\theta_{e},\phi_{e})\}_{e=1}^{E}\), respectively. ### _Extrapolation_ In this section, we aim to extrapolate the unknown HRTFs whose directions are beyond those of the measured HRTFs. We conduct experiments on HRTFs from the 3D3A dataset [38]. We set the simulated HRTFs of subject 37's left ear at 100, 200,..., 16000 Hz as the ground truth. We use \(\{P(\theta_{q},\phi_{q})\}_{q=1}^{1368}\) whose polar angle \(0.2\pi\leq\theta_{q}\leq 0.8\pi\) as the measured HRTFs. We aim to extrapolate the unknown HRTFs \(\{P(\theta_{e},\phi_{e})\}_{e=1}^{362}\) where \(\theta_{e}<0.2\pi\) or \(\theta_{e}>0.8\pi\). We implement the SH method [20] following (1) - (7), and set \(\gamma=0.02\) in (6) according to a trial-and-error process. For the PINN method, we initialize the trainable parameters with the Xavier initialization [42]. We train the PINN for \(10^{7}\) epochs with a learning rate of \(10^{-5}\) using the ADAM optimizer. We evaluate the data loss \(\mathfrak{L}_{\text{data}}\) with respect to the 1368 measured HRTFs, and the PDE loss \(\mathfrak{L}_{\text{PDE}}\) with respect to the Cartesian coordinates of all 1368+362=1730 HRTFs.
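Before turning to the results, here is a minimal PyTorch sketch (our illustration, not the authors' implementation) of the loss (11): the Laplacian is obtained by automatic differentiation, and the modified Helmholtz residual is added to the data misfit without a balancing weight. The network size follows the \(L=3\), \(W=8\) configuration; the frequency and stand-in data are assumptions for illustration only.

```python
# Illustrative PyTorch sketch of the loss (11): data misfit plus the modified
# Helmholtz residual Laplacian(u)/(omega/c)^2 + u, via automatic differentiation.
import torch

torch.manual_seed(0)
c, f = 343.0, 8000.0
omega = 2 * torch.pi * f

net = torch.nn.Sequential(                   # L = 3 hidden layers, W = 8 neurons
    torch.nn.Linear(3, 8), torch.nn.Tanh(),
    torch.nn.Linear(8, 8), torch.nn.Tanh(),
    torch.nn.Linear(8, 8), torch.nn.Tanh(),
    torch.nn.Linear(8, 1),
)

def laplacian(u, xyz):
    """Sum of second derivatives of u with respect to the three coordinates."""
    (g,) = torch.autograd.grad(u.sum(), xyz, create_graph=True)
    lap = 0.0
    for i in range(3):
        lap = lap + torch.autograd.grad(g[:, i].sum(), xyz, create_graph=True)[0][:, i]
    return lap

def loss_fn(xyz_data, p_data, xyz_pde):
    data_loss = ((net(xyz_data).squeeze(-1) - p_data) ** 2).mean()
    u = net(xyz_pde).squeeze(-1)
    residual = laplacian(u, xyz_pde) / (omega / c) ** 2 + u
    return data_loss + (residual ** 2).mean()  # the two losses are simply added

# One training step on random stand-ins for measured HRTFs on the r_h sphere.
xyz_data = torch.randn(64, 3)
xyz_pde = torch.randn(128, 3, requires_grad=True)   # PDE collocation points
p_data = torch.randn(64)
opt = torch.optim.Adam(net.parameters(), lr=1e-5)
opt.zero_grad()
loss = loss_fn(xyz_data, p_data, xyz_pde)
loss.backward()
opt.step()
print(float(loss))
```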
We implement an additional method which is similar to the PINN method, except that we do not add the PDE loss \(\mathfrak{L}_{\mathrm{PDE}}\) to the loss function (11). Hereafter, we refer to this method as the NN method. We first conduct the experiment at 8 kHz. We present the real part of the ground truth HRTF in Fig. 4 (a), where the dashed lines \(\theta=0.2\pi\) and \(\theta=0.8\pi\) are the boundaries between the measured and the unknown HRTFs. Figure 4 (b) denotes the estimation obtained from the SH method with \(U=16\). Figure 4 (c) and (d) denote the estimations obtained from the NN method with \(L=3,W=16\) and \(L=3,W=8\), respectively. Figure 4 (e) and (f) denote the estimations obtained from the PINN method with \(L=3,W=16\) and \(L=3,W=8\), respectively. Figure 4 (g) and (h) denote the estimations obtained from the PINN method with \(L=2,W=8\) and \(L=4,W=8\), respectively. From Fig. 4 (b), we can see that the upsampling accuracy of the SH method for the unknown HRTFs, where \(\theta_{e}<0.2\pi\) or \(\theta_{e}>0.8\pi\), is poor. From Fig. 4 (c), we can see that the NN method with width \(W=16\) estimates the unknown HRTFs with spurious values. This problem is mitigated in Fig. 4 (d) by reducing the width to \(W=8\). However, the estimation is still not satisfying, as the estimated unknown HRTFs show a curve around \(\theta=0.9\pi\) that is not present in the ground truth. Figure 4 (b), (c), and (d) reveal the problem of the SH method and the NN method. That is, they have no control over the estimations and thus can assign arbitrary values to the unknown HRTFs. The PINN method, on the other hand, has some control over the estimations, because the PDE loss \(\mathfrak{L}_{\mathrm{PDE}}\) contains the contribution from the unknown HRTFs. Figure 4 (e) shows that the PINN method with \(W=16\) tends to assign zero to the unknown HRTFs. Zero is a valid but trivial solution of the Helmholtz equation [34]. The large area of zero-valued estimation indicates that the PINN method with \(W=16\) has more expressiveness than it needs, causing the over-fitting problem. Figure 4 (f) shows that the PINN method with an appropriate level of expressiveness, or width \(W=8\), can accurately estimate the unknown HRTFs. Figure 4 (g) shows that the PINN method with less depth \(L=2\), and hence less expressiveness, omits some details shown in the ground truth HRTFs. This is the under-fitting problem. Figure 4 (h) shows that the PINN method with more depth \(L=4\), and hence more expressiveness, suffers from the over-fitting problem as in Fig. 4 (e), because it also tends to assign zero to the unknown HRTFs. The upsampling errors of the SH method (Fig. 4 (b)) and the NN method with \(W=16\) (Fig. 4 (c)) and \(W=8\) (Fig. 4 (d)) are 1.6 dB, -1.4 dB, and -10.8 dB, respectively. The upsampling errors of the PINN methods are -5.7 dB, -17.5 dB, -12.3 dB, and -14.5 dB for Fig. 4 (e), (f), (g), and (h), respectively. This experiment demonstrates the superior HRTF upsampling performance of the PINN method. However, the good performance can only be achieved with the incorporation of physics knowledge and a proper design of the PINN. Hereafter, unless otherwise stated, we assume the \(U\) for the SH method is calculated through (3), and the depth and the width for both the NN method and the PINN method are \(L=3\) and \(W=U/2\), respectively. We next repeat the experiment on the same 3D3A HRTFs but at 16 kHz. We present the real part of the ground truth HRTF in Fig. 5 (a). Figure 5 (b), (c), and (d) denote the estimations obtained from the SH method, the NN method, and the PINN method, respectively. In this case, the SH method fails to estimate the unknown HRTFs.
The NN method, without any control over the estimation process, assigns spurious values to the unknown HRTFs. The PINN method, on the other hand, can estimate the unknown HRTFs with a better accuracy. The upsampling errors of the SH method, the NN method, and the PINN method are 3.1 dB, 5.4 dB, and -4.8 dB, respectively. We present the HRTF upsampling errors of the SH method and the PINN method for the 3D3A HRTFs over a broad frequency range in Table I. (The upsampling errors of the NN method are consistently larger than those of the PINN method, and thus are not shown.) As shown in Table I, the upsampling errors of the PINN method are smaller than those of the SH method over the whole range. The experiment results for upsampling the imaginary part of the 3D3A HRTFs using the three methods are similar to Fig. 4, Fig. 5, and Table I, and thus are not shown for brevity. We repeat the experiment on the high resolution spherical nearfield (HRSN) dataset [39]. The dataset contains HRTFs of the Neumann KU100 dummy head which is measured at different distances. We use the right ear HRTFs measured with a loudspeaker array on a 1.0 m radius sphere at frequencies 375, 750,..., 20250 Hz as the ground truth. We use \(\{P(\theta_{q},\phi_{q})\}_{q=1}^{2157}\) whose polar angle \(0.2\pi\leq\theta_{q}\leq 0.8\pi\) as the measured HRTFs. We aim to extrapolate the unknown HRTFs \(\{P(\theta_{e},\phi_{e})\}_{e=1}^{545}\) where \(\theta_{e}<0.2\pi\) or \(\theta_{e}>0.8\pi\). The implementations of the SH method, the NN method, and the PINN method are similar to their implementations for the 3D3A HRTFs. We show the upsampling results at a high frequency \(f=20.25\) kHz in Fig. 6, where Fig. 6 (a) shows the real part of the ground truth, and Fig. 6 (b), (c), and (d) show the estimations obtained from the SH method, the NN method, and the PINN method, respectively. Comparing Fig. 6 with Fig. 5, we can see that the upsampling results of the three methods for the 3D3A HRTFs at 16 kHz and for the HRSN HRTFs at 20.25 kHz are similar. The SH method fails the upsampling task. The NN method assigns spurious values to the unknown HRTFs. The upsampling accuracy of the PINN method is the best among the three methods. The upsampling errors of the SH method, the NN method, and the PINN method are 4.6 dB, 10.7 dB, and -3.2 dB, respectively. We present the HRTF upsampling errors of the SH method and the PINN method for the HRSN HRTFs over a broad frequency range in Table II. (The upsampling errors of the NN method are larger than those of the PINN method, and thus are not shown.) Comparing Table I and Table II, we can see that the upsampling errors of both methods for the measured HRSN HRTFs are larger than the corresponding errors for the simulated 3D3A HRTFs. For these two datasets, the upsampling errors of the PINN method are consistently smaller than those of the SH method. Fig. 4: Extrapolation: 3D3A dataset, HRTF 8 kHz: (a) Ground truth, (b) SH, \(U=16\), (c) NN, \(L=3,W=16\), (d) NN, \(L=3,W=8\), (e) PINN, \(L=3,W=16\), (f) PINN, \(L=3,W=8\), (g) PINN, \(L=2,W=8\), (h) PINN, \(L=4,W=8\). Fig. 5: Extrapolation: 3D3A dataset, HRTF 16 kHz: (a) Ground truth, (b) SH estimation, (c) NN estimation, (d) PINN estimation. Fig. 6: Extrapolation: HRSN dataset, HRTF 20.25 kHz: (a) Ground truth, (b) SH estimation, (c) NN estimation, (d) PINN estimation. The experiment results for upsampling the imaginary part of the HRSN HRTFs using the three methods are similar to Fig. 6 and Table II, and thus are not shown for brevity.
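For reference, the upsampling error (13) used throughout this section can be computed with a few lines of numpy (our sketch; the stand-in data are illustrative):

```python
# Illustrative numpy sketch of the upsampling error (13), in dB.
import numpy as np

def upsampling_error_db(P_true, P_est):
    """10*log10 of the estimation energy ratio over the unknown directions."""
    return 10.0 * np.log10(np.sum(np.abs(P_true - P_est) ** 2)
                           / np.sum(np.abs(P_true) ** 2))

rng = np.random.default_rng(0)
P = rng.standard_normal(362) + 1j * rng.standard_normal(362)
print(upsampling_error_db(P, P + 0.1 * rng.standard_normal(362)))  # about -23 dB
```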
The extrapolation experiments are further conducted on the CHEDAR dataset [44] and the HUTUBS dataset [45]. We randomly select five subjects from these two datasets, and conduct HRTF upsampling at 10 kHz. Similar to the 3D3A case and the HRSN case, we extrapolate the unknown HRTFs beyond \(0.2\pi\leq\theta\leq 0.8\pi\) based on the HRTFs within the range. We present the upsampling errors of the PINN method together with the subject numbers in Table III. As shown in Table III, for these two datasets, the PINN method can achieve upsampling errors around -10 dB and -12 dB, respectively. The upsampling errors of the PINN method at other frequencies for these two datasets are similar to Table I and Table II, and thus are not shown for brevity. ### _Interpolation_ In this section, we aim to interpolate the unknown HRTFs whose directions are between those of the measured HRTFs. We conduct the experiment on the same 3D3A HRTFs. We use the HRTFs \(\{P(\theta_{q},\phi_{q})\}_{q=1}^{1368}\) whose polar angle \(0.2\pi\leq\theta_{q}\leq 0.8\pi\) as the ground truth. We randomly select one third (456) of the ground truth HRTFs as the measured HRTFs and the rest (912) as the unknown HRTFs. We show the arrangement of the measured HRTFs and the unknown HRTFs in Fig. 7 for reference. Note that as shown in Fig. 7, there are regions where no HRTFs are measured, and this makes Fig. 7 not necessarily the best arrangement for testing the performance of the PINN method. Further investigation of the optimal arrangement of the measured HRTFs is beyond the scope of this paper, and will be one of our future works. The implementations of the SH method and the PINN method are the same as in Sec. V-A, except that for the SH method we set \(\gamma=0\) according to a trial-and-error process. We have also implemented the NN method, but as in Sec. V-A, the performance of the NN method is inferior to that of the PINN method. Thus, the results of the NN method are not shown for brevity. We compare the performance of the two methods at 8 kHz in Fig. 8, where Fig. 8 (a) shows the real part of the ground truth, and Fig. 8 (b) shows the measured HRTFs. Figure 8 (c) and (d) denote the estimations obtained from the SH method and from the PINN method, respectively. In this case, the dimensionality of the HRTFs under SH decomposition is \(U=16\), which is less than \(\sqrt{456}-1\) [32, 33]. The measured HRTFs provide the SH method with enough information to accurately estimate the SH coefficients up to order \(U=16\). The upsampling error of the SH method is -14.9 dB. Comparing Fig. 8 (c) and (d), we can see that the PINN method can estimate the unknown HRTFs with a better accuracy. The upsampling error of the PINN method is -23.4 dB. We compare the performance of the two methods at 16 kHz in Fig. 9, where Fig. 9 (a) shows the real part of the ground truth, and Fig. 9 (b) shows the measured HRTFs. Figure 9 (c) and (d) denote the estimations obtained from the SH method and the PINN method, respectively. Comparing Fig. 9 (a) and Fig. 8 (a), we can see that the HRTFs at 16 kHz are much more complex than the HRTFs at 8 kHz. Figure 9 (b) shows more discontinuous regions than Fig. 8 (b), and this makes it more challenging to estimate the unknown HRTFs. In this case, the dimensionality of the HRTF under SH decomposition is \(U=32\), which is more than \(\sqrt{456}-1\) [32, 33].
The measured HRTFs do not provide enough information for the SH method to accurately estimate the SH coefficients up to order \(U=32\). Fig. 9 (c) shows that the SH method estimation has some dark and bright spots which are not present in the ground truth, and the upsampling error is \(-0.8\) dB. Fig. 9 (d) shows that the PINN method has accurately estimated the ground truth HRTFs, and the upsampling error is \(-14.5\) dB. We present the HRTF upsampling errors of the SH method and the PINN method for the 3D3A HRTFs over a broad frequency range in Table IV. As shown in Table IV, the upsampling errors of the PINN method are consistently smaller than those of the SH method. The experiment results for upsampling the imaginary part of the 3D3A HRTFs using the two methods are similar to Table IV, and thus are not shown for brevity. Fig. 7: Arrangement of the measured and the unknown HRTFs. We repeat the interpolation experiment on the HRSN HRTFs. We use \(\{P(\theta_{q},\phi_{q})\}_{q=1}^{2157}\) whose polar angle \(0.2\pi\leq\theta_{q}\leq 0.8\pi\) as the ground truth. Similar to the interpolation for the 3D3A HRTFs, we randomly select one third (719) of the ground truth HRTFs as the measured HRTFs and the rest (1438) as the unknown HRTFs. The arrangement of the measured HRTFs and the unknown HRTFs is similar to Fig. 7, and thus is not shown for brevity. The implementations of the SH method and the PINN method are similar to their implementations for the 3D3A HRTFs. We show the upsampling results at a high frequency \(f=20.25\) kHz in Fig. 10, where Fig. 10 (a) shows the real part of the ground truth, and Fig. 10 (b) shows the measured HRTFs. Figure 10 (c) and (d) denote the estimations obtained from the SH method and the PINN method, respectively. The upsampling errors of the SH method and the PINN method are \(-1.6\) dB and \(-13.1\) dB, respectively. We present the HRTF upsampling errors of the SH method and the PINN method for the HRSN HRTFs over a broad frequency range in Table V. Similar to Table IV, the upsampling errors of the PINN method are smaller than those of the SH method over the broad frequency range. The experiment results for upsampling the imaginary part of the HRSN HRTFs using the two methods are similar to Table V, and thus are not shown for brevity. The interpolation experiments are further conducted on the HUTUBS dataset [45] and the IRCAM dataset [46]. We randomly select five subjects from these two datasets, and conduct HRTF upsampling at 10 kHz. Similar to the 3D3A case and the HRSN case, we use the HRTFs where \(0.2\pi\leq\theta\leq 0.8\pi\) as the ground truth. We randomly select one third of the ground truth as the measured HRTFs and the rest as the unknown HRTFs. Fig. 8: Interpolation: 3D3A dataset, HRTF 8 kHz: (a) Ground truth, (b) the measured HRTFs, (c) SH estimation, (d) PINN estimation. Fig. 9: Interpolation: 3D3A dataset, HRTF 16 kHz: (a) Ground truth, (b) the measured HRTFs, (c) SH estimation, (d) PINN estimation. Fig. 10: Interpolation: HRSN dataset, HRTF 20.25 kHz: (a) Ground truth, (b) the measured HRTFs, (c) SH estimation, (d) PINN estimation. We aim to estimate the unknown HRTFs based on the measured ones. We present the upsampling errors of the PINN method together with the subject numbers in Table VI. As shown in Table VI, for the HUTUBS dataset, the PINN method can achieve upsampling errors of less than -20 dB. The upsampling errors for the IRCAM dataset are around -10 dB, which is less satisfying.
This may be because, unlike the other datasets, which contain either simulated or measured HRTFs of artificial heads, the IRCAM dataset contains measured HRTFs of human subjects from 1680 directions. It is unclear whether the human subjects can keep still throughout the measurement process. The upsampling errors of the PINN method at other frequencies for these two datasets show trends similar to those in Table IV and Table V. That is, the upsampling errors increase as the frequency increases. The results are not shown for brevity. ## VI Conclusion This paper proposed a PINN method for upsampling HRTFs. The performance of most existing HRTF upsampling methods is limited by the fact that they use the information of the measured HRTFs only. The proposed PINN method exploits the Helmholtz equation, the governing differential equation of acoustics, as additional information to improve the HRTF upsampling accuracy. Furthermore, based on the SH decomposition of the HRTFs and the Helmholtz equation, we set the PINN with an appropriate width and depth. This helps the PINN to avoid under-fitting and over-fitting problems. The additional information provided by the Helmholtz equation and a suitable size help the PINN to outperform the SH method in both extrapolation and interpolation scenarios. The design of the PINN is still empirical, and further theoretical investigation is needed to determine an optimal size of the PINN. Another interesting extension of this work would be to incorporate even more physics information, such as head and ear geometry, into the design and training of the PINN. This will be one of our future works.
2305.05642
A duality framework for generalization analysis of random feature models and two-layer neural networks
We consider the problem of learning functions in the $\mathcal{F}_{p,\pi}$ and Barron spaces, which are natural function spaces that arise in the high-dimensional analysis of random feature models (RFMs) and two-layer neural networks. Through a duality analysis, we reveal that the approximation and estimation of these spaces can be considered equivalent in a certain sense. This enables us to focus on the easier problem of approximation and estimation when studying the generalization of both models. The dual equivalence is established by defining an information-based complexity that can effectively control estimation errors. Additionally, we demonstrate the flexibility of our duality framework through comprehensive analyses of two concrete applications. The first application is to study learning functions in $\mathcal{F}_{p,\pi}$ with RFMs. We prove that the learning does not suffer from the curse of dimensionality as long as $p>1$, implying RFMs can work beyond the kernel regime. Our analysis extends existing results [CMM21] to the noisy case and removes the requirement of overparameterization. The second application is to investigate the learnability of reproducing kernel Hilbert space (RKHS) under the $L^\infty$ metric. We derive both lower and upper bounds of the minimax estimation error by using the spectrum of the associated kernel. We then apply these bounds to dot-product kernels and analyze how they scale with the input dimension. Our results suggest that learning with ReLU (random) features is generally intractable in terms of reaching high uniform accuracy.
Hongrui Chen, Jihao Long, Lei Wu
2023-05-09T17:41:50Z
http://arxiv.org/abs/2305.05642v1
A duality framework for generalization analysis of random feature models and two-layer neural networks ###### Abstract We consider the problem of learning functions in the \(\mathcal{F}_{p,\pi}\) and Barron spaces, which are natural function spaces that arise in the high-dimensional analysis of random feature models (RFMs) and two-layer neural networks. Through a duality analysis, we reveal that the approximation and estimation of these spaces can be considered equivalent in a certain sense. This enables us to focus on the easier problem of approximation and estimation when studying the generalization of both models. The dual equivalence is established by defining an information-based complexity that can effectively control estimation errors. Additionally, we demonstrate the flexibility of our duality framework through comprehensive analyses of two concrete applications. * The first application is to study learning functions in \(\mathcal{F}_{p,\pi}\) with RFMs. We prove that the learning does not suffer from the curse of dimensionality as long as \(p>1\), implying RFMs can work beyond the kernel regime. Our analysis extends existing results [10] to the noisy case and removes the requirement of overparameterization. * The second application is to investigate the learnability of reproducing kernel Hilbert space (RKHS) under the \(L^{\infty}\) metric. We derive both lower and upper bounds of the minimax estimation error by using the spectrum of the associated kernel. We then apply these bounds to dot-product kernels and analyze how they scale with the input dimension. Our results suggest that learning with ReLU (random) features is generally intractable in terms of reaching high uniform accuracy. ## 1 Introduction One of the fundamental problems in theoretical machine learning is to understand how certain high-dimensional functions can be learned _efficiently_ using machine learning models [1, 1] such as neural networks and random feature models. Denote by \(\mathcal{X}\subset\mathbb{R}^{d}\) the input domain and \(\mathcal{F}\) the function class of interest. We say that \(\mathcal{F}\) can be learned efficiently if both the approximation error and estimation error scale polynomially with the input dimension \(d\). Otherwise, the learning is said to suffer from or exhibit the _curse of dimensionality_ (CoD) [1]. It is well-known that learning traditional function spaces such as Sobolev and Besov spaces suffers from the CoD, regardless of the machine learning models used [23, 24]. Analyses with these spaces cannot explain the success of machine learning in solving high-dimensional problems. To address this, it is crucial to identify the appropriate function spaces for a given machine learning model such that the functions can be learned efficiently using that model [1, 1]. This can provide insight into the model's strengths and limitations when dealing with high-dimensional data. This paper focuses on this issue for three popular machine learning models: kernel methods, random feature models, and two-layer neural networks. Kernel methods are a class of methods using the hypothesis class: \(\{\sum_{i=1}^{n}\alpha_{i}k(x_{i},\cdot):\alpha\in\mathbb{R}^{n}\}\), where \(\{x_{i}\}_{i=1}^{n}\) are training data and \(k:\mathcal{X}\times\mathcal{X}\mapsto\mathbb{R}\) is a kernel function. The popular function space used in kernel method analysis is the reproducing kernel Hilbert space (RKHS) [1], denoted by \(\mathcal{H}_{k}\). RKHS is favored for two main reasons. 
First, it can be learned efficiently with kernel methods in high dimensions, as demonstrated in studies such as [1, 15]. Second, the Hilbert structure and reproducing property of RKHS provide a rich set of mathematical tools that make analysis easier. For instance, the use of RKHS allows for the representation of functions as inner products with respect to the kernel, i.e., \(f(x)=\langle f,k(x,\cdot)\rangle_{\mathcal{H}_{k}}\), enabling the application of techniques from functional analysis. Neural networks are another class of models which have achieved remarkable success in solving high-dimensional problems [14]. However, it remains unclear which high-dimensional functions can be efficiently learned using them. In this paper, we focus on two-layer neural networks: \[f(x;\theta)=\sum_{j=1}^{m}a_{j}\phi(x,v_{j}), \tag{1}\] where \(\phi:\mathcal{X}\times\mathcal{V}\mapsto\mathbb{R}\) is a feature function and \(\theta=\{(a_{j},v_{j})\}_{j=1}^{m}\) are the parameters to be learned. The feature function is typically of the form \(\phi(x,v)=\sigma(v^{T}x)\), where \(\sigma:\mathbb{R}\mapsto\mathbb{R}\) is a nonlinear activation function. The high-dimensional analysis of two-layer neural networks dates back to the pioneering works by Andrew Barron [1, 15]. In these papers, Barron defined the spectral Barron space [13], which consists of functions that satisfy \(C_{f}=\int(1+\|\xi\|)|\hat{f}(\xi)|\,\mathrm{d}\xi<\infty\), and proved that these functions can be efficiently learned using two-layer neural networks. Following Barron's work, [1, 14] defined variation spaces and [1] reformulated these spaces using integral representations, which were denoted by \(\mathcal{F}_{1}\). More recently, [1] provided a probabilistic interpretation of Barron's work and defined the Barron spaces, which are an infinite union of a family of RKHSs. These spaces played a critical role in understanding the capabilities and limitations of two-layer neural networks in high dimensions. Random feature models (RFMs) are another type of closely related model that has the same form as Equation (1), but with the weights \(\{v_{j}\}_{j=1}^{m}\) being _i.i.d._ samples drawn from a fixed distribution \(\pi\in\mathcal{P}(\mathcal{V})\). In RFMs, the features are predetermined, and only the outer coefficients are learnable. When the \(\ell_{2}\) norm of the coefficients is penalized, RFMs are equivalent to kernel methods with the kernel \(\hat{k}_{m}(x,x^{\prime})=\frac{1}{m}\sum_{j=1}^{m}\phi(x,v_{j})\phi(x^{\prime},v_{j})\) according to the representer theorem [12]. It is important to note that as \(m\to\infty\), the law of large numbers (LLN) implies that \[\hat{k}_{m}(x,x^{\prime})\to k_{\pi}(x,x^{\prime}):=\int_{\mathcal{V}}\phi(x,v)\phi(x^{\prime},v)\,\mathrm{d}\pi(v). \tag{2}\] Hence RFMs are often viewed as a Monte-Carlo approximation of kernel methods with the kernel \(k_{\pi}\) [10], and consequently, most theoretical analyses of RFMs only consider target functions in the associated RKHS \(\mathcal{H}_{k_{\pi}}\) [11, 10, 12]. However, it should be stressed that RFMs are not kernel methods if the \(\ell_{p}\) norm of the coefficients is penalized with \(p<2\) [13, 14, 15]. Recently, RFMs have also been found useful for understanding neural networks [1, 1, 12, 13, 14]. ### Our contributions In this paper, we study the learning of the \(\mathcal{F}_{p,\pi}\) and Barron spaces, whose definitions are motivated by the high-dimensional analysis of RFMs and two-layer neural networks, respectively.
Consider an RFM with infinitely many features: \(f_{a}=\int_{\mathcal{V}}a(v)\phi(\cdot,v)\,\mathrm{d}\pi(v)\), where the parameters are the coefficient function \(a(\cdot)\). For any \(p\geq 1\), \(\mathcal{F}_{p,\pi}\) is defined by \[\mathcal{F}_{p,\pi}:=\{f_{a}\,:\,\|a\|_{L^{p}(\pi)}<\infty\},\qquad\|f\|_{\mathcal{F}_{p,\pi}}=\inf_{f_{a}=f}\|a\|_{L^{p}(\pi)}. \tag{3}\] The Barron space \(\mathcal{B}\) is given by \[\mathcal{B}=\cup_{\pi\in\mathcal{P}(\mathcal{V})}\mathcal{F}_{2,\pi}. \tag{4}\] These spaces have been widely adopted to analyze RFMs and two-layer neural networks [1, 2, 1, 16] in high dimensions. In particular, [11] proved \(\mathcal{F}_{2,\pi}=\mathcal{H}_{k_{\pi}}\), where the kernel \(k_{\pi}\) is given by (2); therefore, studying \(\mathcal{F}_{p,\pi}\) is also highly relevant for understanding kernel methods. In this paper, we take a duality perspective to provide a unified analysis of the \(\mathcal{F}_{p,\pi}\) and \(\mathcal{B}\) spaces. This approach enables us to gain a more general and comprehensive understanding of the properties of these spaces, and gain insights into their relevance for understanding kernel methods, two-layer neural networks, and RFMs. The duality property. By exploiting the Banach structure of \(\mathcal{F}_{p,\pi}\) and \(\mathcal{B}\), we establish a dual equivalence between the approximation and estimation for learning functions in \(\mathcal{F}_{p,\pi}\) and \(\mathcal{B}\). To put it simply, our major result for the case of \(\mathcal{F}_{p,\pi}\) can be informally stated as follows: _The \(L^{q}(\rho)\) estimation of \(\mathcal{F}_{p,\pi}\) is equivalent to the \(L^{p^{\prime}}(\pi)\) approximation of \(\mathcal{F}_{q^{\prime},\rho}\),_ where \(p^{\prime},q^{\prime}\) denote the Hölder conjugates of \(p,q\), satisfying \(1/p+1/p^{\prime}=1,1/q+1/q^{\prime}=1\). An analogous dual equivalence also holds for the Barron space \(\mathcal{B}\). This duality property enables us to concentrate on analyzing the easier problem between approximation and estimation, leading to a unified analysis of learning in these spaces under various metrics. The information-based complexity. To establish the aforementioned dual equivalence, we introduce an _information-based complexity_ (\(I\)-complexity) that can effectively control the (minimax) estimation errors in various settings. A similar complexity has been utilized in approximation theory to study optimal interpolations on deterministically obtained clean data [14]. However, in statistical learning settings, data are typically randomly sampled from a distribution, and the model may not necessarily interpolate the data due to the presence of noise. To address this issue, we modify the definition of the \(I\)-complexity to make it more appropriate for this setting. The \(I\)-complexity might be of independent interest and could potentially be applied beyond the analysis of the \(\mathcal{F}_{p,\pi}\) and \(\mathcal{B}\) spaces. Nonetheless, we leave the exploration of its potential applications for future work. Applications. Our duality framework allows us to simplify the proofs and strengthen the conclusions of many existing results. We illustrate this by providing two specific applications: * _Random feature learning beyond the kernel regime._ First, we extend the result in [1, Proposition 1] by showing that the unit ball of \(\mathcal{F}_{p,\pi}\) can be uniformly approximated by random features, where uniformity means that the choice of the random weights \(v_{1},\ldots,v_{m}\) is independent of the target functions.
In contrast to [1], our result is not restricted to the case of \(\mathcal{F}_{2,\pi}\) and the proof is also much simpler. Next, we provide a comprehensive analysis of learning \(\mathcal{F}_{p,\pi}\) functions using RFMs. Our analysis shows that both the sample and parameter complexities scale polynomially with the input dimension \(d\), as long as \(p>1\). This result suggests that RFMs can efficiently learn functions that are not necessarily contained in the RKHS. * \(L^{\infty}\) _learnability of RKHS_. We consider the learning of functions in an RKHS under the \(L^{\infty}\) metric: \(\|\hat{f}-f^{*}\|_{\infty}=\sup_{x\in\mathcal{X}}|\hat{f}(x)-f^{*}(x)|\), where \(\hat{f},f^{*}\) denote the learned model and the target function, respectively. This \(L^{\infty}\) learnability is crucial for understanding the performance of kernel methods and neural networks in safety- and security-critical scenarios. By exploiting the dual equivalence, we show that the \(L^{\infty}\) estimation of an RKHS (i.e., \(\mathcal{F}_{2,\pi}\)) is equivalent to the \(L^{2}\) approximation of the Barron space \(\mathcal{B}\) with random features. To bound the error of the latter, we adopt the spectral-based approach developed in [10]. We derive both lower and upper bounds on the \(L^{\infty}\) minimax errors based on the kernel spectrum. In particular, we examine how the \(L^{\infty}\) learnability depends on the input dimension \(d\) for dot-product kernels of the form \(k(x,x^{\prime})=\mathbb{E}_{v\sim\tau_{d-1}}[\sigma(v^{T}x)\sigma(v^{T}x^{\prime})]\). Specifically, we prove * For non-smooth activation functions, such as ReLU, the minimax errors grow exponentially with the input dimension \(d\); * For sufficiently smooth activation functions, the error scales with \(d\) only polynomially. Note that dot-product kernels arise naturally in studying RFMs [14] and neural networks [13]; therefore, the above results not only apply to kernel methods but also provide insights into RFMs and neural networks. One immediate implication is that \(L^{\infty}\) learning with ReLU random features/neural networks is subject to the CoD, although the \(L^{2}\) learning is not [1, 1]. The above examples demonstrate the versatility of our duality framework, and we believe it has the potential to be applied in other contexts and settings beyond these specific examples. ### Related works Duality of RFMs and neural networks. In [1], a dual formulation for training energy-based models with overparameterized two-layer neural networks was provided. In contrast, we focus on supervised learning and generalization analysis. We note that there is a concurrent work [12], which provides fine-grained structure characterizations of the \(\mathcal{F}_{p,\pi}\) and \(\mathcal{B}\) spaces under the framework of reproducing kernel Banach space (RKBS) [11]. We instead establish the dual equivalence between approximation and estimation of these spaces. Random feature learning. The early works [14, 14] focused on the approximation of RFMs for functions within the associated RKHS. [1] provided a fine-grained analysis of the approximation error by using the corresponding covariance operator. In contrast, we offer a duality analysis, which provides simpler proofs and applies to target functions beyond the RKHS. Note that [13] also considered the problem of learning \(\mathcal{F}_{p,\pi}\).
However, their analysis only considered minimum-norm estimators in the noiseless case and is limited to a highly overparameterized regime with \(m\gg(n\log n)^{2}\), where \(m\) is the number of features and \(n\) is the sample size. In contrast, our analysis in Section 6 does not require overparameterization and is applicable to both noisy and noiseless cases. This is made possible by our duality framework. \(L^{\infty}\) learnability of RKHS and neural networks. The \(L^{\infty}\) learning of RKHS was first studied in [17] for the case of deterministic samples, where both upper and lower bounds are derived by using the kernel spectrum. [13] improved the upper bound by allowing samples to be randomly drawn from an input distribution. In this paper, we show that similar bounds can be easily derived from our duality framework; in particular, we extend the lower bound to the noisy case, which is more common in machine learning. Additionally, we improve the upper bound for dot-product kernels in two aspects. First, our upper bound (Theorem 22) is obtained by using RFMs, while [12, 13] used the eigenfunctions of the associated kernel as the fixed features. Second, [12, 13] required the uniform boundedness of the eigenfunctions, but the eigenfunctions of dot-product kernels, which are spherical harmonics, do not satisfy this condition. More recently, [1] showed that the \(L^{\infty}\) estimation of deep neural networks suffers from the CoD. Our analysis shows a stronger result: the CoD actually occurs for the much simpler RFMs if the activation function is non-smooth, such as ReLU. Connection with [11]. We acknowledge that [11] applied a similar approach to analyze reinforcement learning by defining a quantity called the perturbation complexity. Specifically, they analyzed the case where the reward function comes from an RKHS and, as a result, their analysis mainly focused on the \(\mathcal{F}_{2,\pi}\) space. In contrast, we focus on the supervised learning setup and provide a comprehensive analysis of the \(\mathcal{F}_{p,\pi}\) spaces for all \(p\geq 1\) and the Barron spaces. ### Organization In Section 2, we clarify notations and preliminaries. In Section 3, we define the information-based complexity and show how it controls various estimation errors of learning a function class. In Section 4, we define the \(\mathcal{F}_{p,\pi}\) and Barron spaces, which will play critical roles in our duality analysis. In Section 5, we present the dual equivalence between estimation and approximation for learning the \(\mathcal{F}_{p,\pi}\) and Barron spaces. Sections 6 and 7 present two applications of our duality framework: random feature learning beyond the kernel regime and \(L^{\infty}\) learning of RKHS. ## 2 Preliminaries Notations. Let \(\Omega\) be a subset of a Euclidean space. We denote by \(\mathcal{P}(\Omega)\) the set of probability measures on \(\Omega\) and \(\mathcal{M}(\Omega)\) the space of signed Radon measures equipped with the total variation norm \(\|\mu\|_{\mathcal{M}(\Omega)}=\|\mu\|_{\mathrm{TV}}\). Given \(z_{1},\ldots,z_{n}\in\Omega\), denote by \(\hat{\rho}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{z_{i}}\) the empirical distribution. For any \(\rho\in\mathcal{P}(\Omega)\), let \(\|\cdot\|_{p,\rho}\) be the \(L^{p}(\rho)\) norm. When \(p=2\), we write \(\|\cdot\|_{\rho}=\|\cdot\|_{2,\rho}\) for convenience and let \(\langle f,g\rangle_{\rho}=\int f(x)g(x)\,\mathrm{d}\rho(x)\) for any \(f,g\in L^{2}(\rho)\).
Let \(C_{0}(\Omega)\) be the space of continuous functions vanishing at infinity, equipped with the uniform norm (\(L^{\infty}\) norm) \(\|g\|_{C_{0}(\Omega)}:=\|g\|_{\infty}=\sup_{x\in\Omega}|g(x)|\). We shall occasionally use \(L^{\infty}\) and \(\|\cdot\|_{\infty}\) to denote \(\|\cdot\|_{C_{0}(\mathcal{X})}\), which is different from \(L^{\infty}(\rho)\). One should not confuse \(L^{\infty}\) and \(L^{\infty}(\rho)\). For any vector \(v\), denote by \(\|v\|_{p}=\left(\sum_{i}|v_{i}|^{p}\right)^{1/p}\) the \(\ell^{p}\) norm. When \(p=2\), we drop the subscript for simplicity. For any \(p\in[1,\infty)\), denote by \(p^{\prime}\) the Hölder conjugate satisfying \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\). For a normed vector space \(\mathcal{F}\), let \(\mathcal{F}(r):=\{f\in\mathcal{F}:\|f\|_{\mathcal{F}}\leq r\}\) be the ball of radius \(r\). Let \(\mathbb{S}^{d-1}=\{x\in\mathbb{R}^{d}:\|x\|_{2}=1\}\) and \(\tau_{d-1}=\mathrm{Unif}(\mathbb{S}^{d-1})\). We use \(a\lesssim b\) to mean \(a\leq Cb\) for an absolute constant \(C>0\), and \(a\gtrsim b\) is defined analogously. We write \(a\asymp b\) if there exist absolute constants \(C_{1},C_{2}>0\) such that \(C_{1}b\leq a\leq C_{2}b\). Rademacher complexity. Given \(x_{1},x_{2},\ldots,x_{n}\in\mathcal{X}\), the (empirical) Rademacher complexity of a function class \(\mathcal{F}\) is defined by \[\widehat{\mathrm{Rad}}_{n}(\mathcal{F})=\mathbb{E}_{\xi_{1},\ldots,\xi_{n}}[\sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}f(x_{i})\xi_{i}], \tag{5}\] where \(\xi_{1},\ldots,\xi_{n}\) are _i.i.d._ Rademacher random variables, i.e., \(\mathbb{P}(\xi_{i}=1)=\mathbb{P}(\xi_{i}=-1)=1/2\). The Rademacher complexity will be used to bound the gap between empirical quantities and their population counterparts. Mercer decomposition. Before presenting our results, we first recall some basic facts about the eigendecomposition of a kernel. For any kernel \(k:\mathcal{X}\times\mathcal{X}\mapsto\mathbb{R}\), the associated integral operator \(\mathcal{T}_{k}:L^{2}(\gamma)\mapsto L^{2}(\gamma)\) is given by \(\mathcal{T}_{k}f=\int k(\cdot,x)f(x)\,\mathrm{d}\gamma(x).\) When \(k\) is continuous and \(\mathcal{X}\) is compact, Mercer's theorem guarantees the existence of an eigendecomposition of \(k\): \(k(x,x^{\prime})=\sum_{i=1}^{\infty}\mu_{i}e_{i}(x)e_{i}(x^{\prime})\). Here \(\{\mu_{i}\}_{i=1}^{\infty}\) are the eigenvalues in decreasing order and \(\{e_{i}\}_{i=1}^{\infty}\) are the orthonormal eigenfunctions satisfying \(\int e_{i}(x)e_{j}(x)\,\mathrm{d}\gamma(x)=\delta_{i,j}\). Note that the decomposition depends on the input distribution \(\gamma\) and, when needed, we will denote by \(\mu_{i}^{k,\gamma}\) the \(i\)-th eigenvalue to explicitly emphasize the influence of \(k\) and \(\gamma\). We are also interested in the following quantity: \[\Lambda_{k,\gamma}(m)=\sqrt{\sum_{i=m+1}^{\infty}\mu_{i}^{k,\gamma}}, \tag{6}\] which will be used in Section 7 to bound the \(L^{\infty}\) learnability of RKHS. We refer to [20, Section 12.3] for more details about Mercer's decomposition.
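The tail quantity (6) is straightforward to estimate numerically: by the Nyström approximation, the eigenvalues of the scaled Gram matrix \(\frac{1}{n}K\) on a large sample from \(\gamma\) approximate the Mercer eigenvalues \(\mu_{i}^{k,\gamma}\). The following is a minimal sketch of this estimate; the Gaussian kernel and the uniform input distribution are illustrative assumptions rather than choices made in this paper.

```python
import numpy as np

def spectral_tail(kernel, sample, m):
    """Monte Carlo estimate of Lambda_{k,gamma}(m) = sqrt(sum_{i>m} mu_i).

    The eigenvalues of K/n (a Nystrom approximation) converge to the
    Mercer eigenvalues of the kernel under the sampling distribution.
    """
    n = len(sample)
    K = kernel(sample[:, None], sample[None, :])   # n x n Gram matrix
    mu = np.linalg.eigvalsh(K / n)[::-1]           # decreasing order
    return np.sqrt(np.clip(mu[m:], 0.0, None).sum())

# Illustrative setting: Gaussian kernel, inputs uniform on [0, 1].
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=2000)
gauss = lambda s, t: np.exp(-(s - t) ** 2 / 0.1)
for m in [1, 5, 10, 20]:
    print(m, spectral_tail(gauss, x, m))   # rapid decay for a smooth kernel
```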
## 3 The Information-based complexity Our generalization analysis relies on the information-based complexity (\(I\)-complexity) proposed in [17]. The \(I\)-complexity of a function class \(\mathcal{F}\) defined in [17, Remark 1.3.4] is \[I_{n}(\mathcal{F})=\inf_{x_{1},\ldots,x_{n}\in\mathcal{X}}\sup_{f\in\mathcal{F},f(x_{i})=0}\|f\|_{\infty}. \tag{7}\] Intuitively speaking, \(I_{n}(\mathcal{F})\) quantifies the complexity of a function class via the minimax \(L^{\infty}\) norm of functions in \(\mathcal{F}\) that interpolate the zero function at \(n\) points. [17] showed that \(I_{n}(\mathcal{F})\) can control the minimax error of approximating \(\mathcal{F}\) with the information of only \(n\) data points. However, the definition (7) cannot be directly applied to analyze ML models. First, the input data in machine learning are often randomly sampled from an input distribution, while \(I_{n}(\mathcal{F})\) measures the complexity with the worst case over any \(n\) data points. Second, the definition only considers the interpolation regime, while it is often preferred in machine learning that our models do not interpolate data, as interpolation may cause the _overfitting_ issue. This is particularly the case when data are noisy. To resolve these issues, we define the following modified \(I\)-complexity. **Definition 1** (\(I\)-complexity).: Let \(\mathcal{F}\) be a set of functions, \(\nu\in\mathcal{P}(\mathcal{X})\) be a probability distribution over \(\mathcal{X}\), and \(\|\cdot\|_{\mathcal{M}}\) be a norm used to measure prediction errors. The \(I\)-complexity of \(\mathcal{F}\) with respect to the distribution \(\nu\) is defined as \[I_{\nu}(\mathcal{F},\mathcal{M},\epsilon)=\sup_{f\in\mathcal{F},\,\|f\|_{\nu}\leq\epsilon}\|f\|_{\mathcal{M}}. \tag{8}\] The above modification makes the \(I\)-complexity useful for generalization analysis in machine learning, as it incorporates the input distribution information and covers non-interpolating estimators. Next, we show how this complexity can bound estimation errors. **Remark 2**.: _It should be noted that Definition 1 allows \(\nu\) to be a general distribution for measuring fitting errors and \(\mathcal{M}\) to be a general norm for measuring prediction errors. In particular, one may choose \(\nu=\hat{\rho}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}}\), so that \(\|f\|_{\nu}\) is the fitting error on the training data. One can also choose \(\nu=\rho\), yielding a distribution-dependent complexity \(I_{\rho}(\mathcal{F},\mathcal{M},\epsilon)\). We will show that this quantity provides a lower bound of the minimax error of estimating \(\mathcal{F}\) in the noisy case. It is worth noting that the flexibility of allowing fitting and prediction errors to be measured with different metrics might also be useful for analyzing the problem of out-of-distribution generalization [21] and reinforcement learning [11, 12]._ ### Bounding estimation errors. Consider supervised learning with data \(\{(x_{i},y_{i}=f(x_{i}))\}_{i=1}^{n}\). Suppose that \(\mathcal{F}\) is a Banach space of functions over \(\mathcal{X}\) and \(f\in\mathcal{F}(1)\). Consider an estimator \(\hat{f}\in\mathcal{F}(1)\) with the empirical error satisfying \[\left(\frac{1}{n}\sum_{i=1}^{n}(\hat{f}(x_{i})-y_{i})^{2}\right)^{1/2}=\|\hat{f}-f\|_{\hat{\rho}_{n}}\leq\epsilon.\] Then, the population error of \(\hat{f}\) can be bounded by the \(I\)-complexity: \[\|\hat{f}-f\|_{\mathcal{M}}\leq\sup_{\|g\|_{\mathcal{F}}\leq 2,\,\|g\|_{\hat{\rho}_{n}}\leq\epsilon}\|g\|_{\mathcal{M}}=2I_{\hat{\rho}_{n}}(\mathcal{F}(1),\mathcal{M},\epsilon/2), \tag{9}\] where the first step is because \(\|\hat{f}-f\|_{\mathcal{F}}\leq\|\hat{f}\|_{\mathcal{F}}+\|f\|_{\mathcal{F}}\leq 2\).
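As a concrete illustration of Definition 1, in the interpolation regime (\(\nu=\hat{\rho}_{n}\), \(\epsilon=0\)) with \(\mathcal{F}\) an RKHS ball and \(\|\cdot\|_{\mathcal{M}}\) the uniform norm, the \(I\)-complexity has a closed form: by the standard power-function identity from kernel interpolation, \(\sup\{|f(x)|:\|f\|_{\mathcal{H}_{k}}\leq 1,\,f(x_{i})=0\}=\sqrt{k(x,x)-k_{x}^{\top}K^{-1}k_{x}}\), where \(K=(k(x_{i},x_{j}))_{ij}\) and \(k_{x}=(k(x_{i},x))_{i}\), so the complexity is the maximum of this quantity over \(x\). A minimal numerical sketch, where the Gaussian kernel and the equispaced design points are illustrative assumptions:

```python
import numpy as np

def interpolation_complexity(kernel, pts, grid):
    """I_{rho_n}(H_k(1), L^inf, 0): max of the power function
    sqrt(k(x,x) - k_x^T K^{-1} k_x) over a fine grid of x values."""
    K = kernel(pts[:, None], pts[None, :])
    Kx = kernel(pts[:, None], grid[None, :])              # n x |grid|
    sol = np.linalg.solve(K + 1e-10 * np.eye(len(pts)), Kx)
    power2 = kernel(grid, grid) - np.einsum("ij,ij->j", Kx, sol)
    return np.sqrt(np.clip(power2, 0.0, None)).max()

gauss = lambda s, t: np.exp(-(s - t) ** 2 / 0.02)
grid = np.linspace(0.0, 1.0, 2001)
for n in [5, 10, 20, 40]:
    pts = np.linspace(0.0, 1.0, n)
    print(n, interpolation_complexity(gauss, pts, grid))  # shrinks with n
```

The printed values shrink as \(n\) grows, matching the intuition that the \(I\)-complexity measures what remains unidentified after observing \(n\) exact function values.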
Next, we further show that the \(I\)-complexity can also bound the minimax estimation errors. We call each measurable map from \((\mathcal{X}\times\mathbb{R})^{n}\) to \(\mathcal{F}\) an estimator and denote by \(\mathcal{A}_{n}\) the set of all estimators: \((\mathcal{X}\times\mathbb{R})^{n}\mapsto\mathcal{F}\). For an estimator \(T_{n}\), the estimation error of \(T_{n}\) is given by \[\|T_{n}(\{(x_{i},y_{i})\}_{i=1}^{n})-f\|_{\mathcal{M}}.\] The following two propositions show that the \(I\)-complexity defined in Definition 1 can control the minimax error of learning a set of functions. Here we only state the results; the proofs can be found in Appendix A. The sample-dependent minimax error. For fixed input data \(x_{1},\cdots,x_{n}\) and noiseless output, the sample-dependent minimax estimation error is given by \[\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{F}}\leq 1}\|T_{n}(\{(x_{i},f(x_{i}))\}_{i=1}^{n})-f\|_{\mathcal{M}}. \tag{10}\] In this case, the worst-case error may depend on the samples \(x_{1},x_{2},\ldots,x_{n}\), and this error measures how much we can extract from the specific \(n\) samples in the minimax sense. The following proposition shows that this minimax estimation error can be quantified by the \(I\)-complexity \(I_{\hat{\rho}_{n}}\): **Proposition 3**.: _For any \(x_{1},\cdots,x_{n}\in\mathcal{X}\), we have_ \[I_{\hat{\rho}_{n}}(\mathcal{F}(1),\mathcal{M},0)\leq\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{F}}\leq 1}\|T_{n}(\{(x_{i},f(x_{i}))\}_{i=1}^{n})-f\|_{\mathcal{M}}\leq 2I_{\hat{\rho}_{n}}(\mathcal{F}(1),\mathcal{M},0).\] It is worth noting that the above result holds for any \(x_{1},x_{2},\ldots,x_{n}\in\mathcal{X}\), which need not be _i.i.d._ samples drawn from an input distribution. The distribution-dependent minimax error. Suppose that the training data \(S=\{(x_{i},y_{i})\}_{i=1}^{n}\) are independently generated by \(x_{i}\sim\rho\), \(y_{i}=f(x_{i})+\xi_{i}\) with \(\xi_{i}\sim\mathcal{N}(0,\varsigma^{2})\) being the Gaussian noise. Consider the following minimax error \[\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{F}}\leq 1}\mathbb{E}\,\|T_{n}(\{(x_{i},f(x_{i})+\xi_{i})\}_{i=1}^{n})-f\|_{\mathcal{M}}, \tag{11}\] where the expectation is taken with respect to the sampling of \(S\). This minimax error is the common choice in the statistical learning setup (see [26, Section 15]). In this definition, the worst case only depends on the distribution instead of specific samples. The following proposition shows that the complexity \(I_{\rho}\) with \(\epsilon=\varsigma/\sqrt{n}\) gives a lower bound of the minimax error (11): **Proposition 4**.: _We have_ \[\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{F}}\leq 1}\mathbb{E}\,\|T_{n}(\{(x_{i},f(x_{i})+\xi_{i})\}_{i=1}^{n})-f\|_{\mathcal{M}}\geq I_{\rho}\left(\mathcal{F}(1),\mathcal{M},\frac{\varsigma}{\sqrt{n}}\right)\] Note that in some specific cases, the \(I\)-complexity also provides an upper bound of the distribution-dependent minimax error. We refer to Section 7 for details. **Remark 5**.: _The discussion above only concerns the estimation part, since the output of the estimator lies in the target space \(\mathcal{F}\). In other words, for any estimator in \(\mathcal{A}_{n}\), the approximation error is exactly zero. However, in practice, an estimate \(\hat{f}\) may live in the hypothesis space \(\mathcal{H}\), which can be different from \(\mathcal{F}\). In such a case, there may exist approximation error because of the difference between \(\mathcal{H}\) and \(\mathcal{F}\).
In Section 5, we establish that for various function spaces of interest in studying RFMs and two-layer neural networks, the \(I\)-complexity also controls the approximation error._ ## 4 The \(\mathcal{F}_{p,\pi}\) and Barron spaces Let \(\phi:\mathcal{X}\times\mathcal{V}\to\mathbb{R}\) be a general parametric feature map, where \(\mathcal{X}\) and \(\mathcal{V}\) denote the input and weight spaces, respectively. A typical example of a feature map is \(\phi(x,v)=\sigma(v^{T}x)\), which arises naturally in analyzing neural networks and random feature models. Here \(\sigma:\mathbb{R}\mapsto\mathbb{R}\) is a nonlinear activation function. In this section, we define some function spaces induced by \(\phi\), which will be utilized in our subsequent duality analysis. We first define the function spaces for random feature models. **Definition 6**.: Let \(\pi\in\mathcal{P}(\mathcal{V})\) be a probability distribution over the weight space. For \(1\leq p\leq\infty\), the \(\mathcal{F}_{p,\pi}\) space on \(\mathcal{X}\) is defined as \[\mathcal{F}_{p,\pi}:=\left\{f=\int_{\mathcal{V}}a(v)\phi(\cdot,v)\,\mathrm{d}\,\pi(v):\,a\in L^{p}(\pi)\right\},\] equipped with the norm \[\|f\|_{\mathcal{F}_{p,\pi}}=\inf_{a\in A_{f}}\|a\|_{p,\pi},\quad\text{where}\,A_{f}=\left\{a\in L^{p}(\pi):\,f=\int_{\mathcal{V}}a(v)\phi(\cdot,v)\,\mathrm{d}\,\pi(v)\right\}. \tag{12}\] In this definition, we consider functions admitting an integral representation with respect to the feature map \(\phi\). For a given \(f\), the associated \(a(\cdot)\) will be called a representation of \(f\). Note that the representations may not be unique. Hence, taking the infimum in (12) ensures that the norm of \(f\) is measured using the optimal representation. In addition, this makes the \(\mathcal{F}_{p,\pi}\) norm well-defined in the sense that it is independent of the choice of representation. Some observations follow. * It follows trivially from Hölder's inequality that \[\mathcal{F}_{\infty,\pi}\subset\mathcal{F}_{p,\pi}\subset\mathcal{F}_{q,\pi}\subset\mathcal{F}_{1,\pi},\quad\text{ if }\,p\geq q.\] (13) * The \(\mathcal{F}_{p,\pi}\) space provides a natural setting to study the approximation power of RFMs. Specifically, if \(v_{j}\stackrel{{iid}}{{\sim}}\pi\) for \(j=1,\ldots,m\), then the law of large numbers (LLN) implies that as \(m\to\infty\), \[\frac{1}{m}\sum_{j=1}^{m}a_{j}\phi(x,v_{j})\to\int_{\mathcal{V}}a(v)\phi(x,v)\,\mathrm{d}\pi(v)\] and the convergence/approximation rate is determined by the \(L^{p}(\pi)\) norm of \(a(\cdot)\). In other words, the \(\mathcal{F}_{p,\pi}\) norm of \(f\) controls the rate of approximating \(f\) with RFMs. While existing works on RFMs have focused on the case of \(p=2\), it is possible to obtain similar approximation rates for all \(p\in(1,\infty]\) by using the Marcinkiewicz-Zygmund-type LLN [11, Theorem 2.5.8]. More details can be found in Section 6; a numerical illustration of this convergence is given in the sketch after these observations.
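The following minimal sketch illustrates the Monte Carlo convergence above: it compares \(\frac{1}{m}\sum_{j}a(v_{j})\phi(x,v_{j})\) against the exact integral for growing \(m\), exhibiting the \(O(m^{-1/2})\) decay expected when \(a\in L^{2}(\pi)\). The feature map, the Gaussian weight distribution, and the representation \(a\) are illustrative assumptions; for this particular choice the integral has a closed form, which serves as the reference.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = lambda x, v: np.cos(v * x)   # feature map phi(x, v)
a = lambda v: np.exp(-v ** 2)      # representation a(.) in L^2(pi)
x = np.linspace(-1.0, 1.0, 200)

# For these illustrative choices with pi = N(0, 1), a Gaussian integral
# gives the closed form f(x) = E_v[a(v) cos(v x)] = exp(-x^2 / 6) / sqrt(3).
f_exact = np.exp(-x ** 2 / 6) / np.sqrt(3)

for m in [10, 100, 1000, 10000]:
    v = rng.standard_normal(m)                      # v_j ~ pi, i.i.d.
    f_m = phi(x[:, None], v[None, :]) @ a(v) / m    # (1/m) sum_j a(v_j) phi(x, v_j)
    print(m, np.abs(f_m - f_exact).max())           # sup-error, roughly O(m^{-1/2})
```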
We draw attention to the case where \(p=2\), for which [10] proved that \(\mathcal{F}_{2,\pi}\) is an RKHS: \[\mathcal{F}_{2,\pi}=\mathcal{H}_{k_{\pi}},\quad\text{ with }\quad k_{\pi}(x,x^{\prime}):=\int_{\mathcal{V}}\phi(x,v)\phi(x^{\prime},v)\,\mathrm{d}\,\pi(v). \tag{14}\] Furthermore, the converse is also true: any RKHS can be represented as \(\mathcal{F}_{2,\pi}\), as shown in the following lemma. **Lemma 7**.: _If a kernel \(k\) has a finite trace, i.e., \(\int_{\mathcal{X}}k(x,x)\,\mathrm{d}\,\rho(x)<\infty\), then there exist a weight probability space \((\mathcal{V},\pi)\) and a feature function \(\phi:\mathcal{X}\times\mathcal{V}\mapsto\mathbb{R}\) such that_ \[k(x,x^{\prime})=k_{\pi}(x,x^{\prime}):=\int_{\mathcal{V}}\phi(x,v)\phi(x^{\prime},v)\,\mathrm{d}\,\pi(v).\] Proof.: By Mercer's decomposition, \(k(x,x^{\prime})=\sum_{j=1}^{\infty}\lambda_{j}e_{j}(x)e_{j}(x^{\prime})\), where \((e_{j})_{j\geq 1}\) form an orthonormal basis in \(L^{2}(\rho)\). Since \(\sum_{j=1}^{\infty}\lambda_{j}=\int_{\mathcal{X}}k(x,x)\,\mathrm{d}\,\rho(x)<\infty\), we can choose \(\mathcal{V}=\mathbb{N}_{+}\) and define a probability measure on \(\mathbb{N}_{+}\) by \(\pi(j)=\frac{\lambda_{j}}{\sum_{i}\lambda_{i}}\). Let \(\phi(x,j)=\sqrt{\sum_{i}\lambda_{i}}e_{j}(x)\) for \(j\in\mathbb{N}_{+}\). Then, \(k(x,x^{\prime})=\int_{\mathbb{N}_{+}}\phi(x,j)\phi(x^{\prime},j)\,\mathrm{d}\,\pi(j)\). We now turn to defining the function space for studying two-layer neural networks. **Definition 8** (Barron space).: Given a feature map \(\phi:\mathcal{X}\times\mathcal{V}\to\mathbb{R}\) such that \(\phi(x,\cdot)\in C_{0}(\mathcal{V})\) for any \(x\in\mathcal{X}\), we define the Barron space \(\mathcal{B}\) on \(\mathcal{X}\) as \[\mathcal{B}:=\left\{f=\int_{\mathcal{V}}\phi(\cdot,v)\,\mathrm{d}\,\mu(v):\,\mu\in\mathcal{M}(\mathcal{V})\right\},\] equipped with the norm \[\|f\|_{\mathcal{B}}=\inf_{\mu\in M_{f}}\|\mu\|_{\mathrm{TV}},\quad\text{where}\,M_{f}=\left\{\mu\in\mathcal{M}(\mathcal{V}):\,f=\int_{\mathcal{V}}\phi(\cdot,v)\,\mathrm{d}\,\mu(v)\right\}. \tag{15}\] The definition above, originally proposed in [11], is equivalent to the variation-based definition used in [1, 1, 12, 13]. When \(\phi(x,v)=\max(v^{\top}x,0)\), it is also equivalent to the moment-based definition proposed in [1, 1]. All of these definitions are essentially equivalent [1, 13]. In our duality analysis, we will specifically adopt the above Radon measure-based definition, as we rely on the fact that \(\mathcal{M}(\mathcal{X})\) is the dual space of \(C_{0}(\mathcal{X})\). However, to better understand the connection between the Barron space and the \(\mathcal{F}_{p,\pi}\) space, it is preferable to take the moment-based definition [1, 1]. Specifically, by extending [1, Proposition 3], we have the following lemma, whose proof can be found in Appendix B. **Lemma 9** (An alternative definition of Barron spaces).: _For any \(1\leq p\leq\infty\), we have_ \[\mathcal{B}=\cup_{\pi\in\mathcal{P}(\mathcal{V})}\mathcal{F}_{p,\pi},\quad\|f\|_{\mathcal{B}}=\inf_{\pi\in\mathcal{P}(\mathcal{V})}\|f\|_{\mathcal{F}_{p,\pi}}. \tag{16}\] This shows that \(\mathcal{B}\) is the union of all \(\mathcal{F}_{p,\pi}\) spaces for a fixed \(p\in[1,\infty]\). Surprisingly, the union is independent of the value of \(p\). One can also treat (16) as an alternative definition of the Barron space \(\mathcal{B}\), which extends the moment-based definition in [1, 1] to general feature maps. By choosing \(p=2\) and noting \(\mathcal{F}_{2,\pi}=\mathcal{H}_{k_{\pi}}\), we have \(\|f\|_{\mathcal{B}}=\inf_{\pi\in\mathcal{P}(\mathcal{V})}\|f\|_{\mathcal{H}_{k_{\pi}}}\). This implies that two-layer neural networks can be viewed as _adaptive kernel methods_ [1]. Additionally, by setting \(p=1\), we obtain \(\mathcal{B}=\cup_{\pi\in\mathcal{P}(\mathcal{V})}\mathcal{F}_{1,\pi}\), suggesting that \(\mathcal{B}\) is much larger than the \(L^{1}\)-type space \(\mathcal{F}_{1,\pi}\).
In particular, if \(\pi\) admits a density on \(\mathcal{V}\), then \(\mathcal{F}_{1,\pi}\) fails to include all the functions implemented by finite-neuron neural networks. All these imply that the Barron space \(\mathcal{B}\) should not be naively interpreted as an \(L^{1}\)-type space, and that feature adaptivity plays a more critical role in \(\mathcal{B}\). **Remark 10**.: _We are aware that Definitions 6 and 8 can be unified by the RKBS framework [1, 11]. However, adopting such an approach would make the definitions, and particularly the statements of our key results in Section 5, overly abstract and difficult to comprehend. Therefore, we will adhere to the concrete definitions presented above and refer interested readers to the concurrent work [1], where the RKBS structures of these spaces are discussed in detail._ ## 5 The dual equivalence between approximation and estimation In this section, we establish a duality framework that connects approximation and estimation for two-layer neural networks and random feature models. To achieve this, we need to define the following conjugate spaces. **Definition 11** (Conjugate space).: For any \(\gamma\in\mathcal{P}(\mathcal{X})\), let \(\tilde{\mathcal{F}}_{q,\gamma}\) be a function space over the weight space \(\mathcal{V}\) given by \[\tilde{\mathcal{F}}_{q,\gamma}=\left\{g=\int_{\mathcal{X}}b(x)\phi(x,\cdot)\,\mathrm{d}\,\gamma(x):\,b\in L^{q}(\gamma)\right\}\] equipped with the norm \[\|g\|_{\tilde{\mathcal{F}}_{q,\gamma}}=\inf_{b\in B_{g}}\|b\|_{q,\gamma},\quad\text{where}\,B_{g}=\left\{b\in L^{q}(\gamma):\,g=\int_{\mathcal{X}}b(x)\phi(x,\cdot)\,\mathrm{d}\,\gamma(x)\right\}.\] Similarly, we define the \(\tilde{\mathcal{B}}\) space on \(\mathcal{V}\): \[\tilde{\mathcal{B}}=\left\{g=\int_{\mathcal{X}}\phi(x,\cdot)\,\mathrm{d}\,\mu(x):\mu\in\mathcal{M}(\mathcal{X})\right\}\] and the norm is defined in the same way as (15). It is worth noting that the function spaces \(\tilde{\mathcal{F}}_{q,\gamma}\) and \(\tilde{\mathcal{B}}\) are defined over the weight domain \(\mathcal{V}\), while \(\mathcal{F}_{p,\pi}\) and \(\mathcal{B}\) are defined over the input domain \(\mathcal{X}\). ### Main result For clarity, we will begin by presenting our main result for the interpolation regime. **Theorem 12** (Interpolation regime).: _Let \(x_{1},\cdots,x_{n}\in\mathcal{X}\)._ 1. _For_ \(1<p\leq\infty\)_,_ \(1\leq q<\infty\)_, we have_ \[\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1,f(x_{i})=0}\|f\|_{q,\rho}=\sup_{\|g\|_{\tilde{\mathcal{F}}_{q^{\prime},\rho}}\leq 1}\inf_{c_{1},\cdots,c_{n}}\left\|g-\sum_{i=1}^{n}c_{i}\phi(x_{i},\cdot)\right\|_{p^{\prime},\pi}.\] (17) 2. _For_ \(1\leq q<\infty\)_, we have_ \[\sup_{\|f\|_{\mathcal{B}}\leq 1,f(x_{i})=0}\|f\|_{q,\rho}=\sup_{\|g\|_{\tilde{\mathcal{F}}_{q^{\prime},\rho}}\leq 1}\inf_{c_{1},\cdots,c_{n}}\left\|g-\sum_{i=1}^{n}c_{i}\phi(x_{i},\cdot)\right\|_{C_{0}(\mathcal{V})}\] (18) 3. _For_ \(1<p\leq\infty\)_, we have_ \[\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1,f(x_{i})=0}\|f\|_{C_{0}(\mathcal{X})}=\sup_{\|g\|_{\tilde{\mathcal{B}}}\leq 1}\inf_{c_{1},\cdots,c_{n}}\left\|g-\sum_{i=1}^{n}c_{i}\phi(x_{i},\cdot)\right\|_{p^{\prime},\pi}.\] (19) For each equality stated above, the left-hand side is exactly the \(I\)-complexity that governs the estimation error when learning the corresponding function space, as stated in Proposition 3, while the right-hand side is exactly the worst-case error of approximating the corresponding conjugate space with fixed features \(\{\phi(x_{i},\cdot)\}_{i=1}^{n}\).
Importantly, these equalities hold for any \(x_{1},\ldots,x_{n}\in\mathcal{X}\), regardless of whether they are sampled from a specific distribution. Thus, Theorem 12 establishes a form of duality between the estimation and approximation of the \(\mathcal{F}_{p,\pi}\) and \(\mathcal{B}\) spaces. In simpler terms, we can informally summarize the theorem as follows. * The \(L^{q}(\rho)\) estimation of \(\mathcal{F}_{p,\pi}\) is equivalent to the \(L^{p^{\prime}}(\pi)\) approximation of \(\tilde{\mathcal{F}}_{q^{\prime},\rho}\). * The \(L^{q}(\rho)\) estimation of \(\mathcal{B}\) is equivalent to the \(L^{\infty}\) approximation of \(\tilde{\mathcal{F}}_{q^{\prime},\rho}\). * The \(L^{\infty}\) estimation of \(\mathcal{F}_{p,\pi}\) is equivalent to the \(L^{p^{\prime}}(\pi)\) approximation of \(\tilde{\mathcal{B}}\). Here \(L^{\infty}\) should be understood as the uniform metric \(\|\cdot\|_{C_{0}(\mathcal{X})}\). **Remark 13**.: _Regarding Theorem 12, it is worth noting that we conjecture that (17) and (19) do not hold when \(p=1\). This is due to our proof relying on the property \(L^{p}(\pi)=(L^{p^{\prime}}(\pi))^{*}\) for \(1<p<\infty\). Nevertheless, this property does not hold in general for \(p=1\), i.e., \(L^{1}(\pi)\) is not the dual of \(L^{\infty}(\pi)\)._ To better understand the dual equivalence, we examine some concrete cases below. Case 1. Consider the case of \(q=2\). For \(p=2\), we have \[\sup_{\|f\|_{\mathcal{F}_{2,\pi}}\leq 1,\,f(x_{i})=0}\|f\|_{\rho}=\sup_{\|g\|_{\tilde{\mathcal{F}}_{2,\rho}}\leq 1}\inf_{c_{1},\cdots,c_{n}}\left\|g-\sum_{i=1}^{n}c_{i}\phi(x_{i},\cdot)\right\|_{\pi}, \tag{20}\] implying that the \(L^{2}\) estimation of an RKHS is equivalent to the \(L^{2}\) approximation of a conjugate RKHS. For the Barron space, we have \[\sup_{\|f\|_{\mathcal{B}}\leq 1,\,f(x_{i})=0}\|f\|_{\rho}=\sup_{\|g\|_{\tilde{\mathcal{F}}_{2,\rho}}\leq 1}\inf_{c_{1},\cdots,c_{n}}\left\|g-\sum_{i=1}^{n}c_{i}\phi(x_{i},\cdot)\right\|_{C_{0}(\mathcal{V})}, \tag{21}\] implying that estimating the Barron space is equivalent to the uniform approximation of the associated RKHS. By comparing (20) and (21), we can observe that estimation of two-layer neural networks and random feature models is equivalent to the approximation of the same RKHS, but under different norms for measuring approximation errors (\(L^{2}\) vs. \(L^{\infty}\)). This can also be interpreted as a precise characterization of how much larger the space \(\mathcal{B}\) is compared to \(\mathcal{F}_{2,\pi}\). Case 2. When \(p=2,q=\infty\), we have \[\sup_{\|f\|_{\mathcal{F}_{2,\pi}}\leq 1,\,f(x_{i})=0}\|f\|_{C_{0}(\mathcal{X})}=\sup_{\|g\|_{\tilde{\mathcal{B}}}\leq 1}\inf_{c_{1},\cdots,c_{n}}\left\|g-\sum_{i=1}^{n}c_{i}\phi(x_{i},\cdot)\right\|_{\pi}\] The left-hand side corresponds to the \(I\)-complexity that governs the \(L^{\infty}\) estimation error of learning \(\mathcal{F}_{2,\pi}\). This implies that we can obtain the \(L^{\infty}\) estimation error through the \(L^{2}\) approximation of the corresponding Barron space. By utilizing this approach, we conduct in Section 7 a comprehensive analysis of when the \(L^{\infty}\) estimation of an RKHS exhibits the CoD and when it does not.
Case 3. For \(1<p\leq 2\), \(q=2\), we have \[\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1}\inf_{c_{1},\cdots,c_{m}}\left\|f-\sum_{j=1}^{m}c_{j}\phi(\cdot,v_{j})\right\|_{\rho}=\sup_{\|g\|_{\tilde{\mathcal{F}}_{2,\rho}}\leq 1,\,g(v_{j})=0}\|g\|_{p^{\prime},\pi}.\] The left-hand side is the worst-case error of approximating \(\mathcal{F}_{p,\pi}\) with random features. When \(1<p<2\), \(\mathcal{F}_{p,\pi}\) is larger than the RKHS \(\mathcal{F}_{2,\pi}\). Therefore, this duality allows us to study random feature approximation beyond the RKHS. In Section 6, we delve into this observation in detail. The non-interpolation regime. We now present the general form of the dual equivalence, which applies to the non-interpolation regime. **Theorem 14**.: _Let \(\nu\in\mathcal{P}(\mathcal{X})\) be a probability distribution and \(1<r\leq\infty\)._ 1. _For_ \(1<p\leq\infty,\,1\leq q<\infty\)_, suppose that_ \(\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1}\|f\|_{r,\nu}<\infty\)_; then we have_ \[\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1,\|f\|_{r,\nu}\leq\epsilon}\|f\|_{q,\rho}=\sup_{\|g\|_{\tilde{\mathcal{F}}_{q^{\prime},\rho}}\leq 1}\inf_{c\in L^{r^{\prime}}(\nu)}\left(\left\|g-\int_{\mathcal{X}}c(x)\phi(x,\cdot)\,\mathrm{d}\,\nu(x)\right\|_{p^{\prime},\pi}+\epsilon\|c\|_{r^{\prime},\nu}\right).\] (22) 2. _For_ \(1\leq q<\infty\)_, suppose that_ \(\sup_{\|f\|_{\mathcal{B}}\leq 1}\|f\|_{r,\nu}<\infty\)_; then we have_ \[\sup_{\|f\|_{\mathcal{B}}\leq 1,\|f\|_{r,\nu}\leq\epsilon}\|f\|_{q,\rho}=\sup_{\|g\|_{\tilde{\mathcal{F}}_{q^{\prime},\rho}}\leq 1}\inf_{c\in L^{r^{\prime}}(\nu)}\left(\left\|g-\int_{\mathcal{X}}c(x)\phi(x,\cdot)\,\mathrm{d}\,\nu(x)\right\|_{C_{0}(\mathcal{V})}+\epsilon\|c\|_{r^{\prime},\nu}\right).\] (23) 3. _For_ \(1<p\leq\infty\)_, suppose that_ \(\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1}\|f\|_{r,\nu}<\infty\)_; then we have_ \[\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1,\|f\|_{r,\nu}\leq\epsilon}\|f\|_{C_{0}(\mathcal{X})}=\sup_{\|g\|_{\tilde{\mathcal{B}}}\leq 1}\inf_{c\in L^{r^{\prime}}(\nu)}\left(\left\|g-\int_{\mathcal{X}}c(x)\phi(x,\cdot)\,\mathrm{d}\,\nu(x)\right\|_{p^{\prime},\pi}+\epsilon\|c\|_{r^{\prime},\nu}\right).\] (24) Note that taking \(\nu=\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}}\) and \(\epsilon=0\) recovers Theorem 12. Compared with Theorem 12, we introduce another duality pairing: the \(L^{r}(\nu)\) constraint on fitting errors (\(\|f\|_{r,\nu}\leq\epsilon\)) in estimation and the \(L^{r^{\prime}}(\nu)\) regularization of the coefficient function (\(\epsilon\|c\|_{r^{\prime},\nu}\)) in approximation. Specifically, Theorem 14 generalizes Theorem 12 in the following ways: * First, the right-hand side represents the error of approximation with the norm of the coefficients regularized. This becomes more intuitive by taking \(\nu=\hat{\rho}_{n}\), in which case the right-hand side of (22) becomes \[\sup_{\|g\|_{\tilde{\mathcal{F}}_{q^{\prime},\rho}}\leq 1}\inf_{c_{1},\ldots,c_{n}}\left(\|g-\sum_{i=1}^{n}c_{i}\phi(x_{i},\cdot)\|_{p^{\prime},\pi}+\epsilon\left(\frac{1}{n}\sum_{i=1}^{n}|c_{i}|^{r^{\prime}}\right)^{1/r^{\prime}}\right).\] This allows us to study regularized estimators whose coefficient norms can be well controlled. For more details, see Section 6. * Second, the choice of \(\nu\) is flexible and not limited to the empirical distribution \(\hat{\rho}_{n}\). For instance, by taking \(\nu=\rho\), the left-hand side becomes the \(I\)-complexity that provides a lower bound for the distribution-dependent minimax error, as shown in Proposition 4.
We remark that the generality of Theorem 14 allows us to tailor the dual equivalence to different scenarios and provides a more comprehensive understanding of the trade-off between approximation and estimation errors. ### An intuitive proof of Theorem 14 Here we provide a proof for the special case of (22) (the proofs for (23) and (24) are similar) to illustrate the role of duality in our analysis. Assuming strong duality, we show that the optimization problems on the two sides of (22) are Lagrangian duals of each other. For a formal proof of Theorem 14, we refer to Appendix C. Proof.: _(Informal)_ For \(g\in\tilde{\mathcal{F}}_{q^{\prime},\rho}\), let \(e=g-\int_{\mathcal{X}}c(x)\phi(x,\cdot)\,\mathrm{d}\,\nu(x)\). Then, the minimization problem on the right-hand side of (22) can be written in the following constrained form \[\inf_{c\in L^{r^{\prime}}(\nu),e\in L^{p^{\prime}}(\pi)} \|e\|_{p^{\prime},\pi}+\epsilon\|c\|_{r^{\prime},\nu}\] \[\text{s.t. }e=g-\int_{\mathcal{X}}c(x)\phi(x,\cdot)\,\mathrm{d}\,\nu(x).\] The associated Lagrangian \(\mathcal{J}_{g}:L^{r^{\prime}}(\nu)\times L^{p^{\prime}}(\pi)\times L^{p}(\pi)\mapsto\mathbb{R}\) is given by \[\mathcal{J}_{g}(c,e,\lambda)=\|e\|_{p^{\prime},\pi}+\int_{\mathcal{V}}\lambda(v)\left(g(v)-\int_{\mathcal{X}}c(x)\phi(x,v)\,\mathrm{d}\,\nu(x)-e(v)\right)\mathrm{d}\,\pi(v)+\epsilon\|c\|_{r^{\prime},\nu}.\] The dual objective \(\mathcal{D}_{g}(\lambda)=\inf_{c,e}\mathcal{J}_{g}(c,e,\lambda)\) is given by \[\mathcal{D}_{g}(\lambda)=\begin{cases}\int_{\mathcal{V}}\lambda(v)g(v)\,\mathrm{d}\,\pi(v),&\quad\text{if }\left\|\int_{\mathcal{V}}\lambda(v)\phi(\cdot,v)\,\mathrm{d}\,\pi(v)\right\|_{r,\nu}\leq\epsilon,\,\|\lambda\|_{p,\pi}\leq 1,\\ -\infty,&\quad\text{otherwise.}\end{cases}\] Hence, the dual problem is \[\sup_{\lambda\in S}\int_{\mathcal{V}}\lambda(v)g(v)\,\mathrm{d}\,\pi(v),\] where \[S=\left\{\lambda\in L^{p}(\pi):\left\|\int_{\mathcal{V}}\lambda(v)\phi(\cdot,v)\,\mathrm{d}\,\pi(v)\right\|_{r,\nu}\leq\epsilon,\,\|\lambda\|_{p,\pi}\leq 1\right\}. \tag{25}\] By strong duality, we arrive at \[\inf_{c\in L^{r^{\prime}}(\nu)}\left(\left\|g-\int_{\mathcal{X}}c(x)\phi(x,\cdot)\,\mathrm{d}\,\nu(x)\right\|_{p^{\prime},\pi}+\epsilon\|c\|_{r^{\prime},\nu}\right)=\sup_{\lambda\in S}\langle\lambda,g\rangle. \tag{26}\] Now we take the supremum over the unit ball of \(\tilde{\mathcal{F}}_{q^{\prime},\rho}\) on both sides of (26): \[\sup_{\|g\|_{\tilde{\mathcal{F}}_{q^{\prime},\rho}}\leq 1}\sup_{\lambda\in S}\langle\lambda,g\rangle =\sup_{\|b\|_{q^{\prime},\rho}\leq 1}\sup_{\lambda\in S}\int_{\mathcal{V}}\lambda(v)\left(\int_{\mathcal{X}}b(x)\phi(x,v)\,\mathrm{d}\,\rho(x)\right)\mathrm{d}\,\pi(v)\] \[=\sup_{\|b\|_{q^{\prime},\rho}\leq 1}\sup_{\lambda\in S}\int_{\mathcal{X}}b(x)\left(\int_{\mathcal{V}}\lambda(v)\phi(x,v)\,\mathrm{d}\,\pi(v)\right)\mathrm{d}\,\rho(x)\] \[=\sup_{\|b\|_{q^{\prime},\rho}\leq 1}\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1,\|f\|_{r,\nu}\leq\epsilon}\int_{\mathcal{X}}b(x)f(x)\,\mathrm{d}\,\rho(x)\] \[=\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1,\|f\|_{r,\nu}\leq\epsilon}\sup_{\|b\|_{q^{\prime},\rho}\leq 1}\int_{\mathcal{X}}b(x)f(x)\,\mathrm{d}\,\rho(x)\] \[=\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1,\|f\|_{r,\nu}\leq\epsilon}\|f\|_{q,\rho},\] where the third step follows from the definition of \(S\) in (25). This completes the proof. ## 6 Random feature learning beyond kernel regime In this section, we employ the duality framework to investigate the learnability of functions in \(\mathcal{F}_{p,\pi}\) using RFMs.
While prior analyses of RFMs have primarily focused on the case where \(p=2\), our duality framework enables us to examine the entire range of \(p\in(1,\infty)\). The crux of our approach involves employing a (local) Rademacher complexity-based bound in the dual space, where the problem is significantly simplified. We first introduce the assumptions about the feature function \(\phi\) that are required for our duality analysis. We first make a boundedness assumption. **Assumption 15**.: In the case of \(2\leq q<\infty\), we assume that there exists a constant \(M_{q}\) such that \(\|\phi(\cdot,v)\|_{q,\rho}\leq M_{q}\) for any \(v\in\mathcal{V}\). In the case of \(q=\infty\), we assume \(\phi(\cdot,v)\in C_{0}(\mathcal{X})\) and \(\|\phi(\cdot,v)\|_{\infty}\leq M_{\infty}\) for any \(v\in\mathcal{V}\). We also assume that the Rademacher complexities of the corresponding conjugate spaces \(\tilde{\mathcal{F}}_{q^{\prime},\rho}\) and \(\tilde{\mathcal{B}}\) are well-controlled. **Assumption 16**.: We assume a Rademacher complexity bound for the space \(\tilde{\mathcal{F}}_{q^{\prime},\rho}\) or the \(\tilde{\mathcal{B}}\) space: there exists a constant \(R_{q}\) such that for any \(v_{1},\cdots,v_{n}\in\mathcal{V}\), \[\widehat{\operatorname{Rad}}_{n}(\tilde{\mathcal{F}}_{q^{\prime},\rho}(1))\leq\frac{R_{q}}{\sqrt{n}}\quad\text{for the case}\,2\leq q<\infty, \tag{27}\] \[\text{or}\quad\widehat{\operatorname{Rad}}_{n}(\tilde{\mathcal{B}}(1))\leq\frac{R_{\infty}}{\sqrt{n}}\quad\text{for the case}\,q=\infty. \tag{28}\] Note that \(O(n^{-1/2})\) is the natural scaling of the Rademacher complexity of most function classes of interest. The following lemma demonstrates that the Rademacher complexity bounds specified in Assumption 16 hold for many natural choices of the feature function \(\phi\). The proof is provided in Appendix D.1. **Lemma 17**.: 1. _For_ \(2\leq q<\infty\)_, suppose that the feature function_ \(\phi\) _satisfies_ \[\|\phi(\cdot,v)\|_{\rho}\leq R\] _for any_ \(v\in\mathcal{V}\)_. Then (_27_) in Assumption_ 16 _holds with_ \(R_{q}\leq\sqrt{q}R\)_._ 2. _For_ \(q=\infty\)_, if_ \(\mathcal{X}\) _and_ \(\mathcal{V}\) _are both supported on_ \(\{x:\|x\|_{2}\leq R\}\) _and_ \(\phi(x,v)=\sigma(x^{\top}v)\)_, where_ \(\sigma:\mathbb{R}\to\mathbb{R}\) _is_ \(L\)_-Lipschitz and_ \(\sigma(0)=0\)_, then (_28_) in Assumption_ 16 _holds with_ \(R_{\infty}\leq LR^{2}\)_._ Next, we present the random feature approximation bound for functions in \(\mathcal{F}_{p,\pi}\): **Theorem 18**.: _Suppose \(1<p\leq 2\) and \(v_{j}\stackrel{{iid}}{{\sim}}\pi\) for \(j\in[m]\). If Assumptions 15 and 16 hold, then w.p. at least \(1-\delta\) over the sampling of \(\{v_{j}\}_{j=1}^{m}\), there exists an absolute constant \(C_{1}\) such that for any \(C\geq C_{1}\), we have_ \[\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1}\inf_{\|\mathbf{c}\|_{p}\leq Cm^{1/p}}\left\|f-\frac{1}{m}\sum_{j=1}^{m}c_{j}\phi(\cdot,v_{j})\right\|_{q,\rho}\lesssim\left(\frac{M_{q}^{p^{\prime}-2}R_{q}^{2}\log^{3}m+M_{q}\log(1/\delta)}{m}\right)^{1/p^{\prime}},\,2\leq q<\infty,\] \[\sup_{\|f\|_{\mathcal{F}_{p,\pi}}\leq 1}\inf_{\|\mathbf{c}\|_{p}\leq Cm^{1/p}}\left\|f-\frac{1}{m}\sum_{j=1}^{m}c_{j}\phi(\cdot,v_{j})\right\|_{C_{0}(\mathcal{X})}\lesssim\left(\frac{M_{\infty}^{p^{\prime}-2}R_{\infty}^{2}\log^{3}m+M_{\infty}\log(1/\delta)}{m}\right)^{1/p^{\prime}}. \tag{29}\] The proof is deferred to Appendix D.2. This theorem establishes the uniform approximability of \(\mathcal{F}_{p,\pi}\) by RFMs.
In other words, upon sampling the random features \(\phi(\cdot,v_{1}),\ldots,\phi(\cdot,v_{m})\), any function in \(\mathcal{F}_{p,\pi}(1)\) can be approximated effectively using these features. This finding is in agreement with the common use of RFMs in practice, where a single set of random features is repeatedly used to learn multiple functions through optimization of the outer coefficients. Note that the rate of the approximation error scales as \(O(m^{-(p-1)/p})\). As \(p\) approaches \(1\), this rate deteriorates, consistent with the observation that \(\mathcal{F}_{p,\pi}\) becomes larger as \(p\) decreases towards \(1\), as shown in (13). It should be noted that our bound blows up when \(p=1\), but this does not imply that \(\mathcal{F}_{1,\pi}\) cannot be approximated by RFMs with a rate. In fact, \(\mathcal{F}_{1,\pi}\) is a subset of \(\mathcal{B}\), and \(\mathcal{B}\) can be approximated with a rate by RFMs. However, in general, the rate scales like \(O(m^{-1/d})\) [22], which suffers from the CoD if no further conditions on the feature function and weight distribution are imposed. These observations suggest that our bound is not tight when \(p\) is close to \(1\). It is noteworthy that the approximation rate is independent of the value of \(q\) in all cases. Specifically, the \(L^{2}\) approximation and the \(L^{\infty}\) approximation have the same rate. This contrasts with the estimation problem, where \(L^{\infty}\) estimation suffers from the CoD while \(L^{2}\) estimation does not. A more detailed discussion of this issue can be found in Section 7. Comparison with [16]. The special case of \(p=q=2\) (i.e., the \(L^{2}(\pi)\) approximation of the RKHS \(\mathcal{F}_{2,\pi}\)) was proved in [16]. However, the proof heavily relies on the Hilbert structure of \(\mathcal{F}_{2,\pi}\) and exploits the corresponding covariance operator. In contrast, our result holds for a general choice of \(p\) and \(q\) and is derived naturally from our duality framework. Notably, our proof is substantially simpler and more concise. Theorem 18 provides bounds solely on the approximation error. We next study learning in the finite-sample case. **Theorem 19**.: _Suppose that \(f^{*}\in\mathcal{F}_{p,\pi}(1)\) and \(x_{i}\overset{\text{iid}}{\sim}\rho\), \(y_{i}=f^{*}(x_{i})+\xi_{i}\), where the noise is distributed as \(\xi_{i}\sim\mathcal{N}(0,\varsigma^{2})\). Let \(v_{1},\cdots,v_{m}\) be i.i.d. random weights sampled from \(\pi\). Consider the estimator \(\hat{f}=\frac{1}{m}\sum_{j=1}^{m}\hat{c}_{j}\phi(\cdot,v_{j})\) with \(\hat{c}\) given by_ \[\hat{c}:=\operatorname*{argmin}_{\|c\|_{p}\leq\lambda}\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\frac{1}{m}\sum_{j=1}^{m}c_{j}\phi(x_{i},v_{j})\right)^{2}. \tag{30}\] _Assume \(\sup_{x\in\mathcal{X},v\in\mathcal{V}}|\phi(x,v)|\leq M\). Then, with an appropriate choice of \(\lambda\), for any \(\delta\in(0,1)\), it holds w.p. at least \(1-\delta\) over the sampling of the data \(\{(x_{i},y_{i})\}_{i=1}^{n}\) and features \(\{v_{j}\}_{j=1}^{m}\) that_ \[\|\hat{f}-f^{*}\|_{\rho}^{2}\lesssim M\varsigma\sqrt{\frac{p^{\prime}+\log(1/\delta)}{n}}+\frac{p^{\prime}M^{2}\log^{3}n+M\log(1/\delta)}{n}+\left(\frac{M^{p^{\prime}}\log^{3}m+M\log(1/\delta)}{m}\right)^{2/p^{\prime}}.\] This theorem provides an upper bound on the total error of learning functions in \(\mathcal{F}_{p,\pi}\); the proof is deferred to Appendix D.3. Note that here we focus on norm-constrained estimators, but similar arguments can be straightforwardly extended to penalized estimators.
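To make the estimator (30) concrete, here is a minimal sketch for the \(p=2\) case, solved by projected gradient descent; the projection onto an \(\ell^{2}\) ball is a simple rescaling. The ReLU feature map, the synthetic data, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m, lam = 5, 500, 200, 50.0

V = rng.standard_normal((m, d)) / np.sqrt(d)   # random weights v_j ~ pi
X = rng.standard_normal((n, d)) / np.sqrt(d)   # inputs x_i ~ rho
y = np.tanh(X @ rng.standard_normal(d)) + 0.1 * rng.standard_normal(n)

Phi = np.maximum(X @ V.T, 0.0) / m             # entries (1/m) phi(x_i, v_j)

# Projected gradient descent for min_{||c||_2 <= lam} (1/n) ||y - Phi c||^2.
c = np.zeros(m)
step = n / (2.0 * np.linalg.norm(Phi, 2) ** 2)  # 1 / (gradient Lipschitz const.)
for _ in range(2000):
    c -= step * (-2.0 / n) * (Phi.T @ (y - Phi @ c))
    norm = np.linalg.norm(c)
    if norm > lam:                              # project back onto the l2 ball
        c *= lam / norm

print("train MSE:", np.mean((y - Phi @ c) ** 2), "||c||_2:", np.linalg.norm(c))
```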
The upper bound on the total error comprises two components: the estimation error and the approximation error. Notably, the estimation error rate is independent of the value of \(p\), and exhibits a scaling of \(O(n^{-1})\) in the noiseless case and \(O(n^{-1/2})\) in the presence of noise. This stands in contrast to the approximation error, for which the rate deteriorates as \(p\) decreases to \(1\). In summary, we have proved that RFMs can effectively learn functions in \(\mathcal{F}_{p,\pi}\) as long as \(p>1\), at rates that are independent of \(d\). This demonstrates the applicability of RFMs beyond the kernel regime where \(p=2\). ## 7 \(L^{\infty}\) learnability of RKHS It is well-known that functions in RKHS can be learned efficiently under the \(L^{2}\) metric, but this may not be sufficient when the \(L^{\infty}\) metric is more relevant, e.g., in security- and safety-critical applications. In this section, we consider the problem of learning functions in RKHS under the \(L^{\infty}\) norm: \[\|\hat{f}-f^{*}\|_{\infty}=\|\hat{f}-f^{*}\|_{C_{0}(\mathcal{X})},\] where \(\hat{f}\) and \(f^{*}\) denote the estimator and the target function, respectively. By Lemma 7, for any kernel \(k\), there exist \(\phi:\mathcal{X}\times\mathcal{V}\mapsto\mathbb{R}\) and \(\pi\in\mathcal{P}(\mathcal{V})\) such that \(k(x,x^{\prime})=k_{\pi}(x,x^{\prime})=\int_{\mathcal{V}}\phi(x,v)\phi(x^{\prime},v)\,\mathrm{d}\pi(v)\) and \(\mathcal{H}_{k}=\mathcal{F}_{2,\pi}\). Hence it suffices to consider the \(\mathcal{F}_{2,\pi}\) space, for which we can apply our dual equivalence. Specifically, Theorem 12 implies that the \(L^{\infty}\) estimation of \(\mathcal{F}_{2,\pi}\) is equivalent to the \(L^{2}(\pi)\)-approximation of \(\tilde{\mathcal{B}}\) with random features; the latter has been systematically investigated in [14, 15]. Specifically, we shall utilize the spectral-based approach developed in [14] in our analysis. ### Lower bounds To present our results, we need to introduce the dual kernel. For any \(\gamma\in\mathcal{P}(\mathcal{X})\), define the dual kernel \(\tilde{k}_{\gamma}:\mathcal{V}\times\mathcal{V}\to\mathbb{R}\) as \[\tilde{k}_{\gamma}(v,v^{\prime}):=\int_{\mathcal{X}}\phi(x,v)\phi(x,v^{\prime})\,\mathrm{d}\,\gamma(x). \tag{31}\] **Theorem 20**.: _Recall that \(\Lambda_{\tilde{k}_{\gamma},\pi}(m)=\sqrt{\sum_{i=m+1}^{\infty}\mu_{i}^{\tilde{k}_{\gamma},\pi}}\), where \(\{\mu_{i}^{\tilde{k}_{\gamma},\pi}\}_{i=1}^{\infty}\) denotes the spectrum of the dual kernel \(\tilde{k}_{\gamma}\) with respect to \(\pi\) in decreasing order. Let \(\tilde{\Lambda}_{\pi}(n)=\sup_{\gamma\in\mathcal{P}(\mathcal{X})}\Lambda_{\tilde{k}_{\gamma},\pi}(n)\). Recall that we use \(\mathcal{A}_{n}\) to denote the set of all measurable estimators._ 1. _For any input data_ \(x_{1},\cdots,x_{n}\in\mathcal{X}\)_, we have_ \[\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{F}_{2,\pi}}\leq 1}\|T_{n}(\{(x_{i},f(x_{i}))\}_{i=1}^{n})-f\|_{C_{0}(\mathcal{X})}\gtrsim\tilde{\Lambda}_{\pi}(n).\] 2. _Suppose that_ \(x_{i}\stackrel{{iid}}{{\sim}}\rho,\xi_{i}\stackrel{{iid}}{{\sim}}\mathcal{N}(0,\varsigma^{2})\)_.
Let_ \(\tilde{s}_{\rho}=\int_{\mathcal{V}}\tilde{k}_{\rho}(v,v)\,\mathrm{d}\,\pi(v)\) _be the trace of_ \(\tilde{k}_{\rho}\)_. Then we have_ \[\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{F}_{2,\pi}}\leq 1}\mathbb{E}\,\|T_{n}(\{(x_{i},f(x_{i})+\xi_{i})\}_{i=1}^{n})-f\|_{C_{0}(\mathcal{X})}\gtrsim\min\left(1,\frac{\varsigma}{\sqrt{\tilde{s}_{\rho}}}\right)\tilde{\Lambda}_{\pi}(n).\] The proof is deferred to Appendix E.1. This theorem shows that the error of \(L^{\infty}\) estimation can be lower bounded using the spectrum of dual kernels, in both the sample-dependent case and the distribution-dependent case. Next, we further show that the lower bounds can also be controlled by the primal kernel \(k\). **Corollary 21**.: _Let \(\mathcal{H}_{k}\subset C_{0}(\mathcal{X})\) be the RKHS associated with the kernel \(k\)._ 1. _For any input data_ \(x_{1},\cdots,x_{n}\in\mathcal{X}\)_, we have_ \[\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{H}_{k}}\leq 1}\|T_{n}(\{(x_{i},f(x_{i}))\}_{i=1}^{n})-f\|_{C_{0}(\mathcal{X})}\gtrsim\sup_{\gamma\in\mathcal{P}(\mathcal{X})}\Lambda_{k,\gamma}(n).\] 2. _Suppose that_ \(x_{i}\stackrel{{iid}}{{\sim}}\rho,\xi_{i}\stackrel{{iid}}{{\sim}}\mathcal{N}(0,\varsigma^{2})\)_. Let_ \(s_{\rho}=\int_{\mathcal{X}}k(x,x)\,\mathrm{d}\,\rho(x)\) _be the trace of_ \(k\)_. We have_ \[\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{H}_{k}}\leq 1}\mathbb{E}\,\|T_{n}(\{(x_{i},f(x_{i})+\xi_{i})\}_{i=1}^{n})-f\|_{C_{0}(\mathcal{X})}\gtrsim\min\left(1,\frac{\varsigma}{\sqrt{s_{\rho}}}\right)\sup_{\gamma\in\mathcal{P}(\mathcal{X})}\Lambda_{k,\gamma}(n).\] The above result relies on the \(\mathcal{F}_{2,\pi}\) representation of the RKHS, which is established in Lemma 7. Specifically, we show that for any kernel \(k\), there exist a feature function \(\phi:\mathcal{X}\times\mathcal{V}\mapsto\mathbb{R}\) and a probability measure \(\pi\in\mathcal{P}(\mathcal{V})\) such that \(\mathcal{H}_{k}=\mathcal{F}_{2,\pi}\) and the spectrum of the corresponding dual kernel is the same as that of \(k\). This provides a crucial link between the primal and dual representations of the RKHS, and enables us to leverage the duality framework to derive the minimax estimation rate. For a detailed proof, we refer to Appendix E.2. ### Upper bounds We now turn to establishing similar upper bounds on uniform estimation errors. Specifically, we focus on dot-product kernels. Assume \(\mathcal{X}=\mathcal{V}=\mathbb{S}^{d-1},\pi=\rho=\tau_{d-1}\), and the feature function is given by a single neuron without bias: \(\phi(x,v)=\sigma(x^{\top}v)\), where \(\sigma:\mathbb{R}\to\mathbb{R}\) is a nonlinear activation function. In such a case, \(\mathcal{F}_{2,\pi}\) and \(\tilde{\mathcal{F}}_{2,\rho}\) are essentially the same space, and we will use \(\mathcal{F}_{2,\tau_{d-1}}\) to denote them without specifying the input domain for simplicity. The kernel associated to \(\mathcal{F}_{2,\tau_{d-1}}\) is dot-product: \[k(x,x^{\prime})=\int_{\mathbb{S}^{d-1}}\sigma(v^{\top}x)\sigma(v^{\top}x^{\prime})\,\mathrm{d}\,\tau_{d-1}(v)=\kappa(x^{\top}x^{\prime}), \tag{32}\] where \(\kappa:[-1,1]\to\mathbb{R}\). Let \(\{\lambda_{j}\}_{j=1}^{\infty}\) be the eigenvalues of \(k\) on \(L^{2}(\tau_{d-1})\) in decreasing order.
The spectral decomposition of \(k\) is given by \[\kappa\left(x^{T}x^{\prime}\right)=\sum_{k=0}^{\infty}\sum_{j=1}^{N(d,k)}t_{k}Y_{k,j}(x)Y_{k,j}\left(x^{\prime}\right), \tag{33}\] where \(t_{k}\) are the eigenvalues and the spherical harmonics \(Y_{k,j}\) are the corresponding eigenfunctions, satisfying \(\mathbb{E}_{x^{\prime}\sim\tau_{d-1}}\left[\kappa\left(x^{T}x^{\prime}\right)Y_{k,j}\left(x^{\prime}\right)\right]=t_{k}Y_{k,j}(x)\). Note that \(\{\lambda_{j}\}_{j}\) are the eigenvalues counted with multiplicity, while \(\{t_{k}\}_{k}\) are the eigenvalues counted without multiplicity. We refer to [22, Section 2.1] for more details about the eigendecomposition of dot-product kernels. **Theorem 22**.: _Suppose \(k:\mathbb{S}^{d-1}\times\mathbb{S}^{d-1}\mapsto\mathbb{R}\) is a dot-product kernel taking the form of (32). For any non-increasing function \(L:\mathbb{N}^{+}\to\mathbb{R}^{+}\) that satisfies \(\Lambda_{k,\tau_{d-1}}(m)\leq L(m)\), let \(q_{L}(d)=\sup_{k\geq 1}\frac{L(k)}{L((d+1)k)}\). Let \(\{(x_{i},y_{i})\}_{i=1}^{n}\) be \(n\) samples independently drawn from \(x_{i}\sim\tau_{d-1}\), \(y_{i}=f^{*}(x_{i})+\xi_{i}\), where the noise \(\xi_{i}\)'s are mean-zero and \(\varsigma\)-subgaussian and the target function \(f^{*}\in\mathcal{H}_{k}(1)\). Consider the estimator_ \[\hat{f}=\operatorname*{argmin}_{\|f\|_{\mathcal{H}_{k}}\leq 1}\frac{1}{n}\sum_{i=1}^{n}(f(x_{i})-y_{i})^{2}.\] _Then with probability at least \(1-\delta\) over the sampling of \(\{(x_{i},y_{i})\}_{i=1}^{n}\), we have_ \[\|\hat{f}-f^{*}\|_{C_{0}(\mathbb{S}^{d-1})}\lesssim\inf_{m\geq 1}\left[\sqrt{q_{L}(d)L(m)}+\sqrt{m}(\epsilon(n,\varsigma,\delta)+e(n,\delta))\right], \tag{34}\] _where \(\epsilon(n,\varsigma,\delta)=\left(\frac{\varsigma^{2}\kappa(1)(1+\log(1/\delta))}{n}\right)^{1/4}\) and \(e(n,\delta)=\sqrt{\frac{\kappa(1)^{2}\log^{3}n+\kappa(1)\log(1/\delta)}{n}}\)._ This theorem presents a spectral-based upper bound for the \(L^{\infty}\) estimation error; the proof can be found in Appendix E.3. To obtain the tightest bound, one can choose \(L(m)=\Lambda_{k,\tau_{d-1}}(m)\). However, since the exact value of \(\Lambda_{k,\tau_{d-1}}(m)\) is often unknown, the introduction of \(L(m)\) is mainly for the convenience of calculating the constant \(q_{L}(d)\). Take \(L(m)=\Lambda_{k,\tau_{d-1}}(m)=\sum_{j=m+1}^{\infty}\lambda_{j}\). If \(\lambda_{j}\sim j^{-(1+2\beta)}\), then we roughly have \(L(m)\sim m^{-2\beta}\) and \(q_{L}(d)\sim d^{2\beta}\). Plugging these into (34) yields \[\|\hat{f}-f^{*}\|_{C_{0}(\mathcal{X})}\lesssim_{\beta,\varsigma,\delta}\inf_{m\geq 1}\left(d^{\beta}m^{-\beta}+m^{1/2}n^{-1/4}\right)\lesssim_{\beta,\varsigma,\delta}d^{\frac{\beta}{2\beta+1}}n^{-\frac{\beta}{2(2\beta+1)}}, \tag{35}\] where we hide constants that depend on \(\beta,\varsigma\) and \(\delta\). It is evident that if \(\beta\) does not depend on \(d\), then the error rate does not exhibit the CoD. **Remark 23**.: _The rotational invariance assumption plays a critical role in the analysis presented above, and the result may potentially be extended to settings where the densities of \(\pi\) and \(\rho\) are strictly positive. In addition, by utilizing localization techniques (see, e.g., [20, Chapter 13]) to tackle noise-induced errors, one may be able to obtain tighter bounds.
However, our main focus here is to understand how the error rate depends on the kernel spectrum, rather than to pursue optimal rates._ ### Examples We now turn to instantiating the lower and upper bounds established above for concrete examples. Specifically, we focus on dot-product kernels that take the form of (32) and discuss how the smoothness of \(\sigma(\cdot)\) affects the \(L^{\infty}\) learnability. These kernels are of particular interest in understanding RFMs and neural networks [14, 15, 16]. To present our results, we will restate some results from [15, Section 4] when needed. Non-smooth activations. Consider the ReLU\({}^{\alpha}\) activation function \(\sigma(t)=\max(0,t)^{\alpha}\) with \(\alpha\in\mathbb{N}^{+}\). [15, Proposition 5] shows that there exists a constant \(C_{\alpha,d}\) depending on \(1/d\) polynomially such that \[\Lambda_{k,\tau_{d-1}}(m)\geq C_{\alpha,d}m^{-\frac{2\alpha+1}{d-1}}. \tag{36}\] Combining (36) with Corollary 21 yields the following: **Corollary 24**.: _For \(\sigma(t)=\max(0,t)^{\alpha}\) with \(\alpha\in\mathbb{N}^{+}\), there exists a constant \(C_{\alpha,d}\) that depends on \(1/d\) polynomially such that_ 1. _For any input data_ \(x_{1},\cdots,x_{n}\in\mathcal{X}\)_, we have_ \[\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{H}_{k}}\leq 1}\|T_{n}(\{(x_{i},f(x_{i}))\}_{i=1}^{n})-f\|_{C_{0}(\mathbb{S}^{d-1})}\geq C_{\alpha,d}n^{-\frac{2\alpha+1}{2(d-1)}}.\] 2. _Suppose_ \(x_{i}\stackrel{{iid}}{{\sim}}\rho\) _and_ \(\xi_{i}\stackrel{{iid}}{{\sim}}\mathcal{N}(0,\varsigma^{2})\) _for_ \(i=1,\ldots,n\)_. We have_ \[\inf_{T_{n}\in\mathcal{A}_{n}}\sup_{\|f\|_{\mathcal{H}_{k}}\leq 1}\mathbb{E}\left\|T_{n}(\{(x_{i},f(x_{i})+\xi_{i})\}_{i=1}^{n})-f\right\|_{C_{0}(\mathbb{S}^{d-1})}\geq\min(1,\varsigma)C_{\alpha,d}n^{-\frac{2\alpha+1}{2(d-1)}}.\] The lower bound given in Corollary 24 suggests that kernel methods induced by ReLU\({}^{\alpha}\) activation functions suffer from the CoD. This immediately implies that \(L^{\infty}\) learning with popular ReLU neural networks also suffers from the CoD, since the Barron space \(\mathcal{B}\) contains the RKHS \(\mathcal{F}_{2,\tau_{d-1}}\) as a subset (according to Lemma 7).
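Before turning to smooth activations, the spectral gap behind this dichotomy can be observed numerically: eigendecomposing a Monte Carlo approximation of the dot-product kernel (32) reveals a slowly decaying eigenvalue tail for ReLU and a rapidly vanishing one for a smooth activation. Below is a minimal sketch printing the tail sums \(\Lambda^{2}_{k,\tau_{d-1}}(m)\); the dimension, the sample sizes, and the choice of \(\tanh\) as the smooth activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_mc = 10, 400, 20000

def sphere(k):
    """k i.i.d. points uniform on the sphere S^{d-1}."""
    g = rng.standard_normal((k, d))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

X, V = sphere(n), sphere(n_mc)

def tail(sigma, m):
    # Monte Carlo kernel k(x, x') = E_v[sigma(v.x) sigma(v.x')], followed by
    # Nystrom eigenvalues of K/n and the tail sum beyond index m.
    F = sigma(X @ V.T)                 # n x n_mc feature evaluations
    K = F @ F.T / n_mc
    mu = np.linalg.eigvalsh(K / n)[::-1]
    return np.clip(mu[m:], 0.0, None).sum()

relu = lambda t: np.maximum(t, 0.0)
for m in [5, 20, 80]:
    # the ReLU tail typically decays much more slowly than the tanh tail
    print(m, tail(relu, m), tail(np.tanh, m))
```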
Smooth activations. We now turn to smooth activation functions, such as sigmoid, softplus, arctan, GELU, and Swish/SiLU. All popular smooth activation functions satisfy the following assumption: **Assumption 25**.: Assume that \(B_{k}:=\max_{t\in\mathbb{R}}|\sigma^{(k)}(t)|\lesssim\Gamma(k-1)\). See [15] for the verification of this assumption. Under this assumption, [15, Proposition 9] shows that \[\Lambda_{k,\tau_{d-1}}(m)\lesssim\frac{1}{m}. \tag{37}\] Setting \(L(m)=\frac{C}{m}\) in Theorem 22 yields the following: **Corollary 26**.: _Suppose \(k:\mathbb{S}^{d-1}\times\mathbb{S}^{d-1}\mapsto\mathbb{R}\) is a dot-product kernel taking the form of (32) and the activation function \(\sigma\) satisfies Assumption 25. Let \(\{(x_{i},y_{i})\}_{i=1}^{n}\) be \(n\) samples independently drawn from \(x_{i}\sim\tau_{d-1},\,y_{i}=f^{*}(x_{i})+\xi_{i}\), where the noise \(\xi_{i}\)'s are mean-zero and \(\varsigma\)-subgaussian and the target function \(f^{*}\in\mathcal{H}_{k}(1)\). Consider the estimator_ \[\hat{f}=\operatorname*{argmin}_{\|f\|_{\mathcal{H}_{k}}\leq 1}\frac{1}{n}\sum_{i=1}^{n}(f(x_{i})-y_{i})^{2}.\] _Then with probability at least \(1-\delta\) over the sampling of \(\{(x_{i},y_{i})\}_{i=1}^{n}\), we have_ \[\|\hat{f}-f^{*}\|_{C_{0}(\mathbb{S}^{d-1})}\lesssim_{\varsigma,\delta}d^{1/4}n^{-1/8}. \tag{38}\] Corollary 26 shows that the \(L^{\infty}\) error scales as \(O(n^{-1/8})\). Thus, we can conclude that in this case, \(L^{\infty}\) learning is tractable. ## 8 Conclusion In this paper, we proposed a duality framework to analyze the approximation and estimation of the \(\mathcal{F}_{p,\pi}\) and Barron spaces, which are relevant to understanding kernel methods, RFMs, and neural networks. Specifically, we establish a dual equivalence between approximation and estimation for learning functions in these spaces, which allows us to convert an approximation problem to an estimation problem and vice versa. Therefore, in analysis, one only needs to focus on the easier of the two. To demonstrate the power of our duality framework, we provide comprehensive analyses of two specific problems: random feature learning beyond RKHS and the \(L^{\infty}\) estimation of RKHS. Our duality analysis recovers existing results with much simpler proofs and stronger conclusions. To establish the dual equivalence, we introduce an information-based complexity to measure the capacity of a function class and show how it controls the minimax error of estimating that function class. For future work, it is promising to apply our duality framework to different learning settings, e.g., unsupervised learning, out-of-distribution learning, and reinforcement learning, since in this paper we only consider the supervised learning setting. In fact, a similar analysis has been conducted in [1, 23] to understand reinforcement learning. In addition, the proposed information-based complexity can also be useful for studying the statistical properties of other function spaces. For instance, [17] already adopted similar ideas to study the optimal approximation of various traditional function spaces, such as Sobolev spaces. ## Acknowledgements Lei Wu is supported in part by a startup fund from Peking University. Hongrui Chen is partially supported by the elite undergraduate training program of the School of Mathematical Sciences at Peking University.
2306.03763
ChatGPT Informed Graph Neural Network for Stock Movement Prediction
ChatGPT has demonstrated remarkable capabilities across various natural language processing (NLP) tasks. However, its potential for inferring dynamic network structures from temporal textual data, specifically financial news, remains an unexplored frontier. In this research, we introduce a novel framework that leverages ChatGPT's graph inference capabilities to enhance Graph Neural Networks (GNN). Our framework adeptly extracts evolving network structures from textual data, and incorporates these networks into graph neural networks for subsequent predictive tasks. The experimental results from stock movement forecasting indicate our model has consistently outperformed the state-of-the-art Deep Learning-based benchmarks. Furthermore, the portfolios constructed based on our model's outputs demonstrate higher annualized cumulative returns, alongside reduced volatility and maximum drawdown. This superior performance highlights the potential of ChatGPT for text-based network inferences and underscores its promising implications for the financial sector.
Zihan Chen, Lei Nico Zheng, Cheng Lu, Jialu Yuan, Di Zhu
2023-05-28T21:11:59Z
http://arxiv.org/abs/2306.03763v4
# ChatGPT Informed Graph Neural Network for Stock Movement Prediction ###### Abstract ChatGPT has demonstrated remarkable capabilities across various natural language processing (NLP) tasks. However, its potential for inferring dynamic network structures from temporal textual data, specifically financial news, remains an unexplored frontier. In this research, we introduce a novel framework that leverages ChatGPT's graph inference capabilities to enhance Graph Neural Networks (GNN). Our framework adeptly extracts evolving network structures from textual data, and incorporates these networks into graph neural networks for subsequent predictive tasks. The experimental results from stock movement forecasting indicate our model has consistently outperformed the state-of-the-art Deep Learning-based benchmarks. Furthermore, the portfolios constructed based on our model's outputs demonstrate higher annualized cumulative returns, alongside reduced volatility and maximum drawdown. This superior performance highlights the potential of ChatGPT for text-based network inferences and underscores its promising implications for the financial sector. Large language models Graph neural networks Quantitative finance Stock market ## 1 Introduction The task of predicting stock price movements stands as one of the most intricate and elusive challenges in modern times. The potential for substantial investment gains underscores the urgent necessity of achieving accurate predictions [1]. Under the efficient market hypothesis, stock prices are assumed to encapsulate all relevant market information [2, 3]. This makes the process of distinguishing genuine signals from noise an intricate endeavor that can severely impact forecasting efficacy. The academic community has responded to this challenge by formulating a wide array of statistical and machine learning models that exploit diverse features such as historical prices, news items, and market events for forecasting purposes [4, 5, 6, 7]. However, these approaches often fail to fully recognize and incorporate the latent inter-dependencies among different equities, thus curtailing their potential for generating accurate predictions. The complexity of forecasting stock price movements is further compounded when considering these latent inter-dependencies among equities. Two primary challenges are: 1) identifying which companies are relevant, and 2) modeling how information propagates through them. The stock price of a company can be viewed as a composite of the stock prices of related companies that share certain relationships with the focal company (e.g., competitors, substitutes, suppliers, etc.) [8, 9]. Moreover, the propagation of external events can have varying impact speeds on different relevant companies, giving rise to a phenomenon called the "lead-lag effect" [10]. Despite efficient identification and modeling being critical, existing methods face many limitations in capturing dynamic relationships and modeling market evolution (details will be discussed in the Related Work section). Large Language Models (LLMs), such as ChatGPT, have garnered considerable scholarly attention since their introduction. While their applications in the expansive financial economics domain are still in a nascent stage, LLMs have demonstrated remarkable performance across a wide range of Natural Language Processing (NLP) tasks [11; 12].
One key factor contributing to ChatGPT's success is its extensive knowledge of entities (e.g., companies, people, events) and their relationships, which are acquired through training on massive datasets. Therefore, leveraging LLMs to automatically extract latent relationships between companies may be more efficient than manual extraction or extraction with handcrafted features [9]. In this study, we present a novel approach that exploits large language models, specifically ChatGPT, to predict stock price movement. Our approach begins with employing ChatGPT to identify and extract latent inter-dependencies among equities, the results of which yield a dynamic, evolving graph that undergoes daily updates. Following this, a Graph Neural Network (GNN) is employed to generate embeddings for the target companies. The resultant embeddings are then integrated with a Long Short-Term Memory (LSTM) model to forecast stock movements for the upcoming trading day. We evaluate the proposed model's performance using a real-world dataset, setting the DOW 30 companies as our targets. Given the last update to the DOW 30 composition in August 2020, we choose the period from September 1, 2020, to December 30, 2022, as our target period in order to capture contemporary market trends. To prevent potential data leakage issues, considering that the ChatGPT model was trained on data available only up to September 2021, we designate the test period to begin from October 1, 2021. In the task of stock movement forecasting, the experimental results demonstrate that our model consistently surpasses all baseline models in weighted F1, Micro F1, and Macro F1 metrics with a minimum improvement of 1.8%. Moreover, we leverage the output of our model to construct portfolios using both long-only and long-short strategies. The evaluation of portfolio performance indicates that our model consistently exceeds benchmarks in terms of cumulative returns during the out-of-sample period. Our model also manifests a lower annualized volatility and a reduced maximum drawdown. The results in both stock movement forecasting and portfolio performance evaluation underscore the effectiveness of our ChatGPT-informed GNN model, highlighting the promising implications of LLMs for financial data processing. This paper offers two salient contributions. First, to the best of our knowledge, this is the first study of ChatGPT's capacity to infer network structures from textual data in the financial economics area. While ChatGPT's robust proficiency across various NLP tasks has been well established in the existing literature [13; 11], our work distinguishes itself by pioneering the connection between time-series textual data and dynamic network structures. The subsequent integration of the ChatGPT-informed network structures with GNNs also harnesses the power of deep learning models when processing large-scale, streaming datasets. Second, our experimentation with a real-world dataset provides compelling evidence of our model's superior performance in stock movement forecasting. By constructing a portfolio based on our model's outputs, the back-testing results consistently exhibit a higher annualized return, coupled with lower volatility and drawdown. The complexity of the stock market arises from the intricate interplay of numerous interconnected factors, such as economic indicators, the financial standing of corporations, and investor sentiment. Such intertwined dynamics render stock movement prediction a formidable task.
Given that previous research has showcased how marginal advancements in predictive accuracy can translate into significant profit increments [1; 14], the heightened performance of our model underscores its substantial practical implications in the broader financial arena. The remainder of this paper is organized as follows. The next section provides an overview of related work on stock movement prediction, large language models, and graph neural networks. We then delve into the details of our proposed model, discussing the network structure inference using ChatGPT and the process of incorporating ChatGPT's network outputs with GNN. Subsequently, we present our experimental setup and results. We conclude the paper by highlighting potential limitations and suggesting directions for future research. ## 2 Related Work The forecasting of financial time series, especially in relation to stock movement prediction, remains a major challenge today. The ability to accurately predict stock movement is of paramount importance in shaping investment decisions, controlling financial risks, devising effective trading strategies, and comprehending the intricacies of the overall market. Despite its criticality, this forecasting task presents substantial difficulties for both researchers and practitioners. As per the Efficient Market Hypothesis [2], stock prices reflect all accessible information pertaining to the equity, encapsulating its historical prices, corporate events, and relevant news. Conversely, the theory of random walks postulates that future prices are as unpredictable as a series of accumulated random fluctuations [3]. Consequently, the abundance of intricate data coupled with the inherent unpredictability introduces a substantial difficulty in distinguishing meaningful signals from random noise for effective predictions. Over the years, scholars have utilized a wide variety of methods and data sources to model stock movement. Traditional statistical approaches, including linear regression, auto-regression (AR), moving average (MA), ARIMA, and GARCH, have been extensively employed for financial time series forecasting [7]. Beyond these conventional statistical methods, machine learning techniques such as k-nearest neighbors (KNN), support vector machine (SVM), random forest, and deep learning-based methods are gaining significant traction owing to their superior predictive capacities [4; 5; 6]. In addition to modeling the relationship between historical and future prices, researchers have integrated alternate data sources like news articles, social media data, and financial reports for enhanced prediction [15]. However, these techniques fall short of capturing the latent inter-dependencies of stocks, thereby limiting their predictive potential. Accurately predicting stock price movement becomes more intricate when considering the latent inter-dependencies of equities. The fluctuation of one stock can significantly impact the movement of other related stocks [8]. These relationships between stocks may manifest themselves in various ways. For instance, companies could be competitors or substitutes: the bankruptcy of Silicon Valley Bank instigated a downward spiral in many bank stocks due to investor apprehension about systemic risks in the financial sector [16]. Alternatively, these connections between equities could stem from companies sharing supply chains.
For example, the rise of ChatGPT and Microsoft's investment in OpenAI led to a surge not only in Microsoft's stock price but also in associated upstream and downstream companies like NVIDIA and Intel [17]. Furthermore, given the varying degrees of inter-dependency between companies, an event may influence a set of stocks at different speeds, a phenomenon known as the lead-lag effect [10]. For example, an event like "Developers file a lawsuit against Microsoft over intellectual property" would immediately impact Microsoft's stock price and gradually affect other IT companies utilizing user-generated data to train for-profit machine learning algorithms [18]. In an effort to capture the intricate interconnections among equities, researchers have proposed the use of Graph Neural Networks (GNN) to consolidate market information across stocks. GNN represents a novel branch of deep learning methods grounded in graph theory, wherein companies serve as nodes, and links are established between two companies sharing certain relationships [19]. By propagating information across the network, GNN enables each node in the graph to be aware of its context, encompassing neighboring nodes and their properties [20]. This leads to more effective learning and representation of the market data. For instance, Cheng et al. [14] developed a multi-modality GNN for predicting stock price movements, demonstrating superior performance compared to other non-graph-based deep learning methods. Given that GNN relies on well-defined graph structures for information propagation, accurately capturing the latent inter-dependencies among equities is crucial. Currently, two approaches are predominantly in use. The first approach involves extracting structural event tuples or leveraging text similarity from companies' business descriptions [9] to identify company resemblances. The rationale is that companies offering similar products or services or frequently mentioned together in social media news are likely to share related stock behaviors. However, this approach may fall short of encompassing relevant domain knowledge. For instance, while both events, "David Peter leaves Starbucks" and "Steve Jobs quits Apple," pertain to employee departures, the latter would have a more profound impact on the stock market, given Steve Jobs' pivotal role in Apple. To counter this limitation, recent studies propose the integration of GNN with Financial Knowledge Graphs (FinKG) [21; 22], in which financial domain knowledge is predefined [23; 24]. You et al. have established the efficacy and scalability of GNNs when grappling with dynamic graph structures in real-world scenarios [25]. Nevertheless, the use of predefined knowledge graphs introduces new challenges. Firstly, manually created knowledge graphs or knowledge graphs built with handcrafted features often fail to cover all relevant information. Secondly, as the knowledge graph is predefined, it struggles to update in a timely manner and capture emergent relationships as the market evolves. In our study, we propose to use Large Language Models (LLMs), such as ChatGPT, to address the previously noted limitations. Although LLMs are still nascent in their application to financial economics, they have already garnered considerable scholarly interest in other areas.
While not initially designed for financial data processing, LLMs have demonstrated their capability to excel in a broad spectrum of Natural Language Processing (NLP) tasks, ranging from language translation to text summarization, question answering, sentiment analysis, and text generation [12]. Recent research has illuminated the value of these models in the financial realm. For example, Yang and Menczer [26] reveal the utility of ChatGPT in distinguishing credible news sources. Similarly, Lopez-Lira and Tang [27] indicate a robust correlation between the sentiment ChatGPT generated for news headlines and the ensuing daily stock market returns. A key element in ChatGPT's success is that it has learned extensive knowledge concerning entities (such as companies, individuals, and events) and their relationships from massive training datasets. Additionally, by utilizing an attention mechanism and undergoing fine-tuning via Reinforcement Learning from Human Feedback (RLHF) [28], ChatGPT can better comprehend the context of textual input and identify relationships among targeted entities. These distinctive characteristics render ChatGPT an ideal tool for automatically identifying latent inter-dependencies among equities and constructing stock networks/graphs. The utilization of ChatGPT to construct these graphs offers several advantages over previous methods [9; 21; 22] for network construction: 1. ChatGPT can deduce relationships between target entities from any textual input, which facilitates the use of more comprehensive and up-to-date data sources such as financial news, social media data, and corporate reports. 2. As the relationships of interest are not confined to a predefined set of keywords, ChatGPT can recognize a broader range of relationships among companies, extending beyond shared business services and supply chains. 3. While fine-tuning is not available for ChatGPT or later versions, the method we propose can be generalized to other released LLMs and adaptation techniques such as InstructGPT, Large Language Model Meta AI (LLaMA), and Low-rank Adaptation (LoRA), among others. This adaptability enables more accurate applications and domain-specific customization. ## 3 Method Our objective is to predict the stock movement (up, down, or neutral) for a set of target companies on the next trading day. Suppose we have a total of \(N\) target companies, where \(i\) denotes a specific company, \(t\) represents a timestamp, and \(L\) corresponds to the lookback length. Accordingly, our predictive task uses features from time \(t\) to \(t+L\) to forecast stock movement at time \(t+L+1\). To achieve this, we propose a novel framework that integrates ChatGPT and Graph Neural Network (GNN) for stock movement prediction. This framework consists of three main components: network structure inference from financial news using ChatGPT, company embedding through GNN, and stock movement prediction using sequential models and fully-connected neural networks. A comprehensive overview of our proposed framework is presented in Figure 1 (Framework Overview: Combining Graph Neural Network and ChatGPT to predict stock movements). We further elaborate on each component in the subsequent sections. ### Network Structure Inference via ChatGPT Our framework necessitates two types of time-series input features: news headlines and stock market data. The stock market data encompasses daily market information for each company, including price details (e.g., open, close, high, low), daily ask and bid, volume, and ordinary dividend amount.
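To ground the input-feature description, here is a minimal Python sketch of assembling the per-company market vector \(\mathbf{s}_{i,t}\); the CSV layout and the column names below are illustrative assumptions, not the actual CRSP schema used in the paper.

```python
import pandas as pd

# Illustrative feature set; the real CRSP field names differ.
FEATURES = ["open", "close", "high", "low", "ask", "bid", "volume", "dividend"]

def load_market_features(csv_path: str) -> pd.DataFrame:
    """Return a (date, ticker)-indexed frame whose rows play the role of s_{i,t}."""
    df = pd.read_csv(csv_path, parse_dates=["date"])
    df = df.set_index(["date", "ticker"]).sort_index()
    # Standardize each feature per ticker so scales are comparable across firms.
    df[FEATURES] = df.groupby(level="ticker")[FEATURES].transform(
        lambda x: (x - x.mean()) / (x.std() + 1e-8)
    )
    return df[FEATURES]
```

Per-ticker standardization is one reasonable design choice here; any scaling that keeps features comparable across companies would serve the same purpose.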
We use \(\mathbf{S}_{t}\) to denote market data at time \(t\), where \(\mathbf{s}_{i,t}\) denotes the associated data of a specific company. On the other hand, news headlines, sourced daily from reputable media outlets, are not company-specific and could cover various public companies. We thus exploit the inferential capabilities of ChatGPT to discern: 1) Which target companies could be affected by the day's news, and 2) How will these companies be affected: positively, negatively, or neutrally? To operationalize this, we design the following prompt for daily news headline input to ChatGPT: Forget all your previous instructions. I want you to act as an experienced financial engineer. I will offer you financial news headlines in one day. Your task is to: 1. Identify which target companies will be impacted by these news headlines. Please list at least five of them. 2. Only consider companies from the target list. 3. Determine the sentiments of the affected companies: positive, negative, or neutral. 4. Only provide responses in JSON format, using the key "Affected Companies". 5. Example output: {"Affected Companies": {"Company 1": "positive", "Company 2": "negative"}} 6. News Headlines are separated by "\n" News Headlines:... The ChatGPT response provides two insightful elements: the companies being affected by the news and their corresponding sentiment. Because prior research has demonstrated a strong association between ChatGPT's sentiment and the next day's stock return [27], we primarily focus on the "Affected Companies" output to construct a ChatGPT-Informed graph structure to feed the GNN at the current stage. We build the graph \(G_{t}=(V,E_{t})\) at each timestamp by representing each target company as a node and building an edge between two companies if they were considered as "being affected together" by ChatGPT. For instance, if the "Affected Companies" output at \(t\) is ['BA', 'AMGN', 'MSFT'], we construct edges \(E_{t}\) among these ticker pairs: 'BA' - 'AMGN', 'BA' - 'MSFT', and 'AMGN' - 'MSFT'. After gleaning these inferred relationships from news using ChatGPT, we input these graphs sequentially into a Graph Neural Network (GNN) to generate company embeddings. The GNN operation method is discussed in the next section. ### Company Embedding through GNN At this stage, we leverage the Graph Neural Network (GNN) to transform the nodes (companies) into vector representations. As a cutting-edge model for deep learning, GNN is adept at handling complex graph structures and embedding nodes into lower-dimensional vectors that encapsulate both nodes' attributes and network topology [19]. In our context of predicting stock movement, the GNN integrates the features of a company and its closely interconnected companies at a given timestamp to generate embeddings. Consequently, each company's embedding through the GNN incorporates its unique features as well as the features of relevant companies which ChatGPT considers to be affected together by the news headlines. Taking company \(i\) and associated features at time \(t\) as an example, we formally describe the GNN embedding process as follows: \[\mathbf{h}_{i,t}^{\text{GNN}}=\mathrm{GNN}\left(\mathbf{h}_{i,t};\mathbf{m}_{i,t};\Theta_{GNN}\right) \tag{1}\] where \(\Theta_{GNN}\) symbolizes the trainable parameters in each layer of GNN, \(\mathbf{h}_{i,t}\) denotes the original feature of company \(i\), and \(\mathbf{m}_{i,t}\) represents the aggregated information from its neighbors at time \(t\).
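Returning briefly to the graph-construction step of Section 3.1, the following is a minimal sketch that parses a (hypothetical) ChatGPT reply and builds the day's graph \(G_{t}\) as a clique over the co-affected tickers; the use of `networkx` and the helper names are our assumptions, not part of the paper.

```python
import json
from itertools import combinations

import networkx as nx

def build_daily_graph(response_text: str, target_tickers: set[str]) -> nx.Graph:
    """Turn ChatGPT's "Affected Companies" JSON into the day's graph G_t.

    Assumes `response_text` is a raw reply such as
    '{"Affected Companies": {"BA": "negative", "AMGN": "neutral", "MSFT": "positive"}}'.
    """
    affected = json.loads(response_text)["Affected Companies"]
    tickers = [t for t in affected if t in target_tickers]  # keep target names only
    g = nx.Graph()
    g.add_nodes_from(target_tickers)            # every target company is a node
    g.add_edges_from(combinations(tickers, 2))  # clique over co-affected firms
    return g
```

For the ['BA', 'AMGN', 'MSFT'] example above, this yields exactly the edges 'BA'-'AMGN', 'BA'-'MSFT', and 'AMGN'-'MSFT'.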
In equation (1), the final GNN embedding of company \(i\) is denoted as \(\mathbf{h}_{i,t}^{\text{GNN}}\). ### Sequential Models and Output Layers Retaining the information of the company and its neighbors, the output of the GNN is subsequently concatenated with the corresponding company's stock market data. We utilize a Long Short-Term Memory (LSTM) model as the sequential model in our framework. These combined data vectors are sequentially input into the LSTM, generating aggregated embeddings specific to each company over the lookback period. Concurrently, the stock market data undergoes a separate LSTM model to generate another set of embeddings. These two sets of embeddings are concatenated again and fed through a fully connected neural network layer to generate the final prediction for the stock movement. The process can be formalized as follows: \[\mathbf{h}_{i,t}^{\text{COMB}}=\mathrm{CONCAT}\left(\mathbf{h}_{i,t}^{\text{GNN}},\mathbf{s}_{i,t}\right) \tag{2}\] \[\mathbf{h}_{i}^{\text{COMB}}=\mathrm{LSTM}\left(\left[\mathbf{h}_{i,t}^{\text{COMB}},\cdots,\mathbf{h}_{i,t+L}^{\text{COMB}}\right];\Theta_{LSTM_{1}}\right) \tag{3}\] \[\mathbf{h}_{i}^{\text{STOCK}}=\mathrm{LSTM}\left(\left[\mathbf{s}_{i,t},\cdots,\mathbf{s}_{i,t+L}\right];\Theta_{LSTM_{2}}\right) \tag{4}\] \[\hat{y}_{i}=MLP\left(\mathrm{CONCAT}\left(\mathbf{h}_{i}^{\text{COMB}},\mathbf{h}_{i}^{\text{STOCK}}\right);\Theta_{MLP}\right) \tag{5}\] where \(\Theta_{MLP}\), \(\Theta_{LSTM_{1}}\), and \(\Theta_{LSTM_{2}}\) are the trainable parameters. Furthermore, given that we predict stock movement at \(t+L+1\), this is a classification task with three categories: up, down, and neutral. Following previous literature [14], we generate the category for the ground truth based on the return (\(R_{i}=p_{i,t}/p_{i,t-1}-1\), where \(p_{i}\) is the stock price) and defined thresholds (\(r_{\text{up}}=0.01\), \(r_{\text{down}}=-0.01\)) as follows: \[y_{i}=\left\{\begin{array}{ll}\text{up}&R_{i}\geq r_{\text{up}}\;,\\ \text{neutral}&r_{\text{down}}<R_{i}<r_{\text{up}}\;,\\ \text{down}&R_{i}\leq r_{\text{down}}\end{array}\right. \tag{6}\] Finally, we employ cross entropy to generate the loss by comparing the predicted value with the ground truth. This loss value is then backpropagated through the model, allowing for the adjustment of trainable parameters during the iterative learning process. In the following section, we apply our proposed model to a real-world dataset to assess its performance. ## 4 Experiment We evaluate the effectiveness of our proposed framework using a real-world dataset comprising the Dow Jones Industrial Average 30 companies (DOW 30) as the main subjects. Since the DOW 30 composition was last updated on August 31, 2020, we opt for the period from September 1, 2020, to December 30, 2022, as our target interval to capture contemporary market trends. The training period extends from September 1, 2020, to September 30, 2021, consistent with the final data point integrated into ChatGPT's model training. Accordingly, the test period spans from October 1, 2021, to December 30, 2022. To gather input features for both periods, we acquire daily numerical variables of each DOW 30 company from the CRSP Databases as stock market data. For the financial news headlines, we collect 2,713,233 and 3,717,666 unique headlines for the training and test periods respectively, gleaned from 5,489 unique providers. We then extract news that not only originates from reputable media outlets but also explicitly mentions at least one DOW 30 company.
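To tie the method together before reporting results, here is a compact PyTorch sketch of equations (1)-(6). The paper does not specify the GNN variant, layer sizes, or class encoding, so the mean-aggregation layer, the hidden width, and the integer labels below are all assumptions.

```python
import torch
import torch.nn as nn

class ChatGPTInformedGNN(nn.Module):
    """Sketch of equations (1)-(5); sizes and the GNN layer are assumptions."""

    def __init__(self, feat_dim: int, hid: int = 64, n_classes: int = 3):
        super().__init__()
        self.gnn_lin = nn.Linear(2 * feat_dim, hid)                      # eq. (1)
        self.lstm_comb = nn.LSTM(hid + feat_dim, hid, batch_first=True)  # eq. (3)
        self.lstm_stock = nn.LSTM(feat_dim, hid, batch_first=True)       # eq. (4)
        self.mlp = nn.Linear(2 * hid, n_classes)                         # eq. (5)

    def forward(self, s: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # s:   (N, L, F) stock features per company over the lookback window
        # adj: (L, N, N) row-normalized ChatGPT-informed adjacency matrices
        msgs = torch.einsum("lnm,lmf->nlf", adj, s.transpose(0, 1))  # neighbor mean m_{i,t}
        h_gnn = torch.relu(self.gnn_lin(torch.cat([s, msgs], dim=-1)))
        h_comb, _ = self.lstm_comb(torch.cat([h_gnn, s], dim=-1))    # eq. (2)-(3)
        h_stock, _ = self.lstm_stock(s)                              # eq. (4)
        z = torch.cat([h_comb[:, -1], h_stock[:, -1]], dim=-1)
        return self.mlp(z)  # logits for down / neutral / up

def movement_label(ret: float, r_up: float = 0.01, r_down: float = -0.01) -> int:
    """Ground-truth rule of eq. (6): 0 = down, 1 = neutral, 2 = up."""
    return 2 if ret >= r_up else (0 if ret <= r_down else 1)
```

In practice the mean-aggregation layer would be swapped for whichever GNN architecture is chosen, and the logits trained with cross entropy against `movement_label` targets.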
The filtration process described above yields a refined total of 115,549 news headlines, partitioned into 50,941 for training and 64,608 for testing. In recognition of the temporal sensitivity of news and its lag effect on the stock market, we meticulously align the news timestamp with the subsequent market period. For instance, a news headline recorded before 16:00 on Day \(t\) is linked with the same day's market data, and employed to predict stock movements on Day \(t+1\). Conversely, if a headline is logged after 16:00, it is assigned to the succeeding day (Day \(t+1\)) and used to forecast stock movement on the following day (Day \(t+2\)). This stratagem ensures the purity of out-of-sample test results, further precluding potential data leakage. Our benchmark selection is rooted in the two types of input features we utilize. For stock market data, we deploy the Long Short-Term Memory (LSTM) method [29], renowned for its effectiveness in large-scale time-series analysis, and the ARIMA model, lauded for its skill in managing univariate time-series forecasting. To leverage the financial news headlines, we employ state-of-the-art sentence transformers to embed headlines into vectors, which are subsequently used as input to an MLP model for classification. Furthermore, corroborating previous research that affirms the predictive power of ChatGPT's sentiment outputs for stock movements [27], we incorporate the sentiment judgment from ChatGPT on stocks as a benchmark. We assess our proposed model on two tasks. First, we scrutinize its performance in financial forecasting, specifically targeting stock movement classification. The evaluation metrics include weighted F1, Macro F1, and Micro F1 scores. Second, we construct a portfolio based on the model outputs, and evaluate its performance in terms of accumulated return, volatility, Sharpe ratio, and maximum drawdown. Detailed results from these experiments will be elucidated in the subsequent section. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Weighted F1 & Micro F1 & Macro F1 \\ \hline ChatGPT & 0.3970 & 0.4607 & 0.3085 \\ News-Embed & 0.4059 & 0.4318 & 0.3425 \\ Stock-LSTM & 0.4036 & 0.4132 & 0.3455 \\ **Our Model** & 0.4133 & 0.4423 & 0.3529 \\ \hline \hline \end{tabular} \end{table} Table 1: Model Performance of Stock Movement Prediction ### Financial Forecasting of Stock Movement The experimental results of stock movement forecasting are presented in Table 1, with two primary observations being made clear. First, our proposed model persistently outshines both the Stock-LSTM and News-Embed models in all three metrics, recording a minimum enhancement of 1.8%. Notably, our model distinguishes itself from Stock-LSTM by employing dynamic graph structures that ChatGPT generates from daily financial news. This suggests the potency of ChatGPT's zero-shot learning capability in inferring networks from text, thus advancing the predictive performance. Also, it is important to emphasize the inherent difficulty of accurately predicting stock movements, where marginal improvements can bring about significant additional profits. Earlier research has demonstrated that a 0.005 increase in the Micro F1 score can result in a profit increase of 12%, and that a 1% enhancement can lead to a 30% profit surge [1, 14]. Consequently, our model offers considerable practical implications within the financial field.
Second, though past studies have emphasized the strong correlation between the sentiment outputs from ChatGPT and stock movements [27], our findings indicate that amalgamating these outputs with graph neural networks amplifies performance. Despite ChatGPT delivering commendable Micro F1 scores, this is largely due to an inherent data imbalance during the testing phase, as 58.5% of stock movements were neutral. ChatGPT's predictive prowess falters when forecasting stock downtrends, with a score of 10.88%, compared to our model's 19.46% in this category. This pattern echoes in time-series models like ARIMA, which predominantly predict all movements as neutral. The enhanced ability of our model to forecast both upward and downward movements is instrumental in aiding investors to limit losses and maintain portfolio stability. In the following section, we will construct portfolios based on the outputs of the models and evaluate their economic performance. ### Evaluation of Portfolio Performance We also evaluate the economic implications of our model by constructing a portfolio grounded in the model's outputs. Given that each stock's predicted outcome by our model is either upward, neutral, or downward for the next trading day, we first construct the portfolio with a long-only strategy, which lessens the exposure to risks such as short squeezes and funding liquidity risk. Specifically, we distribute equal investments across the stocks forecasted to rise by our model, while the remaining stocks in the portfolio are not invested in. We apply the same strategy to our proposed model and benchmarks, and conduct a backtest on these long portfolios during the out-of-sample period (October 1, 2021, to December 30, 2022). The cumulative returns for the Long Portfolio are depicted in Figure 2 (Comparison of Portfolio Performance During the Test Period). As seen from the figure, our proposed model consistently outperforms both the LSTM and ChatGPT model in terms of cumulative returns. This persistent superiority signifies the effectiveness of our model in predicting positive stock returns. Moreover, we implement a long-short strategy that acts on both positive and negative forecasts to construct a self-financing portfolio. The outcomes reveal that our proposed model persistently surpasses the baselines. Notably, the portfolio derived from ChatGPT outputs exhibits significantly higher annualized volatility (23.61%) compared to our model (14.06%). The maximum drawdown of the ChatGPT model (0.2112) also substantially exceeds that of our model (0.1242). As previously noted, this discrepancy is primarily due to ChatGPT's limitations in predicting negative returns, thereby rendering it prone to higher volatility. In summary, our proposed model surpasses the baseline models in the task of predicting stock movements. The results provide compelling evidence that coupling GNN with ChatGPT's capabilities of inferring network structures from financial news can notably augment the predictive capacity of a model. Additionally, portfolio construction guided by our model's output consistently delivers superior performance compared to the benchmarks. This outperformance is exhibited through increased cumulative returns, along with lower annualized volatility and maximum drawdown. The robust performance across these two areas underscores the potential real-world applicability of our model in the finance industry.
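As a rough illustration of the long-only construction just described, the following sketch computes the equal-weight cumulative return path and the maximum drawdown; transaction costs are ignored and the class encoding follows the earlier label sketch, both simplifying assumptions.

```python
import numpy as np

def long_only_backtest(preds: np.ndarray, rets: np.ndarray) -> np.ndarray:
    """Equal-weight long-only portfolio built from model outputs.

    preds: (T, N) predicted classes per day (2 = up in the earlier sketch);
    rets:  (T, N) realized next-day simple returns.
    """
    daily = np.zeros(len(preds))
    for t, (p, r) in enumerate(zip(preds, rets)):
        longs = p == 2                  # invest only in names predicted to rise
        if longs.any():
            daily[t] = r[longs].mean()  # equal weights across the long positions
    return np.cumprod(1.0 + daily) - 1.0  # cumulative return path

def max_drawdown(cum_returns: np.ndarray) -> float:
    """Largest peak-to-trough decline of the cumulative return curve."""
    wealth = 1.0 + cum_returns
    peak = np.maximum.accumulate(wealth)
    return float(np.max(1.0 - wealth / peak))
```

A long-short variant would add an equally weighted short leg over the names predicted to fall (class 0), making the portfolio self-financing.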
## 5 Discussion This study introduces a novel framework that capitalizes on the graph inference capabilities of ChatGPT to augment GNN forecasting performance. In our approach, ChatGPT initially distills evolving network structures from daily financial news. These inferred networks are subsequently incorporated into the GNN to produce vector embeddings, which are then used in downstream prediction tasks. We assess the efficacy of our model using real-world data from the DOW 30 companies spanning from October 2021 to December 2022. The empirical findings demonstrate that our model surpasses all benchmarks in forecasting stock movements. Moreover, when portfolios are constructed based on our model's outputs, they showcase superior cumulative returns while simultaneously exhibiting reduced volatility and drawdowns. Our research contributes to the literature by assessing the capacity of modern Large Language Models (LLMs) to infer network structures from text. Further, it pioneers the implementation of networks inferred by ChatGPT to enhance the capabilities of GNNs. The outperformance of our model in practical scenarios emphasizes its potential implications for the financial sector, offering new perspectives and strategies in the realm of financial engineering. Although, to the best of our knowledge, this is the first study that integrates ChatGPT-inferred networks with GNNs, the paper is not without its limitations. First, our model leverages stock market data and time-stamped news headlines as input features. Given that stock market dynamics are influenced by a complex web of interconnected factors (including economic indicators, corporate financial health, and investor sentiment), enhancing our model with additional input features could further boost its predictive accuracy. Similarly, our study solely utilizes the network structure inferred by ChatGPT as input for the GNN. Future research could consider incorporating sentiment scores as edge attributes to further improve the model's performance. Second, our study only utilizes basic network structures in the model. However, these structures could be upgraded to more sophisticated architectures, such as replacing LSTM with transformer-based models, or employing more advanced GNN models. It is worth noting that, due to the limited scope of our sample - the DOW 30 companies - more complex GNN structures could potentially lead to oversmoothing issues [30]. To avoid this, future research should consider expanding the dataset to include more companies, which would synergize well with deep learning's strength in handling large datasets. Lastly, the training data utilized in our experiment ends in September 2021, the final data point input into ChatGPT. Recent advancements in ChatGPT include browsing ability and Plugins, allowing it to interact with the most recent news and information. We posit that enriching our model with the latest financial news and market information will enhance its performance, leading to more accurate forecasts and facilitating better-informed decision-making for both researchers and practitioners.
2309.10824
Generalized non-autonomous Cohen-Grossberg neural network model
In the present paper, we investigate both the global exponential stability and the existence of a periodic solution of a general differential equation with unbounded distributed delays. The main stability criterion depends on the dominance of the non-delay terms over the delay terms. The criterion for the existence of a periodic solution is obtained with the application of the coincidence degree theorem. We use the main results to get criteria for the existence and global exponential stability of periodic solutions of a generalized higher-order periodic Cohen-Grossberg neural network model with discrete-time varying delays and infinite distributed delays. Additionally, we provide a comparison with the results in the literature and a numerical simulation to illustrate the effectiveness of some of our results.
Ahmed Elmwafy, José J. Oliveira, César M. Silva
2023-09-04T13:11:21Z
http://arxiv.org/abs/2309.10824v1
###### Abstract In the present paper, we investigate both the global exponential stability and the existence of a periodic solution of a general differential equation with unbounded distributed delays. The main stability criterion depends on the dominance of the non-delay terms over the delay terms. The criterion for the existence of a periodic solution is obtained with the application of the coincidence degree theorem. We use the main results to get criteria for the existence and global exponential stability of periodic solutions of a generalized higher-order periodic Cohen-Grossberg neural network model with discrete-time varying delays and infinite distributed delays. Additionally, we provide a comparison with the results in the literature and a numerical simulation to illustrate the effectiveness of some of our results. **Generalized non-autonomous Cohen-Grossberg neural network model** Ahmed Elmwafy, Jose J. Oliveira, Cesar M. Silva _Keywords_: Cohen-Grossberg neural network, Periodic solutions, Global exponential stability, Coincidence degree theorem, Discrete and distributed delays. _Mathematics Subject Classification 2020_: 34K20, 34K25, 34K60, 92B20. ## 1 Introduction In the past decades, due to applications in various sciences, delayed functional differential equations have attracted the attention of an increasing number of researchers. In many fields, such as population dynamics, ecology, epidemiology, disease evolution, and neural networks, differential equations with delay have served as models. As a result of their widespread use in several fields including image and signal processing [20], pattern recognition [35], optimization [30], and content-addressable memory [34], delayed neural networks have had their dynamical behaviours extensively studied [39], [33], [14]. Obtaining results about the convergence characteristics of neural networks is crucial in these applications. To keep the entire network from acting chaotically, convergent dynamics are required. Significantly, globally convergent dynamics imply that every trajectory of the network can converge to some equilibrium state or invariant set so that, when used as an associative memory, every state in the underlying space can serve as a key to recover certain stored memory. As a result, the state space is entirely covered by different basins of the stored memories. Furthermore, globally convergent dynamics indicate that the neural network algorithm will ensure convergence to an optimal solution from each initial guess when used as an optimization solver [7]. The fact that the connectivity weights, the neuron charging time, and the external inputs change throughout time is another important consideration. Thus, it is relevant to introduce and investigate neural network models that incorporate the temporal structure of neural activities.
Among the various neural network models that have been extensively investigated and applied is the Cohen-Grossberg neural network (CGNN), first introduced and investigated by Cohen and Grossberg [5] through the following system of ordinary differential equations, \[\frac{dx_{i}(t)}{dt}=-a_{i}(x_{i}(t))\Big{[}b_{i}(x_{i}(t))-\sum_{j=1}^{n}c_{ij}f_{j}(x_{j}(t))+I_{i}(t)\Big{]},\,\,\,t\geq 0,\,\,i=1,\ldots,n \tag{1.1}\] where the natural number \(n\) indicates the number of neurons, \(x_{i}(t)\) is the \(i\)th neuron state at time \(t\), \(a_{i}(u)\) denote the amplification functions, \(b_{i}(u)\) are the self-signal functions, \(f_{j}(u)\) are the activation functions, \(c_{ij}\) represent the strengths of connectivity between neurons \(i\) and \(j\), and \(I_{i}\) denote the inputs from outside of the system. To be more realistic, differential equations modelling neural networks should include time delays due to synaptic transmission time across neurons or, in artificial neural networks, communication time among amplifiers. Since Cohen and Grossberg first proposed the CGNN model [5], the dynamical properties of CGNNs such as stability, instability, and periodic oscillation have been extensively studied for theoretical and application considerations. Several studies have already obtained positive results, e.g., [1], [4], [19], [17], [6]; however, most of the results in the literature require either the boundedness of the activation functions or the boundedness of the delays. For example, [3] investigated the global exponential stability of the periodic solutions of delayed CGNNs, but in the case of discontinuous activation functions. Besides that, the existence, uniqueness, and stability of almost periodic solutions for a class of NNs have been studied in [31]. Meanwhile, [24], [38], and [23] studied the existence and exponential stability of high-order CGNNs relying on various techniques. For example, [24] and [38] used differential inequality techniques, and [23] relied on a proper Lyapunov function and the properties of M-matrices. The present work is therefore meaningful and its conclusions are novel since, as far as we know, there are few results on high-order CGNNs that avoid the Lyapunov technique and assume neither the boundedness nor the discontinuity of the activation functions. Motivated by the preceding studies, we consider a generalized high-order CGNN model with discrete time-varying and distributed delays to study the existence of periodic solutions and global exponential stability without using the Lyapunov technique or the boundedness of the activation functions. In this paper, we use the continuation theorem of coincidence degree theory to show the existence of a periodic solution of a generalized system of high-order CGNNs, and then we present sufficient conditions to guarantee the global exponential stability of that system. The remainder of this work is organized as follows. Section 2 is a preliminary section where we introduce our notation and our hypotheses. Section 3 introduces the global exponential stability of general neural network models. In Section 4, we investigate the existence of a periodic solution of the general system, with application to the generalized high-order CGNN system under certain assumptions. In Section 5, we show numerical simulations to demonstrate the efficacy of the results we have obtained.
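Since Section 5 relies on numerical simulation, it may help to fix ideas with a minimal integration of the classical CGNN (1.1) for \(n=2\); the coefficient choices below are illustrative assumptions (selected so that the non-delay term dominates), not the examples of Section 5.

```python
import numpy as np
from scipy.integrate import solve_ivp

c = np.array([[0.5, -0.3],
              [0.2,  0.4]])          # connection strengths c_ij
I = np.array([0.1, -0.2])            # external inputs I_i

def rhs(t, x):
    a = 1.0 + 0.5 / (1.0 + x**2)     # amplification a_i(x), bounded in (1, 1.5]
    b = 2.0 * x                      # self-signal b_i(x) with slope 2
    f = np.tanh(x)                   # activation functions f_j(x)
    return -a * (b - c @ f + I)      # right-hand side of (1.1)

sol = solve_ivp(rhs, (0.0, 20.0), y0=[1.0, -0.5], dense_output=True)
print(sol.y[:, -1])  # both trajectories settle toward an equilibrium state
```

Here a dominance condition of the type established later holds (e.g., \(1\cdot 2-1.5\cdot 0.8>0\), since \(\tanh\) has Lipschitz constant 1 and the row sums of \(|c_{ij}|\) are at most \(0.8\)), which is consistent with the convergence observed in the simulation.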
## 2 Preliminaries and model description In the present paper, for \(n\in\mathbb{N}\), we consider the \(n\)-dimensional vector space \(\mathbb{R}^{n}\) equipped with the norm \(|x|=\max\{|x_{i}|,\,i=1,\ldots,n\}\). For a positive real number \(\epsilon\), we consider the Banach space \[UC^{n}_{\epsilon}=\left\{\phi\in C((-\infty,0];\mathbb{R}^{n}):\sup_{s\leq 0}\frac{|\phi(s)|}{\mathrm{e}^{-\epsilon s}}<+\infty,\,\frac{\phi(s)}{\mathrm{e}^{-\epsilon s}}\mbox{ is uniformly continuous on }(-\infty,0]\right\},\] equipped with the norm \(\|\phi\|_{\epsilon}=\sup_{s\leq 0}\frac{|\phi(s)|}{\mathrm{e}^{-\epsilon s}}\). In [16], a basic theory about the existence, uniqueness, and continuation of solutions is established for the general functional differential equation in the phase space \(UC^{n}_{\epsilon}\) \[x^{\prime}(t)=f(t,x_{t}),\quad t\geq 0, \tag{2.1}\] where, for an open set \(D\subseteq UC^{n}_{\epsilon}\), the function \(f:[0,+\infty)\times D\to\mathbb{R}^{n}\) is continuous and \(x_{t}\) denotes the function \(x_{t}:(-\infty,0]\to\mathbb{R}^{n}\) defined by \(x_{t}(s)=x(t+s)\) for \(s\leq 0\). We denote by \(x(t,t_{0},\phi)\) a solution of (2.1) with initial condition \(x_{t_{0}}=\phi\) for \(t_{0}\geq 0\) and \(\phi\in D\). For \(x\in\mathbb{R}^{n}\), we also use \(x\) to denote the constant function \(\phi(s)=x\) in \(UC^{n}_{\epsilon}\). A vector \(x\in\mathbb{R}^{n}\) is said to be positive if \(x_{i}>0\) for all \(i=1,\ldots,n\), and we denote it by \(x>0\). Now, we introduce the Banach space \(BC\) of all continuous bounded functions \(\phi:(-\infty,0]\to\mathbb{R}^{n}\) equipped with the norm \(\|\phi\|=\sup_{s\leq 0}|\phi(s)|\). It is clear that \(BC\subseteq UC^{n}_{\epsilon}\) and we have \(\|\phi\|_{\epsilon}\leq\|\phi\|\) for all \(\phi\in BC\). In the phase space \(UC^{n}_{\epsilon}\), for \(n\in\mathbb{N}\) and \(\epsilon>0\), we consider the following general nonautonomous differential system with infinite delays, \[x^{\prime}_{i}(t)=a_{i}(t,x_{i}(t))\big{[}-b_{i}(t,x_{i}(t))+f_{i}(t,x_{t})\big{]},\quad t\geq 0,\,i=1,\ldots,n, \tag{2.2}\] where \(a_{i}:[0,+\infty)\times\mathbb{R}\to(0,\infty)\), \(b_{i}:[0,+\infty)\times\mathbb{R}\to\mathbb{R}\), and \(f_{i}:[0,+\infty)\times UC^{n}_{\epsilon}\to\mathbb{R}\) are continuous functions. The goal is to apply the results to Cohen-Grossberg neural network-type models, thus we only consider bounded initial conditions, i.e., \[x_{t_{0}}=\phi,\quad\mbox{ for }\phi\in BC\mbox{ and }t_{0}\geq 0. \tag{2.3}\] The continuity of the functions \(a_{i}\), \(b_{i}\), and \(f_{i}\) assures that the initial value problem (2.2)-(2.3) has a solution (see [12, Theorem 2.1]). As we always consider bounded initial conditions, in this paper we consider the following definition of global exponential stability. **Definition 2.1**.: _The system (2.2) is said to be globally exponentially stable if there are \(\delta>0\) and \(C\geq 1\) such that_ \[|x(t,t_{0},\phi)-x(t,t_{0},\psi)|\leq C\mathrm{e}^{-\delta(t-t_{0})}\|\phi-\psi\|,\quad\forall t_{0}\geq 0,\,\forall t\geq t_{0},\,\forall\phi,\psi\in BC.\] It should be emphasized that the preceding definition of global exponential stability is the one usually used in the literature on neural networks ([40], [37]). ## 3 Global exponential stability In this section, we obtain sufficient conditions for the global exponential stability of (2.2). To do so, we assume the following hypotheses for each \(i=1,\ldots,n\):
**(H1)**: there are \(\underline{a}_{i},\overline{a}_{i}>0\) such that \[\underline{a}_{i}<a_{i}(t,u)<\overline{a}_{i},\ \ \ \forall t\geq 0,\,\forall u\in\mathbb{R};\] **(H2)**: there exists a continuous function \(D_{i}:[0,+\infty)\rightarrow\mathbb{R}\) such that \[D_{i}(t)a_{i}^{2}(t,u)\leq\frac{\partial a_{i}}{\partial t}(t,u),\ \ \ \forall t>0,\,\forall u\in\mathbb{R};\] **(H3)**: there exists a function \(\beta_{i}:[0,+\infty)\rightarrow(0,+\infty)\) such that \[\frac{b_{i}(t,u)-b_{i}(t,v)}{u-v}\geq\beta_{i}(t),\ \ \ \ \forall t\geq 0,\,\forall u,v\in\mathbb{R},\,u\neq v;\] **(H4)**: the function \(f_{i}:[0,+\infty)\times UC_{\epsilon}^{n}\rightarrow\mathbb{R}\) is Lipschitz in its second variable, i.e., there is a continuous function \(\mathcal{L}_{i}:[0,+\infty)\rightarrow[0,+\infty)\) such that \[|f_{i}(t,\phi)-f_{i}(t,\psi)|\leq\mathcal{L}_{i}(t)\|\phi-\psi\|_{\epsilon},\ \ \ \forall t\geq 0,\,\forall\phi,\psi\in UC_{\epsilon}^{n};\] **(H5)**: for all \(t\geq 0\), \[\underline{a}_{i}\Big{(}\beta_{i}(t)+D_{i}(t)\Big{)}-\overline{a}_{i}\mathcal{L}_{i}(t)>\epsilon. \tag{3.1}\] By the generalized Gronwall's inequality [13, Lemma 6.2] and the Continuation Theorem [12, Theorem 2.4], we can assure that the solutions of the initial value problem (2.2)-(2.3) are defined on \(\mathbb{R}\). Now, we are in a position to obtain the main stability criterion for system (2.2). **Theorem 3.1**.: _If (H1)-(H5) hold, then the system (2.2) is globally exponentially stable._ Proof.: Let \(t_{0}>0\), \(\phi=(\phi_{1},\ldots,\phi_{n})\in BC\), \(\psi=(\psi_{1},\ldots,\psi_{n})\in BC\), and consider two solutions, \(x(t)=x(t,t_{0},\phi)\) and \(y(t)=x(t,t_{0},\psi)\), of (2.2). For each \(t\geq t_{0}\), define \(V(t)=V(t,t_{0},x(\cdot),y(\cdot))=(V_{1}(t),\ldots,V_{n}(t))\in\mathbb{R}^{n}\) by \[V_{i}(t):=\mathrm{e}^{\epsilon(t-t_{0})}sign\Big{(}x_{i}(t)-y_{i}(t)\Big{)}\int_{y_{i}(t)}^{x_{i}(t)}\frac{1}{a_{i}(t,u)}du,\ \ \ i=1,\ldots,n. \tag{3.2}\] From (H1), we conclude that \[{\rm e}^{-\epsilon(t-t_{0})}\underline{a}_{i}V_{i}(t)\leq|x_{i}(t)-y_{i}(t)|\leq{\rm e}^{-\epsilon(t-t_{0})}\overline{a}_{i}V_{i}(t),\ \ \ \ \forall t\geq t_{0},\,i=1,\ldots,n. \tag{3.3}\] Firstly, we show that \[|V(t)|\leq\max_{i}\{\underline{a}_{i}^{-1}\}\|\phi-\psi\|,\ \ \ \ \forall t\geq t_{0}. \tag{3.4}\] Obviously, from (3.3), we have \[|V(t_{0})|\leq\max_{i}\left\{\underline{a}_{i}^{-1}|x_{i}(t_{0})-y_{i}(t_{0})|\right\}\leq\max_{i}\{\underline{a}_{i}^{-1}\}\|\phi-\psi\|.\] Now, to obtain a contradiction, we assume that inequality (3.4) is false. Consequently, there exists \(t_{1}>t_{0}\) such that \[|V(t_{1})|>\max_{i}\{\underline{a}_{i}^{-1}\}\|\phi-\psi\|.\] Define \[T:=\min\left\{t\in[t_{0},t_{1}]:|V(t)|=\max_{s\in[t_{0},t_{1}]}|V(s)|\right\}.\] Choosing \(i\in\{1,\ldots,n\}\) such that \(V_{i}(T)=|V(T)|\), we have \[V_{i}(T)>0,\ \ \ \ V_{i}^{\prime}(T)\geq 0,\ \ \ \ {\rm and}\ \ \ \ V_{i}(T)>|V(t)|,\ \forall t<T.
\tag{3.5}\] From (2.2), (H2), (H3), and (H4), we obtain \[V_{i}^{\prime}(T) = \epsilon V_{i}(T)+{\rm e}^{\epsilon(T-t_{0})}sign\Big{(}x_{i}(T)-y_{i}(T)\Big{)}\left[\frac{1}{a_{i}(T,x_{i}(T))}x_{i}^{\prime}(T)\right.\] \[\left.-\frac{1}{a_{i}(T,y_{i}(T))}y_{i}^{\prime}(T)+\int_{y_{i}(T)}^{x_{i}(T)}-\frac{\partial_{t}a_{i}(T,u)}{a_{i}^{2}(T,u)}du\right]\] \[= \epsilon V_{i}(T)+{\rm e}^{\epsilon(T-t_{0})}sign\Big{(}x_{i}(T)-y_{i}(T)\Big{)}\bigg{[}b_{i}(T,y_{i}(T))-b_{i}(T,x_{i}(T))\] \[+f_{i}(T,x_{T})-f_{i}(T,y_{T})+\int_{y_{i}(T)}^{x_{i}(T)}-\frac{\partial_{t}a_{i}(T,u)}{a_{i}^{2}(T,u)}du\bigg{]}\] \[\leq \epsilon V_{i}(T)+{\rm e}^{\epsilon(T-t_{0})}\Big{[}-\beta_{i}(T)|x_{i}(T)-y_{i}(T)|+{\cal L}_{i}(T)||x_{T}-y_{T}||_{\epsilon}\] \[-D_{i}(T)|x_{i}(T)-y_{i}(T)|\Big{]}.\] Hypothesis (H5) implies \(\beta_{i}(T)+D_{i}(T)>0\), and from (3.3), we obtain \[V_{i}^{\prime}(T) \leq \epsilon V_{i}(T)-\underline{a}_{i}\Big{[}\beta_{i}(T)+D_{i}(T)\Big{]}V_{i}(T)\] \[+{\rm e}^{\epsilon(T-t_{0})}{\cal L}_{i}(T)\max\left\{\sup_{s\leq t_{0}-T}|x(T+s)-y(T+s)|{\rm e}^{\epsilon s},\sup_{t_{0}-T<s\leq 0}|x(T+s)-y(T+s)|{\rm e}^{\epsilon s}\right\}\] \[\leq \epsilon V_{i}(T)-\underline{a}_{i}\Big{[}\beta_{i}(T)+D_{i}(T)\Big{]}V_{i}(T)\] \[+{\rm e}^{\epsilon(T-t_{0})}{\cal L}_{i}(T)\max\Big{\{}\|\phi-\psi\|{\rm e}^{\epsilon(t_{0}-T)},\sup_{t_{0}-T<s\leq 0}|x(T+s)-y(T+s)|e^{\epsilon s}\Big{\}}.\] By (3.3), we obtain \[V_{i}^{\prime}(T) \leq \epsilon V_{i}(T)-\underline{a}_{i}\Big{[}\beta_{i}(T)+D_{i}(T)\Big{]}V_{i}(T)\] \[+\mathrm{e}^{\epsilon(T-t_{0})}\mathcal{L}_{i}(T)\max\Big{\{}\|\phi-\psi\|\mathrm{e}^{\epsilon(t_{0}-T)},\sup_{t_{0}-T<s\leq 0}\mathrm{e}^{-\epsilon(T+s-t_{0})+\epsilon s}\overline{a}_{i}V_{i}(T+s)\Big{\}}\] \[= \epsilon V_{i}(T)-\underline{a}_{i}\Big{[}\beta_{i}(T)+D_{i}(T)\Big{]}V_{i}(T)+\overline{a}_{i}\mathcal{L}_{i}(T)\max\left\{\frac{\|\phi-\psi\|}{\overline{a}_{i}},\sup_{t_{0}-T<s\leq 0}V_{i}(T+s)\right\}.\] By (H1), the definition of \(T\), and (3.5), we have \[V_{i}^{\prime}(T)\leq\epsilon V_{i}(T)-\underline{a}_{i}\Big{[}\beta_{i}(T)+D_{i}(T)\Big{]}V_{i}(T)+\overline{a}_{i}\mathcal{L}_{i}(T)V_{i}(T).\] From (3.5) and (H5), we conclude that \[V_{i}^{\prime}(T)\leq\left[\epsilon-\underline{a}_{i}\Big{(}\beta_{i}(T)+D_{i}(T)\Big{)}+\overline{a}_{i}\mathcal{L}_{i}(T)\right]V_{i}(T)<0,\] which contradicts (3.5) and hence (3.4) holds. From (3.3) and (3.4), we obtain \[|x(t)-y(t)|\mathrm{e}^{\epsilon(t-t_{0})}\min_{i}\left\{\overline{a}_{i}^{-1}\right\}\leq|V(t)|\leq\max_{i}\{\underline{a}_{i}^{-1}\}\|\phi-\psi\|,\] thus \[|x(t)-y(t)|\leq C\mathrm{e}^{-\epsilon(t-t_{0})}\|\phi-\psi\|,\quad\forall t\geq t_{0},\] with \(C=\frac{\max_{i}\{\underline{a}_{i}^{-1}\}}{\min_{i}\{\overline{a}_{i}^{-1}\}}=\frac{\max_{i}\{\overline{a}_{i}\}}{\min_{i}\{\underline{a}_{i}\}}\geq 1\), which shows that the system (2.2) is globally exponentially stable. We remark that hypothesis (H2) trivially holds (with \(D_{i}(t)=0\) for all \(t>0\)) when the functions \(a_{i}\) do not explicitly depend on time \(t\), i.e. \(a_{i}(t,u)=a_{i}(u)\) for all \(i=1,\ldots,n\) and \(u\in\mathbb{R}\). Thus, under the assumption **(h5)**: For all \(t\geq 0\) and \(i=1,\ldots,n\), we have \(\underline{a}_{i}\beta_{i}(t)-\overline{a}_{i}\mathcal{L}_{i}(t)>\epsilon\), we have the following result for system \[x_{i}^{\prime}(t)=a_{i}(x_{i}(t))\Big{[}-b_{i}(t,x_{i}(t))+f_{i}(t,x_{t})\Big{]},\quad\ t\geq 0,\,i=1,\ldots,n. \tag{3.6}\] **Corollary 3.2**.: _Assume (H1), (H3), (H4), and (h5) hold.
Then, system (3.6) is globally exponentially stable._ Proof.: Hypothesis (H2) holds with \(D_{i}(t)=0\), thus the result comes from Theorem 3.1. Now consider the model studied in [29] \[x_{i}^{\prime}(t)=a_{i}(t,x_{i}(t))\left[-b_{i}(t,x_{i}(t))+\sum_{k=1}^{K}\sum_{j=1}^{n}f_{ijk}(t,x_{jt})\right],\quad\ t\geq 0,\,i=1,\ldots,n, \tag{3.7}\] where \(n,K\in\mathbb{N}\), \(a_{i}\) and \(b_{i}\) are functions as in system (2.2) and \(f_{ijk}:[0,+\infty)\times UC_{\epsilon}^{1}\to\mathbb{R}\) are continuous functions for \(i,j=1,\ldots,n\) and \(k=1,\ldots,K\). We will also assume the following conditions: **(h4)**: for each \(i,j=1,\ldots,n\) and \(k=1,\ldots,K\), there exists a continuous function \(\mathcal{F}_{ijk}:[0,+\infty)\to[0,+\infty)\) such that \[|f_{ijk}(t,\varphi)-f_{ijk}(t,\psi)|\leq\mathcal{F}_{ijk}(t)\|\varphi-\psi\|_{\epsilon},\ \ \ \ \forall t\geq 0,\,\varphi,\psi\in UC_{\epsilon}^{1}.\] **(h5')**: for all \(t\geq 0\) and \(i=1,\ldots,n\), we have \[\underline{a}_{i}\Big{(}\beta_{i}(t)+D_{i}(t)\Big{)}-\overline{a}_{i}\sum_{k=1}^{K}\sum_{j=1}^{n}\mathcal{F}_{ijk}(t)>\epsilon.\] As system (3.7) is a particular situation of (2.2), the following stability criterion holds. **Corollary 3.3**.: _Assume that (H1), (H2), (H3), (h4) and (h5') hold. Then system (3.7) is globally exponentially stable._ Proof.: System (3.7) is a particular situation of (2.2) with \[f_{i}(t,\varphi)=\sum_{k=1}^{K}\sum_{j=1}^{n}f_{ijk}(t,\varphi_{j}),\ \ \ \ \forall t\geq 0,\,\varphi=(\varphi_{1},\ldots,\varphi_{n})\in UC_{\epsilon}^{n}.\] From (h4), we know that (H4) holds with \[\mathcal{L}_{i}(t)=\sum_{k=1}^{K}\sum_{j=1}^{n}\mathcal{F}_{ijk}(t),\ \ \ \ \forall t\geq 0,\,i=1,\ldots,n.\] Moreover, (H5) reads as (h5'). Thus the result comes from Theorem 3.1. **Remark 3.4**.: _We remark that the exponential stability of (3.7) was proved in [29] under the assumptions (H1), (H2), (H3), (h4), and a condition equivalent to_ **(h5")**: _for all_ \(t\geq 0\) _and_ \(i=1,\ldots,n\)_, we have_ \[\underline{a}_{i}\Big{(}\beta_{i}(t)+D_{i}(t)\Big{)}-\sum_{k=1}^{K}\sum_{j=1}^{n}\overline{a}_{j}\mathcal{F}_{ijk}(t)>\epsilon. \tag{3.8}\] _We emphasize that conditions (h5') and (3.8) are different, thus Corollary 3.3 presents a new exponential stability criterion for the system (3.7)._ ## 4 Existence of periodic solution In this section, we assume that (2.2) is a periodic system and we establish sufficient conditions for the existence of a periodic solution. The existence of a periodic solution will be proved through Mawhin's Continuation Theorem. Before stating the referred theorem, we need to recall some definitions and facts. **Definition 4.1**.: _Let \(X\) and \(Z\) be two Banach spaces. A linear mapping \(L:\mathrm{Dom}\;L\subseteq X\to Z\) is called a Fredholm mapping of index zero if \(\dim\mathrm{Ker}_{L}=\mathrm{codim}\,\mathrm{Im}_{L}<\infty\) and \(\mathrm{Im}_{L}\) is closed in \(Z\)._ Given a Fredholm mapping of index zero, \(L:\mathrm{Dom}\;L\subseteq X\to Z\), it is well known that there are continuous projectors \(P:X\to X\) and \(Q:Z\to Z\) such that \(\mathrm{Im}_{P}=\mathrm{Ker}_{L}\), \(\mathrm{Ker}_{Q}=\mathrm{Im}_{L}=\mathrm{Im}_{I-Q}\), \(X=\mathrm{Ker}_{L}\oplus\mathrm{Ker}_{P}\) and \(Z=\mathrm{Im}_{L}\oplus\mathrm{Im}_{Q}\). It follows that \(L|_{\mathrm{Dom}\;L\cap\mathrm{Ker}_{P}}:\mathrm{Dom}\;L\cap\mathrm{Ker}_{P}\rightarrow\mathrm{Im}_{L}\) is invertible. We denote the inverse of that map by \(K_{P}\). **Definition 4.2**.: _Let \(U\) be an open bounded subset of \(X\).
We say that a continuous mapping \(N:\overline{U}\subseteq X\to Z\) is L-compact on \(\overline{U}\) if the set \(QN(\overline{U})\) is bounded and the mapping \(K_{P}(I-Q)N:\overline{U}\subseteq X\to X\) is compact._ **Theorem 4.1** (Mawhin's Continuation Theorem).: _Let \(X\) be a Banach space and \(\Omega\subseteq X\) an open bounded set. Suppose \(L:\mathrm{Dom}\;L\subset X\to X\) is a Fredholm operator with zero index and that \(N:\overline{\Omega}\to X\) is L-compact on \(\overline{\Omega}\). Moreover, assume that all the following conditions are satisfied:_ 1. \(Lx\neq\lambda Nx,\quad\forall x\in\partial\Omega\cap\mathrm{Dom}\;L,\,\lambda \in(0,1)\)_;_ 2. \(QNx\neq 0,\quad\forall x\in\partial\Omega\cap\mathrm{Ker}\;L\)_;_ 3. \(\deg_{B}\{QN,\Omega\cap\mathrm{Ker}\;L,0\}\neq 0\)_, where_ \(\deg_{B}\) _denotes the Brouwer degree._ _Then, the equation \(Lx=Nx\) has at least one solution in \(\overline{\Omega}\)._ For studying the system (2.2) in case of being periodic, the following hypotheses will be considered: (H1*) For each \(i=1,\ldots,n\), there exist \(\overline{a}_{i}\,,\underline{a}_{i}>0\) such that \[\underline{a}_{i}<a_{i}(t,u)<\overline{a}_{i}\text{ for all }t\geq 0,\,u\in \mathbb{R};\] **(H2*)**: There is \(\omega>0\) such that, for each \(i=1,\ldots,n\), \[a_{i}(t,u)=a_{i}(t+\omega,u),\quad b_{i}(t,u)=b_{i}(t+\omega,u),\quad f_{i}(t, \phi)=f_{i}(t+\omega,\phi)\] for all \(t\geq 0\), \(u\in\mathbb{R}\), and \(\phi\in BC\); **(H3*)**: For each \(i=1,\ldots,n\), there exist \(\omega-\)periodic continuous functions \(\beta_{i},\beta_{i}^{*}:[0,+\infty)\rightarrow(0,+\infty)\) such that \[\beta_{i}(t)\leq\frac{b_{i}(t,u)-b_{i}(t,v)}{u-v}\leq\beta_{i}^{*}(t),\quad \forall t\in[0,\omega],\,\forall u,v\in\mathbb{R},\,u\neq v;\] **(H4*)**: For each \(i=1,\ldots,n\), there exists a \(\omega-\)periodic continuous function \(\mathcal{L}_{i}:[0,+\infty)\rightarrow[0,+\infty)\) such that \[|f_{i}(t,\phi)-f_{i}(t,\psi)|\leq\mathcal{L}_{i}(t)\|\phi-\psi\|,\quad\forall t \in[0,\omega],\,\forall\phi,\psi\in BC;\] **(H5*)**: For each \(i=1,\ldots,n\), \[\beta_{i}(t)>\mathcal{L}_{i}(t),\quad\forall t\in[0,\omega].\] From (H2*), we conclude that the continuous functions \(t\mapsto b_{i}(t,0)\) and \(t\mapsto f_{i}(t,0)\) are \(\omega\)-periodic and therefore bounded. From (H3*), we also conclude that \(\beta_{i}\) are bounded away from zero and \(\beta_{i}^{*}\) are bounded. Defining \[\underline{\beta}_{i}:=\min_{t\in[0,\omega]}\beta_{i}(t),\ \overline{\beta}_{i}^{*}:=\max_{t\in[0, \omega]}\beta_{i}^{*}(t),\ \overline{b}_{i}:=\max_{t\in[0,\omega]}|b_{i}(t,0)|,\ \text{and}\ \overline{f}_{i}:=\max_{t\in[0, \omega]}|f_{i}(t,0)|, \tag{4.1}\] so that we have \(0<\underline{\beta}_{i},\overline{\beta}_{i}^{*}\), and \(0\leq\overline{b}_{i},\overline{f}_{i}\). We denote by \(X\) the Banach space \[X=\Big{\{}\phi\in C(\mathbb{R}:\mathbb{R}^{n}):\phi\ \text{is}\ \omega-\text{ periodic}\},\] with the norm \(\|\phi\|=\sup_{t\in[0,\omega]}|\phi(t)|\), for \(\phi\in X\). For \(\text{Dom}_{L}=\{\phi\in X:\phi^{\prime}\in X\}\subseteq X\), define the linear operator \(L:\text{Dom}_{L}\to X\) by \[L\phi=\phi^{\prime} \tag{4.2}\] i.e., for all \(t\in\mathbb{R}\) and \(\phi(t)=(\phi_{1}(t),\ldots,\phi_{n}(t))\in\text{Dom}_{L}\), we have \(\Big{(}L\phi\Big{)}(t)=(\phi_{1}^{\prime}(t),\ldots,\phi_{n}^{\prime}(t))\). 
It is not difficult to show that \(\text{Ker}_{L}\cong\mathbb{R}^{n}\) and \[\text{Im}_{L}=\left\{\phi=(\phi_{1},\ldots,\phi_{n})\in X:\int_{0}^{\omega} \phi_{1}(t)dt=\cdots=\int_{0}^{\omega}\phi_{n}(t)dt=0\right\}, \tag{4.3}\] with \(\text{Im}_{L}\) closed in \(X\) and \(\dim\text{Ker}_{L}=\text{codim}\,\text{Im}_{L}=n\), thus \(L\) is a Fredholm operator with zero index. Now, we consider the projection \(P:X\to X\) defined by \[P\phi=\frac{1}{\omega}\int_{0}^{\omega}\phi(t)dt=\frac{1}{\omega}\left(\int_{ 0}^{\omega}\phi_{1}(t)dt,\ldots,\int_{0}^{\omega}\phi_{n}(t)dt\right),\ \ \ \ \forall\phi=(\phi_{1},\ldots,\phi_{n})\in X. \tag{4.4}\] The projection \(P\) is continuous and, considering \(Q\phi=P\phi\), we have \(\text{Im}_{P}=\text{Ker}_{L}\), \(\text{Ker}_{Q}=\text{Im}_{L}\), and the operator \(L_{|_{\text{Dom}_{L}\cap\text{Ker}_{P}}}:\text{Dom}_{L}\cap\text{Ker}_{P}\to \text{Im}_{L}\) is invertible and we denote the inverse by \(K_{P}\). By (4.2) and (4.3), we obtain that \(K_{p}\phi=\Big{(}(K_{P}\phi)_{1},\cdots,(K_{P}\phi)_{n}\Big{)}\) with \[(K_{P}\phi)_{i}(t)=\int_{0}^{t}\phi_{i}(u)du-\frac{1}{\omega}\int_{0}^{\omega} \int_{0}^{u}\phi_{i}(s)dsdu,\ \forall\phi=(\phi_{1},\ldots,\phi_{n})\in\text{Im}_{L}, \tag{4.5}\] for \(i=1,\ldots,n\). For a convenient bounded open set \(\Omega\subseteq X\), define the function \(N:\overline{\Omega}\to X\) by \(N\phi=\Big{(}(N\phi)_{1},\ldots,(N\phi)_{n}\Big{)}\), where \[(N\phi)_{i}(t)=a_{i}(t,\phi_{i}(t))\bigg{[}-b_{i}(t,\phi_{i}(t))+f_{i}(t,\phi_ {t})\bigg{]}, \tag{4.6}\] for all \(t\in\mathbb{R}\), \(\phi=(\phi_{1},\ldots,\phi_{n})\in X\), and \(i=1,\ldots,n\). We claim that, from the continuity of \(a_{i}\), \(b_{i}\), and \(f_{i}\), (4.5) and (4.6), we can conclude that, for any \(\alpha>0\), the mapping \(N\) is \(L\)-compact in the set \(\Omega=\{\phi\in X:\|\phi\|<\alpha\}\). In fact, for any \(t\in\mathbb{R}\) and any \(x\in X\), we have \(\|QNx\|\leq\max\limits_{i}\overline{a}_{i}[2\overline{\beta}_{i}^{*}\alpha+ \overline{b}_{i}+\overline{f}_{i}]\), and we conclude that \(QN(X)\) is bounded, implying that \(QN(\overline{\Omega})\) is bounded. Additionally, we also need to show that the mapping \(K_{P}(I-Q)N\) is compact. To achieve this, we show that for any bounded \(V\subseteq\overline{\Omega}\), the set \(\overline{K_{P}(I-Q)N(V)}\) is compact. It is easy to verify that, for any sequence, \((\phi_{n})\), with \(\phi_{n}\in V\), \(n\in\mathbb{N}\), such that \(\phi_{n}\to\phi\), we have, for any \(t,t_{0}\in\mathbb{R}\), \[\begin{split}\lim_{n\to+\infty}&|K_{P}(I-Q)N(\phi_ {n})(t)-K_{P}(I-Q)N(\phi_{n})(t_{0})|\\ &\leq 3\max\limits_{i}[\overline{a}_{i}(2\overline{\beta}_{i}^{*} \alpha+\overline{b}_{i}+\overline{f}_{i})]\,(t-t_{0}).\end{split} \tag{4.7}\] and \[\lim_{n\to+\infty}\|K_{P}(I-Q)N(\phi_{n})\|\leq 3\omega\max\limits_{i}[ \overline{a}_{i}(2\overline{\beta}_{i}^{*}\alpha+\overline{b}_{i}+\overline{ f}_{i})]. \tag{4.8}\] Inequality (4.7) shows that the family of functions \(\overline{K_{P}(I-Q)N(V)}\) is equicontinuous and inequality (4.8) shows that the norms of all the functions in the referred family of functions are bounded by the same constant. Ascoli-Arzela theorem allows us to conclude that the set \(\overline{K_{P}(I-Q)N(V)}\) is compact. Thus the mapping \(K_{P}(I-Q)N\) is compact and the claim is proved. 
Notice that equation (4.8) only allows us to conclude that \[\lim_{n\to+\infty}|K_{P}(I-Q)N(\phi_{n})(t)|\leq 3\omega\max\limits_{i}[\overline{a}_{i}(2\overline{\beta}_{i}^{*}\alpha+\overline{b}_{i}+\overline{f}_{i})],\text{ for any }t\in[0,\omega].\] Thus we are not able to apply directly Ascoli-Arzela's theorem to functions in \[\overline{K_{P}(I-Q)N(\overline{\Omega})}.\] We must instead consider the set \(\widetilde{\Omega}=\{\phi\in C([0,\omega]:\mathbb{R}^{n}):\|\phi\|<\alpha\}\) in place of \(\Omega\), with the norm defined in the same way. This is not a problem: once we show the compactness property for \(\widetilde{\Omega}\), the same property holds for \(\Omega\), because the functions in \(\Omega\) are \(\omega-\)periodic. In view of (4.6) and (4.2), for \(\lambda\in(0,1)\) and \(x(t)=(x_{1}(t),\ldots,x_{n}(t))\in\text{Dom}_{L}\), the operator equation \(Lx=\lambda Nx\) is equivalent to the following equation: \[x_{i}^{\prime}(t)=\lambda a_{i}(t,x_{i}(t))\bigg{[}-b_{i}(t,x_{i}(t))+f_{i}(t,x_{t})\bigg{]},\quad\forall\lambda\in(0,1),\,i=1,\ldots,n. \tag{4.9}\] Now we are in a position to prove the existence of a periodic solution of the general differential system (2.2). **Theorem 4.2**.: _Suppose that (H1*), (H2*), (H3*), (H4*), and (H5*) hold. Then, system (2.2) has at least one \(\omega-\)periodic solution._ Proof.: Our objective is to apply Theorem 4.1. To accomplish this, we need to define a bounded open set \(\Omega\subseteq X\) for which the conditions 1., 2., and 3. in Theorem 4.1 hold. Let \(x=x(t)=(x_{1}(t),\ldots,x_{n}(t))^{T}\) be an arbitrary \(\omega-\)periodic solution of equation (4.9). The components \(x_{i}(t)\) of \(x(t)\) are all continuously differentiable, thus, for each \(i=1,\ldots,n\), there is \(t_{i}\in[0,\omega]\) such that \[|x_{i}(t_{i})|=\max_{t\in[0,\omega]}|x_{i}(t)|.\] Hence \(x_{i}^{\prime}(t_{i})=0\) for all \(i=1,\ldots,n\). Choose \(i\in\{1,\ldots,n\}\) such that \(|x_{i}(t_{i})|=\max_{t\in[0,\omega]}|x(t)|\). Consequently, from (4.9), we have \[b_{i}(t_{i},x_{i}(t_{i}))=f_{i}(t_{i},x_{t_{i}}), \tag{4.10}\] thus \[b_{i}(t_{i},x_{i}(t_{i}))-b_{i}(t_{i},0)+b_{i}(t_{i},0)=f_{i}(t_{i},x_{t_{i}})-f_{i}(t_{i},0)+f_{i}(t_{i},0).\] By (H3*), (H4*), and (4.1) we obtain \[\beta_{i}(t_{i})|x_{i}(t_{i})|-\overline{b}_{i}\leq\mathcal{L}_{i}(t_{i})\|x_{t_{i}}\|+\overline{f}_{i},\] and, as \(\|x_{t_{i}}\|=|x(t_{i})|=|x_{i}(t_{i})|\), we get \[|x_{i}(t_{i})|\left(1-\frac{\mathcal{L}_{i}(t_{i})}{\beta_{i}(t_{i})}\right)\leq\frac{\overline{f}_{i}+\overline{b}_{i}}{\beta_{i}(t_{i})}.\] From (H2*), (H5*), and (4.1), we can define \[\overline{\xi}=\max_{j,t}\left\{\left(1-\frac{\mathcal{L}_{j}(t)}{\beta_{j}(t)}\right)^{-1}\frac{\overline{f}+\overline{b}}{\underline{\beta}}\right\}+1>0, \tag{4.11}\] where \(\overline{b}=\max_{i}\overline{b}_{i}\), \(\overline{f}=\max_{i}\overline{f}_{i}\), and \(\underline{\beta}=\min_{i}\underline{\beta}_{i}\), thus we conclude that \[|x_{i}(t_{i})|<\overline{\xi}. \tag{4.12}\] Consequently, \(\|x\|<\overline{\xi}\), and taking \[\Omega=\big{\{}\phi\in X:\|\phi\|<\overline{\xi}\big{\}}, \tag{4.13}\] we conclude that the first condition of Theorem 4.1 is satisfied. Now, we prove that the second condition of Theorem 4.1 holds. Let \(x=x(t)=(x_{1}(t),\ldots,x_{n}(t))^{T}\in\partial\Omega\cap\mathrm{Ker}_{L}\). As \(\mathrm{Ker}_{L}\cong\mathbb{R}^{n}\), \(x(t)\) is a constant vector in \(\mathbb{R}^{n}\), i.e. \(x(t)=(x_{1},\ldots,x_{n})\), and by (4.13), we conclude that there is \(i\in\{1,\ldots,n\}\) such that \(|x_{i}|=\overline{\xi}\).
By (4.4) and (4.6), we have \[(QNx)_{i}(t)=(QNx)_{i}=\frac{1}{\omega}\int_{0}^{\omega}a_{i}(u,x_{i})\left[-b_{i}(u,x_{i})+f_{i}(u,x)\right]du.\] We claim that \[|(QNx)_{i}|>0. \tag{4.14}\] By contradiction, we assume that \(|(QNx)_{i}|=0.\) Then there is \(t_{i}^{*}\in[0,\omega]\) such that \[b_{i}(t_{i}^{*},x_{i})=f_{i}(t_{i}^{*},x).\] Reproducing the same computations above (see how (4.10) implies (4.12)), we conclude that \[\overline{\xi}=|x_{i}|<\overline{\xi},\] which is a contradiction. Consequently, (4.14) holds and the second condition of Theorem 4.1 is proved. In order to prove the last condition of Theorem 4.1, we consider the continuous function \(\Psi:(\overline{\Omega}\cap\mathrm{Ker}_{L})\times[0,1]\to X\) defined by \(\Psi(x,\mu)=(\Psi(x,\mu)_{1},\ldots,\Psi(x,\mu)_{n})\) with \[\Psi(x,\mu)_{i}=-\mu\overline{a}_{i}\overline{\beta}_{i}^{*}x_{i}+(1-\mu)(QNx)_{i},\] for all \(x=(x_{1},\ldots,x_{n})\in\overline{\Omega}\cap\mathrm{Ker}_{L}\cong\overline{\Omega}\cap\mathbb{R}^{n}\), \(\mu\in[0,1]\), and \(i=1,\ldots,n\). We claim that \[|\Psi(x,\mu)|\neq 0,\ \ \ \forall x\in(\partial\Omega)\cap\mathrm{Ker}_{L},\,\mu\in[0,1]. \tag{4.15}\] Consequently, defining \(\Phi:\mathbb{R}^{n}\to\mathbb{R}^{n}\) by \[\Phi x=\left(-\overline{a}_{1}\overline{\beta}_{1}^{*}x_{1},\ldots,-\overline{a}_{n}\overline{\beta}_{n}^{*}x_{n}\right),\ \ \ \ \forall x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n},\] the homotopy invariance theorem [26] implies that \[\deg_{B}\left\{QN,\Omega\cap\mathrm{Ker}_{L},0\right\}=\deg_{B}\left\{\Phi,\Omega\cap\mathrm{Ker}_{L},0\right\}\neq 0.\] Now, it remains to prove that (4.15) holds to conclude the proof. Let \(x=(x_{1},\ldots,x_{n})\in(\partial\Omega)\cap\mathrm{Ker}_{L}\) and \(\mu\in[0,1]\). The function \(x\) is constant because \(\mathrm{Ker}_{L}\cong\mathbb{R}^{n}\) and, by (4.13), we conclude that there is \(i\in\{1,\ldots,n\}\) such that \(|x|=|x_{i}|=\overline{\xi}\). We claim that \[|\Psi(x,\mu)_{i}|\neq 0.\] By contradiction, assume that \[|\Psi(x,\mu)_{i}|=0. \tag{4.16}\] From (4.4), (4.6), and (4.16), we have \[-\mu\overline{a}_{i}\overline{\beta}_{i}^{*}x_{i}+\frac{1-\mu}{\omega}\int_{0}^{\omega}a_{i}(t,x_{i})\Big{[}-b_{i}(t,x_{i})+f_{i}(t,x)\Big{]}dt=0,\] thus there exists \(t_{i}^{**}\in[0,\omega]\) such that \[-\mu\overline{a}_{i}\overline{\beta}_{i}^{*}x_{i}+(1-\mu)a_{i}(t_{i}^{**},x_{i})\Big{[}-b_{i}(t_{i}^{**},x_{i})+f_{i}(t_{i}^{**},x)\Big{]}=0. \tag{4.17}\] Now, we assume that \(|x|=x_{i}=\overline{\xi}>0\) (the situation \(|x|=-x_{i}=\overline{\xi}\) is analogous).
By condition (H1*) and (H3*), we have \[a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},x_{i}) =a_{i}(t_{i}^{**},x_{i})\Big{[}b_{i}(t_{i}^{**},x_{i})-b_{i}(t_{i}^{**},0)\Big{]}+a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},0)\] \[\leq\overline{a}_{i}\overline{\beta}_{i}^{*}x_{i}+a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},0),\] then \[a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},x_{i})-\overline{a}_{i}\overline{\beta}_{i}^{*}x_{i}-a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},0)\leq 0.\] Consequently, from (4.17), we have \[- a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},x_{i})+(1-\mu)a_{i}(t_{i}^{**},x_{i})f_{i}(t_{i}^{**},x)\] \[\geq \mu\left[a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},x_{i})-\overline{a}_{i}\overline{\beta}_{i}^{*}x_{i}-a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},0)\right]-a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},x_{i})\] \[+(1-\mu)a_{i}(t_{i}^{**},x_{i})f_{i}(t_{i}^{**},x)\] \[= -\mu\overline{a}_{i}\overline{\beta}_{i}^{*}x_{i}+(1-\mu)a_{i}(t_{i}^{**},x_{i})\left[-b_{i}(t_{i}^{**},x_{i})+f_{i}(t_{i}^{**},x)\right]-\mu a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},0)\] \[= -\mu a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},0)\] \[\geq a_{i}(t_{i}^{**},x_{i})\min\left\{0,-b_{i}(t_{i}^{**},0)\right\}\] and by (H1*), we obtain \[-b_{i}(t_{i}^{**},x_{i})+(1-\mu)f_{i}(t_{i}^{**},x)\geq\min\Big{\{}0,-b_{i}(t_{i}^{**},0)\Big{\}}.\] Consequently, \[b_{i}(t_{i}^{**},x_{i})-b_{i}(t_{i}^{**},0)\leq|f_{i}(t_{i}^{**},x)-f_{i}(t_{i}^{**},0)|+\overline{b}_{i}+\overline{f}_{i},\] and, recalling that \(x_{i}>0\) and \(\|x\|=|x|\), from (H3*), (H4*), and (4.1) we have \[x_{i}\leq\frac{\mathcal{L}_{i}(t_{i}^{**})}{\beta_{i}(t_{i}^{**})}|x|+\frac{\overline{b}_{i}+\overline{f}_{i}}{\underline{\beta}_{i}}.\] As \(|x|=x_{i}=\overline{\xi}>0\), we obtain \[\overline{\xi}=x_{i}\leq\left(1-\frac{\mathcal{L}_{i}(t_{i}^{**})}{\beta_{i}(t_{i}^{**})}\right)^{-1}\frac{\overline{b}_{i}+\overline{f}_{i}}{\underline{\beta}_{i}},\] and by (4.11) we conclude that \[\overline{\xi}=x_{i}\leq\left(1-\frac{\mathcal{L}_{i}(t_{i}^{**})}{\beta_{i}(t_{i}^{**})}\right)^{-1}\frac{\overline{b}_{i}+\overline{f}_{i}}{\underline{\beta}_{i}}<\overline{\xi},\] which is a contradiction. The case when \(x_{i}<0\) is very similar to the previous one and we present it briefly. From (H1*), (H3*), and (4.1), we obtain \[a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},x_{i})-\overline{a}_{i}\overline{\beta}_{i}^{*}x_{i}-a_{i}(t_{i}^{**},x_{i})b_{i}(t_{i}^{**},0)\geq 0\] and, from (H1*), (H5*), and (4.11), we obtain \[-b_{i}(t_{i}^{**},x_{i})+(1-\mu)f_{i}(t_{i}^{**},x)\leq\max\Big{\{}0,-b_{i}(t_{i}^{**},0)\Big{\}}.\] Therefore, \[b_{i}(t_{i}^{**},x_{i})-b_{i}(t_{i}^{**},0)\geq-|f_{i}(t_{i}^{**},x)-f_{i}(t_{i}^{**},0)|-\overline{b}_{i}-\overline{f}_{i}.\] Since \(x_{i}<0\) and \(\|x\|=|x|\), from (H3*), (H4*), and (4.1), and taking into account that \(x_{i}=-\overline{\xi}<0\), we obtain \[x_{i}\geq\frac{\mathcal{L}_{i}(t_{i}^{**})}{\beta_{i}(t_{i}^{**})}x_{i}-\frac{\overline{b}_{i}+\overline{f}_{i}}{\underline{\beta}_{i}}.\] Using this last inequality and (4.11), we conclude that \[-\overline{\xi}=x_{i}\geq-\left(1-\frac{\mathcal{L}_{i}(t_{i}^{**})}{\beta_{i}(t_{i}^{**})}\right)^{-1}\frac{\overline{b}_{i}+\overline{f}_{i}}{\underline{\beta}_{i}}>-\overline{\xi},\] and we obtain again a contradiction. Using the stability criteria established in the previous section, we are now in a position to present the following results. From Theorems 3.1 and 4.2, we have the following result.
**Theorem 4.3**.: _Assume (H1*), (H2*), (H2), (H3*), (H4) with \(\mathcal{L}_{i}\) \(\omega-\)periodic continuous functions, (H5*), and (H5). Then the system (2.2) has an \(\omega-\)periodic solution which is globally exponentially stable._ In the case of \(D_{i}(t)\leq 0\), for all \(t\geq 0\) and \(i=1,\ldots,n\), hypothesis (H5) implies (H5*), thus the following result is an immediate consequence of Theorem 4.3. **Corollary 4.4**.: _If (H1*), (H2*), (H2) with \(D_{i}(t)\leq 0\), for all \(t\geq 0\) and \(i=1,\ldots,n\), (H3*), (H4) with \(\mathcal{L}_{i}\) \(\omega-\)periodic continuous functions, and (H5) hold, then the system (2.2) has an \(\omega-\)periodic solution which is globally exponentially stable._ In the particular case of functions \(a_{i}\) that do not explicitly depend on time \(t\), from Corollary 4.4, we have the following result. **Corollary 4.5**.: _If (H1*), (H3*), (H4) with \(\mathcal{L}_{i}\) \(\omega-\)periodic continuous functions, and (H5) hold, then system (3.6) has an \(\omega-\)periodic solution which is globally exponentially stable._ Now, we assume that the system (3.7) is \(\omega-\)periodic, i.e. the following hypothesis holds: **(h1*)**: There is \(\omega>0\) such that, for each \(i,j=1,\ldots,n\) and \(k=1,\ldots,K\), \[a_{i}(t,u)=a_{i}(t+\omega,u),\hskip 14.226378ptb_{i}(t,u)=b_{i}(t+\omega,u),\hskip 14.226378ptf_{ijk}(t,\phi)=f_{ijk}(t+\omega,\phi),\] for all \(t\geq 0\), \(u\in\mathbb{R}\), and \(\phi\in BC\). From Corollary 3.3, Remark 3.4, and Theorem 4.2, we obtain the next result. **Theorem 4.6**.: _Assume (h1*), (H1*), (H2), (H3*), (h4) with \(\mathcal{F}_{ijk}\) \(\omega-\)periodic continuous functions, and_ \[\beta_{i}(t)>\sum_{k=1}^{K}\sum_{j=1}^{n}\mathcal{F}_{ijk}(t),\ \ \ \ \forall t\in[0,\omega],\,i=1,\ldots,n.\] _If one of the conditions (h5') or (h5") holds, then the system (3.7) has an \(\omega-\)periodic solution which is globally exponentially stable._ ## 5 Applications to Cohen-Grossberg neural network models In this section, we apply the results in Sections 3 and 4 to Cohen-Grossberg type models. As we want to apply them to low-order and high-order models, we consider the following general Cohen-Grossberg model with time-varying discrete delays and distributed delays: \[x_{i}^{\prime}(t) = a_{i}(t,x_{i}(t))\bigg{[}-b_{i}(t,x_{i}(t))+F_{i}\bigg{(}\sum_{p=1}^{P}\sum_{j,l=1}^{n}c_{ijlp}(t)h_{ijlp}\Big{(}x_{j}(t-\tau_{ijp}(t)),x_{l}(t-\widetilde{\tau}_{ilp}(t))\Big{)}\bigg{)} \tag{5.1}\] \[+G_{i}\bigg{(}\sum_{q=1}^{Q}\sum_{j,l=1}^{n}d_{ijlq}(t)f_{ijlq}\left(\int_{-\infty}^{0}g_{ijq}(x_{j}(t+s))d\eta_{ijq}(s),\int_{-\infty}^{0}\widetilde{g}_{ilq}(x_{l}(t+s))d\widetilde{\eta}_{ilq}(s)\right)\bigg{)}\] \[+I_{i}(t)\bigg{]},\ \ \ \ t\geq 0,\ \ \ \ i=1,\ldots,n,\] where \(n,P,Q\in\mathbb{N}\) and \(a_{i}:[0,+\infty)\times\mathbb{R}\rightarrow(0,+\infty)\), \(b_{i}:[0,+\infty)\times\mathbb{R}\rightarrow\mathbb{R}\), \(c_{ijlp},d_{ijlq},I_{i}:[0,+\infty)\rightarrow\mathbb{R}\), \(\tau_{ijp},\widetilde{\tau}_{ilp}:[0,+\infty)\rightarrow[0,+\infty)\), \(h_{ijlp},f_{ijlq}:\mathbb{R}^{2}\rightarrow\mathbb{R}\), \(F_{i},G_{i},g_{ijq},\widetilde{g}_{ilq}:\mathbb{R}\rightarrow\mathbb{R}\) are continuous functions, and \(\eta_{ijq},\,\widetilde{\eta}_{ilq}:(-\infty,0]\rightarrow\mathbb{R}\) are non-decreasing bounded functions such that \(\eta_{ijq}(0)-\eta_{ijq}(-\infty)=1\) and \(\widetilde{\eta}_{ilq}(0)-\widetilde{\eta}_{ilq}(-\infty)=1\), for each \(i,j,l=1,\ldots,n\), \(p=1,\ldots,P\), and \(q=1,\ldots,Q\).
Here, we assume the following Lipschitz conditions: **(H4**)**: For each \(i,j,l=1,\ldots,n\), \(p=1,\ldots,P\), and \(q=1,\ldots,Q\), there are positive numbers \(\gamma_{ijlp}^{(1)}\), \(\gamma_{ijlp}^{(2)}\), \(\mu_{ijlq}^{(1)}\), \(\mu_{ijlq}^{(2)}\), \(\xi_{ijq}\), \(\widetilde{\xi}_{ilq}\), \(\zeta_{i}\), and \(\varsigma_{i}\) such that \[|h_{ijlp}(u_{1},u_{2})-h_{ijlp}(v_{1},v_{2})|\leq\gamma_{ijlp}^{(1)}|u_{1}-v_{1}|+\gamma_{ijlp}^{(2)}|u_{2}-v_{2}|\] \[|f_{ijlq}(u_{1},u_{2})-f_{ijlq}(v_{1},v_{2})|\leq\mu_{ijlq}^{(1)}|u_{1}-v_{1}|+\mu_{ijlq}^{(2)}|u_{2}-v_{2}|\] for all \(u_{1},u_{2},v_{1},v_{2}\in\mathbb{R}\), and \[|g_{ijq}(u)-g_{ijq}(v)|\leq\xi_{ijq}|u-v|,\qquad|\widetilde{g}_{ilq}(u)-\widetilde{g}_{ilq}(v)|\leq\widetilde{\xi}_{ilq}|u-v|,\] \[|F_{i}(u)-F_{i}(v)|\leq\zeta_{i}|u-v|,\qquad|G_{i}(u)-G_{i}(v)|\leq\varsigma_{i}|u-v|,\] for all \(u,v\in\mathbb{R}\). Now, we state our main stability criterion for model (5.1). **Theorem 5.1**.: _Assume that (H1)-(H3) and (H4**) hold, that the functions \(\tau_{ijp},\widetilde{\tau}_{ijp}\) are bounded, and that there exists \(\vartheta>0\) such that_ \[\int_{-\infty}^{0}\mathrm{e}^{-\vartheta s}d\eta_{ijq}(s)<+\infty,\quad\int_{-\infty}^{0}\mathrm{e}^{-\vartheta s}d\tilde{\eta}_{ilq}(s)<+\infty. \tag{5.2}\] _If there exist \(\varepsilon>0\) and \(w=(w_{1},\ldots,w_{n})>0\) such that, for all \(t\geq 0\) and \(i=1,\ldots,n\),_ \[\underline{a}_{i}\Big{(}\beta_{i}(t)+D_{i}(t)\Big{)}-\overline{a}_{i}\sum_{j,l=1}^{n}\left[\sum_{p=1}^{P}\zeta_{i}|c_{ijlp}(t)|\left(\frac{w_{j}}{w_{i}}\gamma_{ijlp}^{(1)}+\frac{w_{l}}{w_{i}}\gamma_{ijlp}^{(2)}\right)\right.\] \[\qquad\qquad\qquad\qquad\left.+\sum_{q=1}^{Q}\varsigma_{i}|d_{ijlq}(t)|\left(\frac{w_{j}}{w_{i}}\mu_{ijlq}^{(1)}\xi_{ijq}+\frac{w_{l}}{w_{i}}\mu_{ijlq}^{(2)}\widetilde{\xi}_{ilq}\right)\right]>\varepsilon, \tag{5.3}\] _then the model (5.1) is globally exponentially stable._ Proof.: With the change of variables \(y_{i}(t)=w_{i}^{-1}x_{i}(t)\), model (5.1) is transformed into \[y_{i}^{\prime}(t) = a_{i}(t,w_{i}y_{i}(t))w_{i}^{-1}\bigg{[}-b_{i}(t,w_{i}y_{i}(t))+I_{i}(t) \tag{5.4}\] \[+F_{i}\bigg{(}\sum_{p=1}^{P}\sum_{j,l=1}^{n}c_{ijlp}(t)h_{ijlp}\Big{(}w_{j}y_{j}(t-\tau_{ijp}(t)),w_{l}y_{l}(t-\widetilde{\tau}_{ilp}(t))\Big{)}\bigg{)}+G_{i}\bigg{(}\sum_{q=1}^{Q}\sum_{j,l=1}^{n}d_{ijlq}(t)\] \[\cdot f_{ijlq}\left(\int_{-\infty}^{0}g_{ijq}(w_{j}y_{j}(t+s))d\eta_{ijq}(s),\int_{-\infty}^{0}\widetilde{g}_{ilq}(w_{l}y_{l}(t+s))d\widetilde{\eta}_{ilq}(s)\right)\bigg{)}\bigg{]},\] for \(t\geq 0\), and \(i=1,\ldots,n\). From (5.3), there exists \(\nu>0\) such that \[\underline{a}_{i}\Big{(}\beta_{i}(t)+D_{i}(t)\Big{)}-\overline{a}_{i}\sum_{j,l=1}^{n}\left[\sum_{p=1}^{P}\zeta_{i}|c_{ijlp}(t)|\left(\frac{w_{j}}{w_{i}}\gamma_{ijlp}^{(1)}+\frac{w_{l}}{w_{i}}\gamma_{ijlp}^{(2)}\right)\right.\] \[\qquad\qquad\qquad\left.+\sum_{q=1}^{Q}\varsigma_{i}|d_{ijlq}(t)|\left(\frac{w_{j}}{w_{i}}\mu_{ijlq}^{(1)}\xi_{ijq}+\frac{w_{l}}{w_{i}}\mu_{ijlq}^{(2)}\widetilde{\xi}_{ilq}\right)\right](1+\nu)>\nu, \tag{5.5}\] for all \(t\geq 0\) and \(i=1,\ldots,n\).
As \(\tau_{ijp}\) and \(\widetilde{\tau}_{ilp}\) are bounded functions, it is possible to define the non-negative real number \[\tau:=\max_{i,j,p}\left(\sup_{t\geq 0}\left\{\tau_{ijp}(t),\widetilde{\tau}_{ijp}(t)\right\}\right).\] As in the proof of [8, Theorem 4.3], from (5.2), we can conclude that there exists \(\alpha\in(0,\vartheta)\) such that \[\int_{-\infty}^{0}\mathrm{e}^{-\alpha s}d\eta_{ijq}(s)<1+\nu\quad\text{and}\quad\int_{-\infty}^{0}\mathrm{e}^{-\alpha s}d\tilde{\eta}_{ijq}(s)<1+\nu, \tag{5.6}\] for all \(i,j=1,\ldots,n\) and \(q=1,\ldots,Q\). Let \(\epsilon:=\min\{\nu,\alpha,\frac{\log(1+\nu)}{\tau+1}\}\) and consider the system (5.4) in the phase space \(UC^{n}_{\epsilon}\). Defining, for each \(i=1,\ldots,n\), \(\widetilde{a}_{i}(t,u):=a_{i}(t,w_{i}u)\), \(\widetilde{b}_{i}(t,u):=w_{i}^{-1}b_{i}(t,w_{i}u)\), and \[\widetilde{f}_{i}(t,\phi):=w_{i}^{-1}F_{i}\bigg{(}\sum_{p=1}^{P}\sum_{j,l=1}^{n}c_{ijlp}(t)h_{ijlp}\Big{(}w_{j}\phi_{j}(-\tau_{ijp}(t)),w_{l}\phi_{l}(-\widetilde{\tau}_{ilp}(t))\Big{)}\bigg{)}+w_{i}^{-1}I_{i}(t)\] \[+w_{i}^{-1}G_{i}\Bigg{(}\sum_{q=1}^{Q}\sum_{j,l=1}^{n}d_{ijlq}(t)f_{ijlq}\bigg{(}\int_{-\infty}^{0}g_{ijq}(w_{j}\phi_{j}(s))d\eta_{ijq}(s),\int_{-\infty}^{0}\widetilde{g}_{ilq}(w_{l}\phi_{l}(s))d\widetilde{\eta}_{ilq}(s)\bigg{)}\Bigg{)}\] for all \(u\in\mathbb{R}\) and \(\phi=(\phi_{1},\ldots,\phi_{n})\in UC^{n}_{\epsilon}\), model (5.4) has the form \[y_{i}^{\prime}(t)=\widetilde{a}_{i}(t,y_{i}(t))\Big{[}-\widetilde{b}_{i}(t,y_{i}(t))+\widetilde{f}_{i}(t,y_{t})\Big{]},\quad t\geq 0,\,i=1,\ldots,n. \tag{5.7}\] For model (5.7), the hypotheses (H1), (H2), and (H3) hold with the same constants \(\underline{a}_{i},\overline{a}_{i}\) and the same functions \(D_{i}(t),\beta_{i}(t)\). From Theorem 3.1, the proof is concluded if hypotheses (H4) and (H5) hold.
For \(\phi=(\phi_{1},\ldots,\phi_{n}),\psi=(\psi_{1},\ldots,\psi_{n})\in UC^{n}_{ \epsilon}\), \(t\geq 0\), and \(i=1,\ldots,n\), from (H4**) we have \[|\widetilde{f}_{i}(t,\phi) -\widetilde{f}_{i}(t,\psi)|\leq w_{i}^{-1}\sum_{j,l=1}^{n}\Bigg{[}\zeta_{i}\sum_{p=1}^{P}|c_{ ijlp}(t)|\] \[\cdot\Big{|}h_{ijlp}\Big{(}w_{j}\phi_{j}(-\tau_{ijp}(t)),w_{l} \phi_{l}(-\widetilde{\tau}_{ilp}(t))\Big{)}-h_{ijlp}\Big{(}w_{j}\psi_{j}(- \tau_{ijp}(t)),w_{l}\psi_{l}(-\widetilde{\tau}_{ilp}(t))\Big{)}\Big{|}\] \[+\varsigma_{i}\sum_{q=1}^{Q}|d_{ijlq}(t)|\left|f_{ijlq}\bigg{(} \int_{-\infty}^{0}g_{ijq}(w_{j}\phi_{j}(s))d\eta_{ijq}(s),\int_{-\infty}^{0} \widetilde{g}_{ilq}(w_{l}\phi_{l}(s))d\widetilde{\eta}_{ilq}(s)\bigg{)}\right.\] \[-\left.\left.f_{ijlq}\bigg{(}\int_{-\infty}^{0}g_{ijq}(w_{j}\psi_ {j}(s))d\eta_{ijq}(s),\int_{-\infty}^{0}\widetilde{g}_{ilq}(w_{l}\psi_{l}(s)) d\widetilde{\eta}_{ilq}(s)\bigg{)}\right|\right]\] \[\leq w_{i}^{-1}\sum_{j,l=1}^{n}\Bigg{[}\zeta_{i}\sum_{p=1}^{P}|c_{ijlp} (t)|\] \[\cdot\Bigg{(}\gamma_{ijlp}^{(1)}w_{j}\Big{|}\phi_{j}(-\tau_{ijp}( t))-\psi_{j}(-\tau_{ijp}(t))\Big{|}+\gamma_{ijlp}^{(2)}w_{l}\Big{|}\phi_{l}(- \widetilde{\tau}_{ilp}(t))-\psi_{l}(-\widetilde{\tau}_{ilp}(t))\Big{|}\Bigg{)}\] \[+\varsigma_{i}\sum_{q=1}^{Q}|d_{ijlq}(t)|\bigg{(}\mu_{ijlq}^{(1)} \Big{|}\int_{-\infty}^{0}g_{ijq}(w_{j}\phi_{j}(s))-g_{ijq}(w_{j}\psi_{j}(s))d \eta_{ijq}(s)\bigg{|}\] \[+\mu_{ijlq}^{(2)}\Big{|}\int_{-\infty}^{0}\widetilde{g}_{ilq}(w_{l }\phi_{l}(s))-\widetilde{g}_{ilq}(w_{l}\psi_{l}(s))d\widetilde{\eta}_{ilq}(s) \Big{|}\Bigg{)}\Bigg{]}.\] Again from (H4**) and by the monotony of \(\eta_{ijq}\) and \(\widetilde{\eta}_{ijq}\) we obtain, \[|\widetilde{f}_{i}(t,\phi)-\widetilde{f}_{i}(t,\psi)| \leq\sum_{j,l=1}^{n}\left[\zeta_{i}\sum_{p=1}^{P}|c_{ijlp}(t)|\bigg{(} \gamma_{ijlp}^{(1)}\frac{w_{j}}{w_{i}}\Big{|}\phi_{j}(-\tau_{ijp}(t))-\psi_{j}( -\tau_{ijp}(t))\Big{|}\right.\] \[\quad+\left.\gamma_{ijlp}^{(2)}\frac{w_{l}}{w_{i}}\Big{|}\phi_{l} (-\widetilde{\tau}_{ilp}(t))-\psi_{l}(-\widetilde{\tau}_{ilp}(t))\Big{|}\right)\] \[\quad+\varsigma_{i}\sum_{q=1}^{Q}|d_{ijlq}(t)|\bigg{(}\mu_{ijlq}^ {(1)}\int_{-\infty}^{0}\xi_{ijq}\frac{w_{j}}{w_{i}}|\phi_{j}(s)-\psi_{j}(s)|d \eta_{ijq}(s)\] \[\quad+\left.\mu_{ijlq}^{(2)}\int_{-\infty}^{0}\widetilde{\xi}_{ ilq}\frac{w_{l}}{w_{i}}|\phi_{l}(s)-\psi_{l}(s)|d\widetilde{\eta}_{ilq}(s) \right)\right] \tag{5.8}\] and consequently \[|\widetilde{f}_{i}(t,\phi)-\widetilde{f}_{i}(t,\psi)| \leq\sum_{j,l=1}^{n}\left[\zeta_{i}\sum_{p=1}^{P}|c_{ijlp}(t)| \bigg{(}\gamma_{ijlp}^{(1)}\frac{w_{j}}{w_{i}}\frac{\big{|}(\phi_{j}-\psi_{j}) (-\tau_{ijp}(t))\big{|}}{\mathrm{e}^{-\epsilon(-\tau_{ijp}(t))}}\mathrm{e}^{ \epsilon\tau_{ijp}(t)}\right.\] \[\quad+\left.\gamma_{ijlp}^{(2)}\frac{w_{l}}{w_{i}}\frac{\big{|}( \phi_{l}-\psi_{l})\big{(}-\widetilde{\tau}_{ilp}(t)\big{)}\big{|}}{\mathrm{e} ^{-\epsilon\big{(}-\widetilde{\tau}_{ilp}(t)\big{)}}}\mathrm{e}^{\epsilon \widetilde{\tau}_{ilp}(t)}\right)\] \[\quad+\varsigma_{i}\sum_{q=1}^{Q}|d_{ijlq}(t)|\bigg{(}\mu_{ijlq}^{ (1)}\int_{-\infty}^{0}\xi_{ijq}\frac{w_{j}}{w_{i}}\frac{\big{|}(\phi_{j}-\psi_{ j})(s)\big{|}}{\mathrm{e}^{-\epsilon s}}d\eta_{ijq}(s)\] \[\quad+\left.\mu_{ijlq}^{(2)}\int_{-\infty}^{0}\widetilde{\xi}_{ ilq}\frac{w_{l}}{w_{i}}\frac{\big{|}(\phi_{l}-\psi_{l})(s)\big{|}}{\mathrm{e}^{- \epsilon s}}d\widetilde{\eta}_{ilq}(s)\right)\right]\] \[\leq\sum_{j,l=1}^{n}\left[\zeta_{i}\sum_{p=1}^{P}|c_{ijlp}(t)| \left(\gamma_{ijlp}^{(1)}\frac{w_{j}}{w_{i}}\|\phi-\psi\|_{\mathrm{e}}\mathrm{ 
e}}\mathrm{e}^{\epsilon\tau_{ijp}(t)}+\gamma_{ijlp}^{(2)}\frac{w_{l}}{w_{i}}\|\phi-\psi\|_{\mathrm{e}}\mathrm{e}^{\epsilon\widetilde{\tau}_{ilp}(t)}\right)\right.\] \[\quad+\left.\varsigma_{i}\sum_{q=1}^{Q}|d_{ijlq}(t)|\bigg{(}\mu_{ijlq}^{(1)}\xi_{ijq}\frac{w_{j}}{w_{i}}\|\phi-\psi\|_{\mathrm{e}}\int_{-\infty}^{0}\mathrm{e}^{-\epsilon s}d\eta_{ijq}(s)+\mu_{ijlq}^{(2)}\widetilde{\xi}_{ilq}\frac{w_{l}}{w_{i}}\|\phi-\psi\|_{\mathrm{e}}\int_{-\infty}^{0}\mathrm{e}^{-\epsilon s}d\widetilde{\eta}_{ilq}(s)\bigg{)}\right].\] As \(\epsilon\leq\alpha\), from (5.6) we have \[\int_{-\infty}^{0}\mathrm{e}^{-\epsilon s}d\eta_{ijq}(s)<1+\nu\quad\text{and}\quad\int_{-\infty}^{0}\mathrm{e}^{-\epsilon s}d\widetilde{\eta}_{ijq}(s)<1+\nu,\] for all \(i,j=1,\ldots,n\). As \(\epsilon\leq\frac{\log(1+\nu)}{\tau+1}\), we also have \[\mathrm{e}^{\epsilon\tau}<\mathrm{e}^{\epsilon(\tau+1)}\leq 1+\nu.\] Consequently \[|\widetilde{f}_{i}(t,\phi)-\widetilde{f}_{i}(t,\psi)|\leq\left[\sum_{j,l=1}^{n}\left(\zeta_{i}\sum_{p=1}^{P}|c_{ijlp}(t)|\left(\gamma_{ijlp}^{(1)}\frac{w_{j}}{w_{i}}+\gamma_{ijlp}^{(2)}\frac{w_{l}}{w_{i}}\right)+\varsigma_{i}\sum_{q=1}^{Q}|d_{ijlq}(t)|\left(\mu_{ijlq}^{(1)}\xi_{ijq}\frac{w_{j}}{w_{i}}+\mu_{ijlq}^{(2)}\widetilde{\xi}_{ilq}\frac{w_{l}}{w_{i}}\right)\right)(1+\nu)\right]\|\phi-\psi\|_{\epsilon},\] and hypothesis (H4) holds with \[\mathcal{L}_{i}(t)=\sum_{j,l=1}^{n}\left(\zeta_{i}\sum_{p=1}^{P}|c_{ijlp}(t)|\left(\gamma_{ijlp}^{(1)}\frac{w_{j}}{w_{i}}+\gamma_{ijlp}^{(2)}\frac{w_{l}}{w_{i}}\right)+\varsigma_{i}\sum_{q=1}^{Q}|d_{ijlq}(t)|\left(\mu_{ijlq}^{(1)}\xi_{ijq}\frac{w_{j}}{w_{i}}+\mu_{ijlq}^{(2)}\widetilde{\xi}_{ilq}\frac{w_{l}}{w_{i}}\right)\right)(1+\nu)\] for all \(i=1,\ldots,n\). As \(\epsilon\leq\nu\) and from (5.5), the hypothesis (H5) also holds and the proof is concluded. Now, we assume that the model (5.1) is periodic, i.e. the following hypothesis holds: **(H2**)**: There is \(\omega>0\) such that, for each \(i,j,l=1,\ldots,n\), \(p=1,\ldots,P\), and \(q=1,\ldots,Q\), \[a_{i}(t,u)=a_{i}(t+\omega,u),\ \ c_{ijlp}(t)=c_{ijlp}(t+\omega),\ \ \tau_{ijp}(t)=\tau_{ijp}(t+\omega),\] \[b_{i}(t,u)=b_{i}(t+\omega,u),\ \ d_{ijlq}(t)=d_{ijlq}(t+\omega),\ \ \widetilde{\tau}_{ijp}(t)=\widetilde{\tau}_{ijp}(t+\omega),\ \text{and}\] \[I_{i}(t)=I_{i}(t+\omega)\] for all \(t\geq 0\) and \(u\in\mathbb{R}\). **Theorem 5.2**.: _Assume the hypotheses (H2**), (H1), (H3*), and (H4**)._ _If there exists \(w=(w_{1},\ldots,w_{n})>0\) such that, for all \(t\in[0,\omega]\) and \(i=1,\ldots,n\),_ \[\beta_{i}(t)>\sum_{j,l=1}^{n}\left[\sum_{p=1}^{P}\zeta_{i}|c_{ijlp}(t)|\left(\frac{w_{j}}{w_{i}}\gamma_{ijlp}^{(1)}+\frac{w_{l}}{w_{i}}\gamma_{ijlp}^{(2)}\right)\right.\] \[\qquad\qquad\left.+\sum_{q=1}^{Q}\varsigma_{i}|d_{ijlq}(t)|\left(\frac{w_{j}}{w_{i}}\mu_{ijlq}^{(1)}\xi_{ijq}+\frac{w_{l}}{w_{i}}\mu_{ijlq}^{(2)}\widetilde{\xi}_{ilq}\right)\right], \tag{5.9}\] _then the model (5.1) has an \(\omega-\)periodic solution._ Proof.: With the change of variables \(y_{i}(t)=w_{i}^{-1}x_{i}(t)\), model (5.1) takes the form (5.7), and hypotheses (H1*), (H2*), and (H3*) hold for model (5.7). For \(\phi,\psi\in BC\), \(t\geq 0\), and \(i=1,\ldots,n\), estimating as in (5.8), from (H4**) and from the properties of \(\eta_{ijq}\) and \(\tilde{\eta}_{ijq}\) we obtain \[|\widetilde{f}_{i}(t,\phi)-\widetilde{f}_{i}(t,\psi)|\leq\mathcal{L}_{i}(t)\|\phi-\psi\|,\] with \[\mathcal{L}_{i}(t)=\sum_{j,l=1}^{n}\left[\zeta_{i}\sum_{p=1}^{P}|c_{ijlp}(t)|\bigg{(}\gamma_{ijlp}^{(1)}\frac{w_{j}}{w_{i}}+\gamma_{ijlp}^{(2)}\frac{w_{l}}{w_{i}}\bigg{)}+\varsigma_{i}\sum_{q=1}^{Q}|d_{ijlq}(t)|\bigg{(}\mu_{ijlq}^{(1)}\xi_{ijq}\frac{w_{j}}{w_{i}}+\mu_{ijlq}^{(2)}\widetilde{\xi}_{ilq}\frac{w_{l}}{w_{i}}\bigg{)}\right],\] thus (H4*) holds for model (5.7). By hypothesis (5.9), (H5*) also holds and the conclusion follows from Theorem 4.2.
Immediately from Theorems 5.1 and 5.2, we have the following result. **Corollary 5.3**.: _Assume (H1), (H2) with \(D_{i}\) an \(\omega-\)periodic continuous function, (H2**), (H3*), (H4**), and (5.2)._ _If there exists \(w=(w_{1},\ldots,w_{n})>0\) such that, for all \(t\in[0,\omega]\) and \(i=1,\ldots,n\), inequality (5.9) holds and_ \[\underline{a}_{i}\Big{(}\beta_{i}(t)+D_{i}(t)\Big{)}>\overline{a}_{i}\sum_{j,l=1}^{n}\left[\sum_{p=1}^{P}\zeta_{i}|c_{ijlp}(t)|\left(\frac{w_{j}}{w_{i}}\gamma_{ijlp}^{(1)}+\frac{w_{l}}{w_{i}}\gamma_{ijlp}^{(2)}\right)\right.\] \[\qquad\qquad\qquad\qquad\left.+\sum_{q=1}^{Q}\varsigma_{i}|d_{ijlq}(t)|\left(\frac{w_{j}}{w_{i}}\mu_{ijlq}^{(1)}\xi_{ijq}+\frac{w_{l}}{w_{i}}\mu_{ijlq}^{(2)}\widetilde{\xi}_{ilq}\right)\right], \tag{5.10}\] _then the model (5.1) has an \(\omega-\)periodic solution which is globally exponentially stable._ Proof.: From (H2**), the functions \(\tau_{ijp}\), \(\widetilde{\tau}_{ijp}\) are bounded. Moreover, from (H2**) and (H3*) we know that \(\beta_{i}\), \(c_{ijlp}\), and \(d_{ijlq}\) are \(\omega-\)periodic functions. As \(D_{i}\) are also \(\omega-\)periodic, then there is \(\varepsilon>0\) such that inequality (5.3) holds and the conclusion comes from Theorems 5.1 and 5.2. Now, we consider model (5.1) with amplification functions \(a_{i}\) that do not explicitly depend on time \(t\), i.e. \[x_{i}^{\prime}(t) = a_{i}(x_{i}(t))\bigg{[}-b_{i}(t,x_{i}(t))+F_{i}\bigg{(}\sum_{p=1}^{P}\sum_{j,l=1}^{n}c_{ijlp}(t)h_{ijlp}\Big{(}x_{j}(t-\tau_{ijp}(t)),x_{l}(t-\widetilde{\tau}_{ilp}(t))\Big{)}\bigg{)} \tag{5.11}\] \[+G_{i}\bigg{(}\sum_{q=1}^{Q}\sum_{j,l=1}^{n}d_{ijlq}(t)f_{ijlq}\left(\int_{-\infty}^{0}g_{ijq}(x_{j}(t+s))d\eta_{ijq}(s),\int_{-\infty}^{0}\widetilde{g}_{ilq}(x_{l}(t+s))d\widetilde{\eta}_{ilq}(s)\right)\bigg{)}\] \[+I_{i}(t)\bigg{]},\ \ \ t\geq 0,\ \ \ i=1,\ldots,n.\] From Corollary 5.3 we have the following result. **Corollary 5.4**.: _Assume (H1), (H2**), (H3*), (H4**), and (5.2). If there exists \(w=(w_{1},\ldots,w_{n})>0\) such that, for all \(t\in[0,\omega]\) and \(i=1,\ldots,n\),_ \[\underline{a}_{i}\beta_{i}(t)>\overline{a}_{i}\sum_{j,l=1}^{n}\left[\sum_{p=1}^{P}\zeta_{i}|c_{ijlp}(t)|\left(\frac{w_{j}}{w_{i}}\gamma_{ijlp}^{(1)}+\frac{w_{l}}{w_{i}}\gamma_{ijlp}^{(2)}\right)\right.\] \[\left.+\sum_{q=1}^{Q}\varsigma_{i}|d_{ijlq}(t)|\left(\frac{w_{j}}{w_{i}}\mu_{ijlq}^{(1)}\xi_{ijq}+\frac{w_{l}}{w_{i}}\mu_{ijlq}^{(2)}\widetilde{\xi}_{ilq}\right)\right], \tag{5.12}\] _then the model (5.11) has an \(\omega-\)periodic solution which is globally exponentially stable._ Proof.: Noting that \(a_{i}(t,u)=a_{i}(u)\) for all \(t,u\in\mathbb{R}\) and \(i=1,\ldots,n\), the hypothesis (H2) trivially holds with \(D_{i}(t)=0\). Consequently inequality (5.12) implies (5.9) and the result comes from Corollary 5.3.
For model (5.11) under the hypotheses (H2**), (H1), (H3*), and (H4**), consider the constants \[\underline{\beta}_{i}:=\min_{t\in[0,\omega]}\beta_{i}(t),\quad\overline{c}_{ijlp}:=\max_{t\in[0,\omega]}|c_{ijlp}(t)|,\quad\text{and}\quad\overline{d}_{ijlq}:=\max_{t\in[0,\omega]}|d_{ijlq}(t)|, \tag{5.13}\] for each \(i,j,l=1,\ldots,n\), \(p=1,\ldots,P\), \(q=1,\ldots,Q\), and the square real matrix \(\mathcal{M}\) defined by \[\mathcal{M}:=diag\big{(}\underline{a}_{1}\underline{\beta}_{1},\ldots,\underline{a}_{n}\underline{\beta}_{n}\big{)}-\big{[}\mathfrak{m}_{ij}\big{]}_{i,j=1}^{n}, \tag{5.14}\] where, for each \(i,j=1,\ldots,n\), \[\mathfrak{m}_{ij}:=\overline{a}_{i}\sum_{l=1}^{n}\left(\zeta_{i}\sum_{p=1}^{P}\left(\overline{c}_{ijlp}\gamma_{ijlp}^{(1)}+\overline{c}_{iljp}\gamma_{iljp}^{(2)}\right)+\varsigma_{i}\sum_{q=1}^{Q}\left(\overline{d}_{ijlq}\mu_{ijlq}^{(1)}\xi_{ijq}+\overline{d}_{iljq}\mu_{iljq}^{(2)}\widetilde{\xi}_{ijq}\right)\right).\] **Corollary 5.5**.: _Assume (H1), (H2**), (H3*), (H4**), and (5.2). If \(\mathcal{M}\) is a non-singular M-matrix, then the model (5.11) has an \(\omega-\)periodic solution which is globally exponentially stable._ Proof.: As \(\mathcal{M}\) is a non-singular M-matrix, there exists (see [9]) \(w=(w_{1},\ldots,w_{n})>0\) such that \(\mathcal{M}w^{T}>0\), i.e., \[\underline{a}_{i}\underline{\beta}_{i}w_{i}>\sum_{j=1}^{n}w_{j}\left[\overline{a}_{i}\sum_{l=1}^{n}\left(\zeta_{i}\sum_{p=1}^{P}\left(\overline{c}_{ijlp}\gamma_{ijlp}^{(1)}+\overline{c}_{iljp}\gamma_{iljp}^{(2)}\right)+\varsigma_{i}\sum_{q=1}^{Q}\left(\overline{d}_{ijlq}\mu_{ijlq}^{(1)}\xi_{ijq}+\overline{d}_{iljq}\mu_{iljq}^{(2)}\widetilde{\xi}_{ijq}\right)\right)\right],\] for all \(i=1,\ldots,n\), which is equivalent to \[\underline{a}_{i}\underline{\beta}_{i}>\overline{a}_{i}\sum_{j,l=1}^{n}\left[\zeta_{i}\sum_{p=1}^{P}\left(\overline{c}_{ijlp}\frac{w_{j}}{w_{i}}\gamma_{ijlp}^{(1)}+\overline{c}_{iljp}\gamma_{iljp}^{(2)}\frac{w_{j}}{w_{i}}\right)\right.\] \[\left.+\varsigma_{i}\sum_{q=1}^{Q}\left(\overline{d}_{ijlq}\frac{w_{j}}{w_{i}}\mu_{ijlq}^{(1)}\xi_{ijq}+\overline{d}_{iljq}\mu_{iljq}^{(2)}\widetilde{\xi}_{ijq}\frac{w_{j}}{w_{i}}\right)\right]\] and consequently \[\underline{a}_{i}\underline{\beta}_{i}>\overline{a}_{i}\sum_{j,l=1}^{n}\left[\zeta_{i}\sum_{p=1}^{P}\left(\overline{c}_{ijlp}\frac{w_{j}}{w_{i}}\gamma^{(1)}_{ijlp}+\overline{c}_{ijlp}\gamma^{(2)}_{ijlp}\frac{w_{l}}{w_{i}}\right)\right.\] \[\left.\qquad\qquad+\varsigma_{i}\sum_{q=1}^{Q}\left(\overline{d}_{ijlq}\frac{w_{j}}{w_{i}}\mu^{(1)}_{ijlq}\xi_{ijq}+\overline{d}_{ijlq}\mu^{(2)}_{ijlq}\widetilde{\xi}_{ilq}\frac{w_{l}}{w_{i}}\right)\right]. \tag{5.15}\] From (5.13) and (5.15) we obtain (5.12). Now the result follows from Corollary 5.4. **Example 5.1**.: Consider the following low-order Cohen-Grossberg neural network model \[x_{i}^{\prime}(t)=a_{i}(x_{i}(t))\left[-b_{i}(t,x_{i}(t))+G_{i}\left(\sum_{j=1}^{n}c_{ij}(t)\int_{0}^{+\infty}x_{j}(t-u)K_{ij}(u)du\right)\right], \tag{5.16}\] for \(t\geq 0\) and \(i=1,\ldots,n\), where \(a_{i}:\mathbb{R}\rightarrow(0,+\infty)\), \(b_{i}:[0,+\infty)\times\mathbb{R}\rightarrow\mathbb{R}\), \(c_{ij}:[0,+\infty)\rightarrow\mathbb{R}\), \(G_{i}:\mathbb{R}\rightarrow\mathbb{R}\), and \(K_{ij}:[0,+\infty)\rightarrow[0,+\infty)\) are continuous functions such that \[\int_{0}^{+\infty}K_{ij}(u)du=1, \tag{5.17}\] for all \(i,j=1,\ldots,n\).
The model (5.16) is a generalization of the following autonomous static neural network model \[x_{i}^{\prime}(t)=-x_{i}(t)+G_{i}\left(\sum_{j=1}^{n}c_{ij}\int_{0}^{+\infty}x_{j}(t-u)K_{ij}(u)du\right),\,t\geq 0,\,i=1,\ldots,n, \tag{5.18}\] for which the existence and global asymptotic stability of an equilibrium point were studied in [27]. Defining, for each \(i,j=1,\ldots,n\), \(\eta_{ij}:(-\infty,0]\rightarrow\mathbb{R}\) by \[\eta_{ij}(s)=\int_{-\infty}^{s}K_{ij}(-v)dv,\quad s\in(-\infty,0], \tag{5.19}\] we have \(\eta_{ij}\) non-decreasing and, from (5.17), \(\eta_{ij}(0)-\eta_{ij}(-\infty)=1\). Consequently, the model (5.16) can be written in the form \[x_{i}^{\prime}(t)=a_{i}(x_{i}(t))\left[-b_{i}(t,x_{i}(t))+G_{i}\left(\sum_{j=1}^{n}c_{ij}(t)\int_{-\infty}^{0}x_{j}(t+s)d\eta_{ij}(s)\right)\right],\,t\geq 0,\,i=1,\ldots,n,\] which is a particular situation of (5.11). Consequently, from Corollary 5.4, we obtain the following result. **Corollary 5.6**.: _Assume (H1), (H3*) and, for each \(i,j=1,\ldots,n\), that the functions \(t\mapsto b_{i}(t,u)\) and \(c_{ij}\) are \(\omega-\)periodic, the function \(G_{i}\) is Lipschitz with Lipschitz constant \(\varsigma_{i}>0\), and there is \(\alpha>0\) such that_ \[\int_{0}^{+\infty}K_{ij}(u)\mathrm{e}^{\alpha u}du<+\infty. \tag{5.20}\] _If there exists \(w=(w_{1},\ldots,w_{n})>0\) such that, for all \(t\in[0,\omega]\) and \(i=1,\ldots,n\),_ \[\underline{a}_{i}\beta_{i}(t)w_{i}>\overline{a}_{i}\sum_{j=1}^{n}\varsigma_{i}|c_{ij}(t)|w_{j}, \tag{5.21}\] _then the model (5.16) has an \(\omega-\)periodic solution which is globally exponentially stable._ For model (5.18), condition (5.21) reads as \[w_{i}>\sum_{j=1}^{n}\varsigma_{i}|c_{ij}|w_{j},\quad i=1,\ldots,n,\] which is equivalent to the matrix \[\mathcal{N}=I_{n}-\big{[}\varsigma_{i}|c_{ij}|\big{]}_{i,j=1}^{n},\] where \(I_{n}\) denotes the identity matrix of dimension \(n\), being a non-singular M-matrix (see [9]). Consequently, we also have the following result. **Corollary 5.7**.: _For each \(i,j=1,\ldots,n\), assume that the function \(G_{i}\) is Lipschitz with Lipschitz constant \(\varsigma_{i}>0\) and that (5.20) holds._ _If \(\mathcal{N}\) is a non-singular M-matrix, then the model (5.18) has an equilibrium point which is globally exponentially stable._ **Remark 5.8**.: _We remark that in [27] the existence and global asymptotic stability of an equilibrium point of (5.18) were obtained assuming stronger conditions on \(G_{i}\) than being Lipschitz, that \(\mathcal{N}\) is a non-singular M-matrix, and_ \[\int_{0}^{+\infty}uK_{ij}(u)du<+\infty,\] _instead of (5.20)._ **Example 5.2**.: Consider the following low-order Cohen-Grossberg neural network model, \[x_{i}^{\prime}(t) =a_{i}(t,x_{i}(t))\bigg{[}-b_{i}(t,x_{i}(t))+\sum_{j=1}^{n}c_{ij1}(t)h_{ij1}(x_{j}(t))+\sum_{j=1}^{n}c_{ij2}(t)h_{ij2}(x_{j}(t-\tau_{ij}(t)))\] \[\quad+\sum_{j=1}^{n}d_{ij}(t)\int_{0}^{+\infty}g_{ij}(x_{j}(t-u))K_{ij}(u)du+I_{i}(t)\bigg{]}\,,\,t\geq 0,\,i=1,\ldots,n, \tag{5.22}\] where, for each \(i,j=1,\ldots,n\), \(a_{i}:[0,+\infty)\times\mathbb{R}\rightarrow(0,+\infty)\), \(b_{i}:[0,+\infty)\times\mathbb{R}\rightarrow\mathbb{R}\), \(c_{ij1},c_{ij2},d_{ij},I_{i}:[0,+\infty)\rightarrow\mathbb{R}\), \(\tau_{ij}:[0,+\infty)\rightarrow[0,+\infty)\), \(h_{ij1},h_{ij2},g_{ij}:\mathbb{R}\rightarrow\mathbb{R}\), and \(K_{ij}:[0,+\infty)\rightarrow[0,+\infty)\) are continuous functions such that \(K_{ij}\) verifies (5.17). Sufficient conditions for the exponential stability of (5.22) were obtained in [29, 22].
The existence and global asymptotic stability of a periodic solution of (5.22), with finite delays, were studied in [15]. The following particular situation of (5.22), \[x_{i}^{\prime}(t) =a_{i}(x_{i}(t))\bigg{[}-b_{i}(x_{i}(t))+\sum_{j=1}^{n}c_{ij1}(t)h_{j1}(x_{j}(t))+\sum_{j=1}^{n}c_{ij2}(t)h_{j2}(x_{j}(t-\tau_{ij}(t)))\] \[+\sum_{j=1}^{n}d_{ij}(t)\int_{0}^{+\infty}g_{j}(x_{j}(t-u))K_{ij}(u)du+I_{i}(t)\bigg{]}\,,\,t\geq 0,\,i=1,\ldots,n, \tag{5.23}\] was studied in [36], where conditions for the existence and global exponential stability of a pseudo almost automorphic solution were established. Considering the definition of the bounded variation function \(\eta_{ij}\) as in (5.19), model (5.22) is a particular situation of (5.1), thus from Theorem 5.1 we obtain the following stability criterion. **Corollary 5.9**.: _Assume (H1)-(H3) and, for each \(i,j=1,\ldots,n\), that \(\tau_{ij}\) is bounded, \(K_{ij}\) verifies (5.17) and (5.20), and \(h_{ij1},h_{ij2},g_{ij}\) are Lipschitz functions with Lipschitz constants \(\gamma_{ij1},\gamma_{ij2},\xi_{ij}>0\), respectively._ _If there exist \(\varepsilon>0\) and \(w=(w_{1},\ldots,w_{n})>0\) such that, for all \(t\geq 0\) and \(i=1,\ldots,n\),_ \[\underline{a}_{i}\Big{(}\beta_{i}(t)+D_{i}(t)\Big{)}w_{i}-\overline{a}_{i}\sum_{j=1}^{n}\bigg{[}\Big{(}|c_{ij1}(t)|\gamma_{ij1}+|c_{ij2}(t)|\gamma_{ij2}+|d_{ij}(t)|\xi_{ij}\Big{)}w_{j}\bigg{]}>\varepsilon, \tag{5.24}\] _then the model (5.22) is globally exponentially stable._ **Remark 5.10**.: _In [29] the exponential stability of (5.22) was established assuming the hypotheses in Corollary 5.9 with the condition_ \[\underline{a}_{i}\Big{(}\beta_{i}(t)+D_{i}(t)\Big{)}-\sum_{j=1}^{n}\overline{a}_{j}\bigg{[}\Big{(}|c_{ij1}(t)|\gamma_{ij1}+|c_{ij2}(t)|\gamma_{ij2}+|d_{ij}(t)|\xi_{ij}\Big{)}\frac{w_{j}}{w_{i}}\bigg{]}>\varepsilon, \tag{5.25}\] _instead of (5.24)._ Model (5.23) is a particular situation of (5.11); from Corollaries 5.4 and 5.9 and Remark 5.10, we obtain the following result. **Corollary 5.11**.: _Assume (H1), (H3*) with \(\beta_{i}(t)\equiv\beta_{i}\), and, for each \(i,j=1,\ldots,n\), that \(c_{ij1},c_{ij2},\tau_{ij},d_{ij},I_{i}\) are \(\omega-\)periodic for some \(\omega>0\), \(K_{ij}\) verifies (5.17) and (5.20), and \(h_{j1},h_{j2},g_{j}\) are Lipschitz functions with Lipschitz constants \(\gamma_{j1},\gamma_{j2},\xi_{j}>0\), respectively._ _If there exists \(w=(w_{1},\ldots,w_{n})>0\) such that one of the following conditions_ \[\underline{a}_{i}\beta_{i}w_{i}-\overline{a}_{i}\sum_{j=1}^{n}\bigg{[}\Big{(}|c_{ij1}(t)|\gamma_{j1}+|c_{ij2}(t)|\gamma_{j2}+|d_{ij}(t)|\xi_{j}\Big{)}w_{j}\bigg{]}>0, \tag{5.26}\] _or_ \[\underline{a}_{i}\beta_{i}w_{i}-\sum_{j=1}^{n}\overline{a}_{j}\bigg{[}\Big{(}|c_{ij1}(t)|\gamma_{j1}+|c_{ij2}(t)|\gamma_{j2}+|d_{ij}(t)|\xi_{j}\Big{)}w_{j}\bigg{]}>0, \tag{5.27}\] _holds for all \(t\in[0,\omega]\) and \(i=1,\ldots,n\), then the model (5.23) has an \(\omega-\)periodic solution which is globally exponentially stable._ **Remark 5.12**.: _In [36] the existence of a unique pseudo almost automorphic solution of model (5.23), with \(c_{ij1},c_{ij2},\tau_{ij},d_{ij}\) being pseudo almost automorphic functions, was obtained assuming (H1), (H3*) with \(\beta_{i}(t)\equiv\beta_{i}\), \(K_{ij}\) verifying (5.17) and (5.20), \(h_{j1},h_{j2},g_{j}\) Lipschitz functions with Lipschitz constants \(\gamma_{j1},\gamma_{j2},\xi_{j}>0\), and_ \[\underline{a}_{i}\beta_{i}w_{i}-\sum_{j=1}^{n}\overline{a}_{j}\bigg{[}\Big{(}
\overline{c}_{ij1}\gamma_{j1}+\overline{c}_{ij2}\gamma_{j2}+\overline{d}_{ij}\xi_{j}\Big{)}w_{j}\bigg{]}>0, \tag{5.28}\] _where \(\overline{c}_{ijp}=\sup|c_{ijp}(t)|\) and \(\overline{d}_{ij}=\sup|d_{ij}(t)|\) for \(i,j=1,\ldots,n\) and \(p=1,2\)._ _All periodic functions are pseudo almost automorphic functions. Thus Corollary 5.11 is not a generalization of [36, Theorem 3.1]. However, in the case of (5.23) being a periodic model, the existence criterion in Corollary 5.11 is better than the corresponding criterion in [36, Theorem 3.1]._ **Example 5.3**.: Consider the following high-order Cohen-Grossberg neural network model, \[x_{i}^{\prime}(t) =a_{i}(x_{i}(t))\bigg{[}-b_{i}(x_{i}(t))+\sum_{j=1}^{n}c_{ij}(t)f_{j}(\rho_{j}x_{j}(t))+\sum_{j=1}^{n}d_{ij11}(t)f_{j}\left(\rho_{j}\int_{0}^{+\infty}K_{ij}(u)x_{j}(t-u)du\right)\] \[+\sum_{j,l=1}^{n}d_{ijl2}(t)f_{j}\left(\rho_{j}\int_{0}^{+\infty}K_{ij}(u)x_{j}(t-u)du\right)f_{l}\left(\rho_{l}\int_{0}^{+\infty}K_{il}(u)x_{l}(t-u)du\right)+I_{i}(t)\bigg{]}\,, \tag{5.29}\] for \(\,t\geq 0,\,i=1,\ldots,n\), where, for each \(i,j,l=1,\ldots,n\) and \(q=1,2\), \(\rho_{i}>0\), \(a_{i}:\mathbb{R}\to(0,+\infty)\), \(b_{i}:\mathbb{R}\to\mathbb{R}\), \(c_{ij},d_{ijlq},I_{i}:[0,+\infty)\to\mathbb{R}\), \(f_{j}:\mathbb{R}\to\mathbb{R}\), and \(K_{ij}:[0,+\infty)\to[0,+\infty)\) are continuous functions such that \(K_{ij}\) verifies (5.17). The existence and global exponential stability of a periodic solution of (5.29) were studied in [23]. Considering the definition of the bounded variation functions \(\eta_{ij}\) as in (5.19), model (5.29) is a particular situation of (5.11), thus from Corollary 5.4 we obtain the following stability criterion. **Corollary 5.13**.: _Assume (H1), (H3*) with \(\beta_{i}(t)\equiv\beta_{i}\), and, for each \(i,j,l=1,\ldots,n\) and \(q=1,2\), \(c_{ij},d_{ijlq},I_{i}\) are \(\omega-\)periodic for some \(\omega>0\), \(K_{ij}\) verifies (5.17) and (5.20), and there are \(M_{j}>0\) and \(\mu_{j}>0\) such that_ \[|f_{j}(u)-f_{j}(v)|\leq\mu_{j}|u-v|\quad\text{and}\quad|f_{j}(u)|\leq M_{j},\quad\forall u,v\in\mathbb{R},\,j=1,\ldots,n.\] _If there exists \(w=(w_{1},\ldots,w_{n})>0\) such that, for all \(t\geq 0\) and \(i=1,\ldots,n\),_ \[\underline{a}_{i}\beta_{i}w_{i}>\overline{a}_{i}\sum_{j=1}^{n}\bigg{[}|c_{ij}(t)|\mu_{j}\rho_{j}w_{j}+|d_{ij11}(t)|\rho_{j}\mu_{j}w_{j}\] \[+\sum_{l=1}^{n}|d_{ijl2}(t)|\Big{(}w_{j}M_{j}\mu_{j}\rho_{j}+w_{l}M_{l}\mu_{l}\rho_{l}\Big{)}\bigg{]}, \tag{5.30}\] _then the model (5.29) has an \(\omega-\)periodic solution which is globally exponentially stable._ Proof.: If we take in model (5.11) \(P=1\), \(Q=2\), and, for each \(i,j,l=1,\ldots,n\), \(q=1,2\), the functions \(F_{i}(u)=G_{i}(u)=u\), \(\tau_{ij1}(t)=\widetilde{\tau}_{ij1}(t)=0\), \(h_{ijl1}(u_{1},u_{2})=f_{j}(\rho_{j}u_{1})\), \(c_{ij11}(t)=c_{ij}(t)\), \(c_{ijl1}(t)=d_{ijl1}(t)=0\) for \(l\neq 1\), \(f_{ij11}(u_{1},u_{2})=f_{j}(\rho_{j}u_{1})\), \(f_{ijl2}(u_{1},u_{2})=f_{j}(\rho_{j}u_{1})f_{j}(\rho_{j}u_{2})\), \(g_{ijq}(u)=\widetilde{g}_{ijq}(u)=u\), and \(\tilde{\eta}_{ijq}(s)=\eta_{ijq}(s)\) defined by (5.19) for all \(u_{1},u_{2}\in\mathbb{R}\) and \(s\leq 0\), then we obtain model (5.29).
For all \(u_{1},u_{2},v_{1},v_{2}\in\mathbb{R}\), we have \[|h_{ijl1}(u_{1},u_{2})-h_{ijl1}(v_{1},v_{2})| =|f_{j}(\rho_{j}u_{1})-f_{j}(\rho_{j}v_{1})|\leq\rho_{j}\mu_{j}|u_{1}-v_{1}|,\] \[|f_{ij11}(u_{1},u_{2})-f_{ij11}(v_{1},v_{2})| =|f_{j}(\rho_{j}u_{1})-f_{j}(\rho_{j}v_{1})|\leq\rho_{j}\mu_{j}|u_{1}-v_{1}|,\] and \[|f_{ijl2}(u_{1},u_{2})-f_{ijl2}(v_{1},v_{2})| =|f_{j}(\rho_{j}u_{1})f_{j}(\rho_{j}u_{2})-f_{j}(\rho_{j}v_{1})f_{j}(\rho_{j}v_{2})|\] \[\leq|f_{j}(\rho_{j}u_{1})-f_{j}(\rho_{j}v_{1})||f_{j}(\rho_{j}u_{2})|+|f_{j}(\rho_{j}v_{1})||f_{j}(\rho_{j}u_{2})-f_{j}(\rho_{j}v_{2})|\] \[\leq M_{j}\rho_{j}\mu_{j}|u_{1}-v_{1}|+M_{j}\rho_{j}\mu_{j}|u_{2}-v_{2}|,\] for all \(i,j,l=1,\ldots,n\), thus hypothesis (H4**) holds. Condition (5.2) follows from (5.20) and the inequality (5.12) reads as (5.30). Finally, the result follows from Corollary 5.4. **Remark 5.14**.: _In [23], sufficient conditions for the existence and global exponential stability of an \(\omega-\)periodic solution of (5.29) were presented. However, it is important to mention that the proof of the main result is not correct. Specifically, the way inequality [23, (3.10)] is obtained is problematic. In fact, assuming the uniqueness of the solution of (5.29) with initial condition \(x_{0}=\psi\) for \(\psi\in BC\), denoting this solution by \(x(t,0,\psi)\), and defining \(P:BC\to BC\) by \(P(\psi)=x_{\omega}(\cdot,0,\psi)\), we always have_ \[\|P^{N}(\psi_{1})-P^{N}(\psi_{2})\| =\|x_{N\omega}(\cdot,0,\psi_{1})-x_{N\omega}(\cdot,0,\psi_{2})\|\] \[=\sup_{s\leq 0}\|x(N\omega+s,0,\psi_{1})-x(N\omega+s,0,\psi_{2})\|\] \[\geq\|\psi_{1}-\psi_{2}\|\] _for all \(\psi_{1},\psi_{2}\in BC\) and \(N\in\mathbb{N}\), since model (5.29) is \(\omega-\)periodic._ ## 6 Numerical Example Here, we present a numerical example to illustrate the applicability of some of the new results given in this work. The system \[x_{1}^{\prime}(t)= \left(\frac{1}{48}\sin\left(x_{1}(t)\right)+\frac{7}{48}\right)\bigg{[}-(9+\sin(t))x_{1}(t)+c\cos(t)\arctan\left(x_{1}(t-\sin(t))\right) \tag{6.1}\] \[\cdot\arctan\left(x_{2}(t-\cos(t))\right)+d\sin(t)\int_{0}^{+\infty}\mathrm{e}^{-u}x_{2}(t-u)du+\cos(t)\bigg{]}\] \[x_{2}^{\prime}(t)= \left(2+\cos\left(x_{2}(t)\right)\right)\bigg{[}-(2+\cos(t))x_{2}(t)+\hat{c}\sin(t)\arctan\left(x_{1}(t-\cos(t))\right)\] \[+\hat{d}\cos(t)\tanh\left(\int_{0}^{+\infty}\mathrm{e}^{-u}x_{1}(t-u)du\right)\tanh\left(\int_{0}^{+\infty}\mathrm{e}^{-u}x_{2}(t-u)du\right)+\mathrm{e}^{\sin(t)}\bigg{]}\] for \(t\geq 0\), where \(c,d,\hat{c},\hat{d}\in\mathbb{R}\), is a \(2\pi-\)periodic example of a high-order Cohen-Grossberg neural network model. Defining \(\eta_{ij1},\widetilde{\eta}_{ij1}:(-\infty,0]\to\mathbb{R}\) by \[\eta_{ij1}(s)=\widetilde{\eta}_{ij1}(s)=\int_{-\infty}^{s}\mathrm{e}^{v}dv,\hskip 14.226378pts\in(-\infty,0],\] system (6.1) is a particular situation of (5.11). However, (6.1) is not a particular case of (5.29), thus the model studied in [23] is not general enough to include (6.1) as a particular example.
Following the notations in (5.11) and (5.13), we have \(n=2\), \(P=Q=1\), \(\underline{a}_{1}=\frac{1}{8}\), \(\overline{a}_{1}=\frac{1}{6}\), \(\underline{a}_{2}=1\), \(\overline{a}_{2}=3\), \(\underline{\beta}_{1}=8\), \(\underline{\beta}_{2}=1\), \(\zeta_{i}=\varsigma_{i}=1\), \(\gamma^{(1)}_{1121}=\gamma^{(2)}_{1121}=\frac{\pi}{2}\), \(\gamma^{(1)}_{2111}=1\), \(\mu^{(2)}_{1211}=\mu^{(1)}_{2121}=\mu^{(2)}_{2121}=1\), \(I_{1}(t)=\cos(t)\), \(I_{2}(t)=\mathrm{e}^{\sin(t)}\), \(c_{1121}(t)=c\cos(t)\), \(c_{2111}(t)=\hat{c}\sin(t)\), \(d_{1211}(t)=d\sin(t)\), \(d_{2121}(t)=\hat{d}\cos(t)\), and all other \(c_{ijl1}(t)=d_{ijl1}(t)=0\), for \(i,j,l=1,2\). Consequently, example (6.1) is \(2\pi-\)periodic and the matrix \(\mathcal{M}\), defined in (5.14), has the form \[\mathcal{M}=\left[\begin{array}{cc}1-\frac{\pi}{16}|c|&-\frac{1}{2}\left(\frac{\pi}{2}|c|+|d|\right)\\ -3\left(\frac{\pi}{2}|\hat{c}|+|\hat{d}|\right)&1-3|\hat{d}|\end{array}\right].\] Condition (5.2) trivially holds with \(\vartheta\in(0,1)\). Consequently, Corollary 5.5 assures the existence and exponential stability of a \(2\pi-\)periodic solution of (6.1) provided that \(\mathcal{M}\) is a non-singular M-matrix. For example, if we consider \(c=\frac{1}{\pi}\), \(d=\frac{1}{100}\), \(\hat{c}=\frac{1}{30\pi}\), \(\hat{d}=\frac{1}{30}\), we have \[\mathcal{M}=\left[\begin{array}{cc}\frac{15}{16}&-\frac{101}{200}\\ \\ -\frac{1}{5}&\frac{9}{10}\end{array}\right],\] which is a non-singular M-matrix. **Acknowledgments.** This work was partially supported by Fundação para a Ciência e a Tecnologia (Portugal) within the Projects UIDB/00013/2020, UIDP/00013/2020 of CMAT-UM (José J. Oliveira), and Project UIDB/00212/2020 of CMA-UBI (Ahmed Elmwafy and César M. Silva).
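The M-matrix claim for the numerical matrix above is easy to verify computationally. The following minimal sketch (our addition, assuming NumPy is available; all variable names are ours) checks that \(\mathcal{M}\) is a Z-matrix with positive leading principal minors, which characterizes non-singular M-matrices, and exhibits a vector \(w>0\) with \(\mathcal{M}w>0\), the property used in the proof of Corollary 5.5.

```python
import numpy as np

# Numerical matrix M from the example (c = 1/pi, d = 1/100, c_hat = 1/(30*pi), d_hat = 1/30).
M = np.array([[15/16, -101/200],
              [-1/5,   9/10]])

# A Z-matrix (non-positive off-diagonal entries) is a non-singular M-matrix
# iff all its leading principal minors are positive.
off_diag_ok = np.all(M[~np.eye(2, dtype=bool)] <= 0)
minors_ok = all(np.linalg.det(M[:k, :k]) > 0 for k in (1, 2))
print("Z-matrix:", off_diag_ok, "| positive leading minors:", minors_ok)

# Equivalently, there exists w > 0 with M @ w > 0; e.g. solve M w = (1, 1)^T.
w = np.linalg.solve(M, np.ones(2))
print("w =", w, "| w > 0:", np.all(w > 0), "| M @ w > 0:", np.all(M @ w > 0))
```

Running this confirms both equivalent characterizations for the matrix of the example.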
2308.14980
Scope and limitations of ad hoc neural network reconstructions of solar wind parameters
Solar wind properties are determined by the conditions of their solar source region and transport history. Solar wind parameters, such as proton speed, proton density, proton temperature, magnetic field strength, and the charge state composition of oxygen, are used as proxies to investigate the solar source region of the solar wind. The transport and conditions in the solar source region affect several solar wind parameters simultaneously. The observed redundancy could be caused by a set of hidden variables. We test this assumption by determining how well a function of four of the selected solar wind parameters can model the fifth solar wind parameter. If such a function provided a perfect model, then this solar wind parameter would be uniquely determined from hidden variables of the other four parameters. We used a neural network as a function approximator to model unknown relations between the considered solar wind parameters. This approach is applied to solar wind data from the Advanced Composition Explorer (ACE). The neural network reconstructions are evaluated in comparison to observations. Within the limits defined by the measurement uncertainties, the proton density and proton temperature can be reconstructed well. We also found that the reconstruction is most difficult for solar wind streams preceding and following stream interfaces. For all considered solar wind parameters, but in particular the proton density, temperature, and the oxygen charge-state ratio, parameter reconstruction is hindered by measurement uncertainties. The reconstruction accuracy of sector reversal plasma is noticeably lower than that of streamer belt or coronal hole plasma. The fact that the oxygen charge-state ratio, a non-transport-affected property, is difficult to reconstruct may imply that recovering source-specific information from the transport-affected proton plasma properties is challenging.
Maximilian Hecht, Verena Heidrich-Meisner, Lars Berger, Robert F. Wimmer-Schweingruber
2023-08-29T02:14:08Z
http://arxiv.org/abs/2308.14980v1
# Scope and limitations of ad hoc neural network reconstructions of solar wind parameters ###### Abstract Context:Solar wind properties are determined by the conditions of their solar source region and transport history. Solar wind parameters, such as proton speed, proton density, proton temperature, magnetic field strength, and the charge state composition of oxygen, are used as proxies to investigate the solar source region of the solar wind. The solar source region of the solar wind is relevant to both the interaction of this latter with the Earth's magnetosphere and to our understanding of the underlying plasma processes, but the effect of the transport history of the wind is also important. The transport and conditions in the solar source region affect several solar wind parameters simultaneously. Therefore, the typically considered solar wind properties (e.g. proton density and oxygen charge-state composition) carry redundant information. Here, we are interested in exploring this redundancy. Aims:The observed redundancy could be caused by a set of hidden variables that determine the solar wind properties. We test this assumption by determining how well a (arbitrary, non-linear) function of four of the selected solar wind parameters can model the fifth solar wind parameter. If such a function provided a perfect model, then this solar wind parameter would be uniquely determined from hidden variables of the other four parameters and would therefore be redundant. If no reconstruction were possible, this parameter would be likely to contain information unique to the parameters evaluated here. In addition, isolating redundant or unique information contained in these properties guides requirements for in situ measurements and development of computer models. Sufficiently accurate measurements are necessary to understand the solar wind and its origin, to meaningfully classify solar wind types, and to predict space weather effects. Methods:We employed a neural network as a function approximator to model unknown, arbitrary, non-linear relations between the considered solar wind parameters. This approach is not designed to reconstruct the temporal structure of the observations. Instead a time-stable model is assumed and each point of measurement is treated separately. This approach is applied to solar wind data from the Advanced Composition Explorer (ACE). The neural network reconstructions are evaluated in comparison to observations, and the resulting reconstruction accuracies for each reconstructed solar wind parameter are compared while differentiating between different solar wind conditions (i.e. different solar wind types) and between different phases in the solar activity cycle. Therein, solar wind types are identified according to two solar-wind classification schemes based on proton plasma properties. Results:Within the limits defined by the measurement uncertainties, the proton density and proton temperature can be reconstructed well. Each parameter was evaluated with multiple criteria. Overall proton speed was the parameter with the most accurate reconstruction, while the oxygen charge-state ratio and magnetic field strength were most difficult to recover. We also analysed the results for different solar wind types separately and found that the reconstruction is most difficult for solar wind streams preceding and following stream interfaces. 
Conclusions:For all considered solar wind parameters, but in particular the proton density, proton temperature, and the oxygen charge-state ratio, parameter reconstruction is hindered by measurement uncertainties. The proton speed, while being one of the easiest to measure, also seems to carry the highest degree of redundancy with the combination of the four other solar wind parameters. Nevertheless, the reconstruction accuracy for the proton speed is limited by the large measurement uncertainties on the respective input parameters. The reconstruction accuracy of sector reversal plasma is noticeably lower than that of streamer belt or coronal hole plasma. We suspect that this is a result of the effect of stream interaction regions, which strongly influence the proton plasma properties and are typically assigned to sector reversal plasma. The fact that the oxygen charge-state ratio (a non-transport-affected property) is difficult to reconstruct may imply that recovering source-specific information from the transport-affected proton plasma properties is challenging. This underlines the importance of measuring the heavy ion charge-state composition. ## 1 Introduction The properties of the solar wind mostly depend on two factors: the conditions in the solar source region and the transport history of the solar wind until it is measured at a spacecraft. The charge states observed in the solar wind are determined by the electron temperature in the solar source region and are, to a good approximation, frozen in after the solar wind leaves the hot corona (Geiss et al., 1995; Aellig et al., 1997). The initial properties, namely the bulk proton speed, proton density, and proton temperature, also vary with the solar source region of the solar wind, but are, in addition, affected by transport effects. In the context of our study, we consider the following aspects as transport effects: expansion, wave-particle interactions, collisions, and compression regions as found in stream interaction regions (SIRs). Except for expansion, each of them affects the reconstruction of solar wind properties in our study. As a result of expansion, the magnetic field, the proton density, and the proton temperature all decrease with increasing solar distance (Marsch et al., 1982; Perrone et al., 2019). While expansion affects all the solar wind at 1 astronomical unit (AU) in the same way, other transport effects impact different types of solar wind differently. Depending on the solar source region, at least two types of solar wind are typically distinguished (von Steiger et al., 2000; Zhao et al., 2009; Zhao and Fisk, 2010; Xu and Borovsky, 2015; Camporeale et al., 2017). Coronal holes have been identified as the source of the (typically) faster component of the solar wind (Hundhausen et al., 1968; Tu et al., 2005; Schwadron et al., 2005), which is associated with low oxygen charge states, low proton densities, high proton temperatures, and high magnetic field strength. Coronal hole wind is strongly affected by wave-particle interactions, in particular (real and apparent) heating of the proton bulk, which explains the high observed proton temperatures in this solar wind type. Wave-particle interactions are also assumed to be the cause of the differential streaming observed in coronal hole wind (Berger et al., 2011; Kasper et al., 2008, 2012; Janitzek et al., 2016).
An example of the redundant information contained in the different solar wind parameters is provided by the fact that the high observed proton temperature in coronal hole wind is an effect of the presence of Alfvén waves in the solar wind (Marsch et al., 1982). This is illustrated by Heidrich-Meisner et al. (2020), who show that explicit information about the magnetic field is not necessary to identify the same solar wind types as in Xu and Borovsky (2015). Our study investigates such redundancies among solar wind parameters. The properties of the slow solar wind systematically differ from those of the coronal hole wind. Slow solar wind is typically associated with high proton densities, low proton temperatures, low magnetic field, and high (oxygen) charge states (von Steiger et al., 2000; Zhao et al., 2009; Zhao and Fisk, 2010; Xu and Borovsky, 2015). These properties correspond to the properties of closed-field-line regions on the Sun. However, the exact source regions of slow solar wind and the corresponding release mechanisms are still a matter of debate (Schwadron et al., 2005; Sakao et al., 2007; Rouillard et al., 2010; Antiochos et al., 2011; Stakhiv et al., 2015; D'Amicis and Bruno, 2015). At 1 AU, the slow solar wind has experienced just enough collisions that their impact begins to thermalise the velocity distribution function (Kasper et al., 2012; Janitzek et al., 2016) and Alfvénic wave activity is low. Solar wind originating in equatorial coronal holes can also be observed with comparatively low solar wind proton speeds. Such slow coronal hole wind is also called Alfvénic slow solar wind (D'Amicis and Bruno, 2015; Panasenco et al., 2020; Louarn et al., 2021). Observing both coronal hole wind and slow solar wind in the same proton speed range also illustrates that the solar wind proton speed alone is not well suited to characterising solar wind. Other solar wind properties are better tracers of solar wind type. Since observations from Helios (Marsch et al., 1982), Parker Solar Probe (Verniero et al., 2020; Zhao et al., 2021), and Solar Orbiter (Jannet et al., 2021; Carbone et al., 2021) show that waves occur frequently in all types of solar wind close to the Sun, the presence or absence of waves can also be considered as a transport effect that is more important close to the Sun than at greater solar distances. Another important transport effect that is increasingly influential as the solar wind travels further from the Sun is linked to the compression regions in SIRs that develop at the boundary of solar wind streams with different speeds (Smith and Wolfe, 1976; Richardson, 2018). If a faster solar wind stream interacts with a preceding slower solar wind stream, an SIR forms that is characterised by (hot and dense) compression regions in both the slow and the fast participating solar wind stream and a high magnetic field strength at the stream interface. As modelled in Hofmeister et al. (2022), SIRs evolve with radial distance and a decreasing amount of unperturbed fast solar wind is observed with increasing distance. Therefore, in this study, we consider SIRs as a transport effect on the solar wind. Since SIRs are often associated with a change in magnetic field polarity, in the Xu and Borovsky (2015) categorisation, compressed slow solar wind tends to be identified as so-called sector reversal plasma (Heidrich-Meisner et al., 2020).
Although the properties of slow solar wind can be highly variable and coronal hole wind also shows variability (Zhao and Landi, 2014; Heidrich-Meisner et al., 2016) on multiple scales, the respective average properties are systematically correlated with each other (Lepri et al., 2013; McComas et al., 2000; von Steiger et al., 2000). This redundancy hints at a common underlying cause that determines these properties. Under the assumption that all observed solar wind parameters are determined by the same set of hidden variables in the solar corona, it would be possible to reproduce each solar wind parameter from the redundant measurements of the other solar wind parameters. In this study, we test this assumption with the help of a general function approximator to model the (partly) unknown dependencies of the respective solar wind properties. After the solar wind leaves the solar corona, such a relation can be modified by transport effects. Therefore, we investigate the resulting reconstruction separately for different solar wind types with their different respective transport histories. In this way, our study evaluates the degree to which the relationship between solar wind parameters is modified by different transport effects. To this end, we employ feed-forward neural networks as general function approximators (Hornik et al., 1989) and apply our method to solar wind observed at L1. In recent years, the application of machine learning to solar physics questions has become increasingly popular. For example, unsupervised clustering methods are very well suited to solar wind classification (Heidrich-Meisner and Wimmer-Schweingruber, 2018; Amaya et al., 2020). Camporeale et al. (2017) provide a generalisation of the Xu and Borovsky (2015) method, with a supervised learning approach based on Gaussian processes. Ambitious projects aim to predict the solar wind speed directly from remote sensing observations of the solar corona with deep neural network architectures (Upendran et al., 2020; Raju and Das, 2021). Simple neural networks have been successfully applied as general function approximators in many different research areas (e.g. Kuschewski et al. (1993); An and Moon (1993); Smits et al. (1994); Heidrich-Meisner and Igel (2009); Tahmasebi and Hezarkhani (2011)) and are therefore well suited to our purposes. The main goal of our study is to investigate how the relationship between the considered solar wind parameters depends on transport effects. To this end, we compare how accurately each solar wind parameter can be reconstructed from the others under different solar wind conditions, with different dominant transport effects. The relationship between different solar wind properties depends on the solar source region. All effects that further modify this relationship after the solar wind leaves the Sun are considered as transport effects in this study. This includes an increase in the proton temperature due to wave-particle interactions, a systematic increase in the proton speed \(v_{\rm p}\) and the proton temperature \(T_{\rm p}\) derived from moments of proton velocity distributions that contain a beam, and increased proton density, proton temperature, and magnetic field strength in compression regions in SIRs. 
The importance of these transport effects is different for different solar wind types: wave-particle interactions are most important in coronal hole wind; collisions become more relevant as the solar wind slows and becomes more dense (and therefore affect slow solar wind); and compression regions are typically found in sector reversal plasma associated with SIRs. In addition, by investigating the impact of measurement uncertainties on our results, our approach provides guidelines as to which solar wind parameters need to be measured with high accuracy. There are several semi-empirical models of the solar wind (Arge & Pizzo, 2000; Cranmer & Van Ballegooijen, 2005; Cranmer et al., 2007; van der Holst et al., 2010; Pizzo, 2011; Schultz, 2011; van der Holst et al., 2014; Pomoell & Poedts, 2018) that derive the solar wind properties at arbitrary positions in the heliosphere through magneto-hydrodynamic (MHD) simulations based on observations of the solar photosphere or the source surface. This is a challenging task, particularly because the release mechanisms of slow solar wind are still unknown and it is not obvious whether the observations that provide the boundary conditions for these simulations contain all the underlying relevant properties of the solar corona at sufficient resolution. Nevertheless, these models manage to derive the properties of pure slow and coronal hole wind streams with reasonable accuracy. However, SIRs tend to be modelled less accurately. Our approach serves as a minimal sanity check for these kinds of models in two respects: First, we can determine whether or not all of the considered solar wind properties are determined by the same set of (unknown) properties in the solar corona. Second, we attempt to determine the degree to which transport effects obscure a potentially underlying relationship between different solar wind parameters. In addition, our approach can also be applied to alleviate the problem of data gaps in solar wind data sets in cases where only some but not all quantities are available. As the solar wind properties of interest are determined by different instruments, such situations occur repeatedly because the corresponding data gaps usually do not line up. Of particular interest is the question of whether charge-state ratios, such as the oxygen \(O^{7+}\) to \(O^{6+}\) ratio, can be reproduced from the measurements of the proton plasma properties and the magnetic field strength alone. On the one hand, this would imply that a property that is not affected by transport effects but is solely determined by the solar origin can be recovered from the plasma properties that are (strongly) affected by transport effects. On the other hand, this could help in situations where information on the charge states of heavy ions is not available. Measuring the charge-state composition of the solar wind is a challenging task and the resulting instruments have repeatedly suffered from difficulties. Therefore, for many points in time and space within the heliosphere, only observations of the proton plasma properties are available but no charge-state measurements. If charge-state information could be recovered (even with low accuracy), this could be employed to augment existing data sets. Our neural network approach to reconstructing solar wind parameters is described in detail in Sect. 2. This includes the preprocessing applied to the solar wind data from the Advanced Composition Explorer (ACE). In Sect. 3 we present and analyse the results of this reconstruction.
Our results are discussed in Sect. 4.

## 2 Data and methods

We use solar wind data from the Advanced Composition Explorer (ACE) measured by the Solar Wind Electron Proton And Alpha Monitor (SWEPAM, McComas et al. (1998)), the magnetometer (MAG, Smith et al. (1998)), and the Solar Wind Ion Composition Spectrometer (SWICS, Gloeckler et al. (1998)) from 2001-2010. All data products are binned to the native 12 minute time resolution of SWICS, and the only data points considered are those that contain valid entries for the proton speed \(v_{\rm p}\), proton density \(n_{\rm p}\), proton temperature \(T_{\rm p}\) (from SWEPAM), the magnetic field strength \(B\) (from MAG), and the oxygen charge-state ratio, \(n_{O^{7+}}/n_{O^{6+}}\), with \(n_{O^{6+}}\) and \(n_{O^{7+}}\) the \(O^{6+}\) and \(O^{7+}\) densities measured by SWICS. Each 12 minute bin is treated as its own isolated data point. Thus, our method does not exploit or model the temporal structure of the solar wind; we assume a time-independent relationship. We test the limits of this assumption by analysing the time dependence of the results of our approach in Sect. 3.3. The data set used in this study is available at Berger et al. (2023). We chose the 12 min SWICS time resolution as a compromise: it is short enough to capture short-term variations, while being long enough to include charge-state composition data. \(O^{6+}\) is the most abundant ion (heavier than He) that is measured in SWICS. Although \(O^{7+}\) is less abundant, \(n_{O^{7+}}/n_{O^{6+}}\) is among the best-determined quantities from SWICS (together with Fe, which is instrumentally well separated from other similarly abundant ions). This choice was influenced by the observation that the majority of the 12 minute \(n_{O^{7+}}/n_{O^{6+}}\) data points are within reasonable error margins. This can be seen in Figure 5, where the median of the \(\chi^{2}_{\rm red}\) error is just below 1. The Monte Carlo simulations --which estimate the effect of the measurement uncertainty on the neural network reconstruction-- take the counting statistics of O into account and show that the neural network reconstruction is stable against the sometimes large uncertainties in the oxygen charge-state composition. In the following, we select four of the five aforementioned solar wind parameters as input parameters for a general purpose function approximator and use this function approximator to reconstruct the remaining fifth parameter. As a general function approximator, we employ a simple feed-forward neural network, namely a multi-layer perceptron (MLP). This type of neural network is described in more detail in Sect. 2.3. Our objective is formulated as a supervised regression task; that is, the neural network is used to model a functional relationship between input and output data and is provided with correct output data examples during training. Our experimental setup is described in the remainder of this section and is summarised in Fig. 1; the source code is available at Hecht et al. (2023).

### Preprocessing: data selection

Before the data are presented to the neural network, we apply the following preprocessing to the ACE data set. We apply the decadic (base-10) logarithm to \(n_{O^{7+}}/n_{O^{6+}}\). An output variable \(\mathbf{y}_{\rm rec}\in\{v_{\rm p},n_{\rm p},T_{\rm p},B,\log n_{O^{7+}}/n_{O^{6+}}\}\) is then selected for each training scenario. Depending on the chosen output variable \(\mathbf{y}_{\rm rec}\), we construct an input vector \(\mathbf{X}\) from the remaining four solar wind parameters. The output variable \(\mathbf{y}_{\rm rec}\) is the data product that is going to be reconstructed, while the input vector \(\mathbf{X}\) contains the measurements provided for the reconstruction.
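As a minimal illustration of this preprocessing step, the input/output split might be sketched as follows. This is not the original pipeline: the pandas DataFrame and its column names (for example `o7_o6` for \(n_{O^{7+}}/n_{O^{6+}}\)) are hypothetical placeholders.

```python
import numpy as np
import pandas as pd

# Hypothetical column names for the five solar wind parameters.
PARAMS = ["v_p", "n_p", "T_p", "B", "o7_o6"]

def make_xy(data: pd.DataFrame, target: str):
    """Select the output variable y_rec and build the input vector X
    from the remaining four solar wind parameters."""
    df = data.copy()
    df["o7_o6"] = np.log10(df["o7_o6"])            # decadic logarithm of the oxygen ratio
    features = [p for p in PARAMS if p != target]  # the other four parameters
    return df[features].to_numpy(), df[target].to_numpy()

# Example: reconstruct the proton speed from the other four parameters.
# X, y_rec = make_xy(ace_data, target="v_p")
```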
To categorise solar wind types --and thereby implicitly select solar wind observations with different transport histories-- we employ the scheme presented in Xu & Borovsky (2015) or order the data according to the proton-proton collisional age, which (as shown in Heidrich-Meisner et al. (2020)) allows a very similar solar wind classification. The proton-proton collisional age is calculated as \[a_{\rm col,p-p}=6.4\cdot 10^{8}\,\frac{\mathrm{K}^{3/2}\,\mathrm{km}}{\mathrm{cm}^{-3}\,\mathrm{s}}\,\frac{n_{\rm p}}{v_{\rm p}\,T_{\rm p}^{3/2}}\ . \tag{1}\] The Xu & Borovsky (2015) solar wind classification scheme distinguishes between coronal hole wind, two types of slow solar wind, and ejecta. The two types of slow solar wind were defined to distinguish between helmet-streamer and pseudo-streamer plasma. Sector reversal (or helmet-streamer) plasma includes a change in the magnetic field polarity and therefore consists of slow, dense solar wind in the vicinity of stream interaction regions. Streamer belt (or pseudo-streamer) plasma contains the remaining slow solar wind plasma. The fourth category from the Xu & Borovsky (2015) scheme, namely the ejecta category, which is designed to detect interplanetary coronal mass ejections (ICMEs), is disregarded here because it tends to misidentify particularly cold and dense slow solar wind (Sanchez-Diaz et al., 2016) as ejecta. As ICMEs undergo a (most likely) very different release mechanism from the ubiquitous solar wind, we cannot expect the same relations that hold between properties in the solar wind to also hold between properties in ICMEs. Therefore, we do not consider ICMEs in the following analysis. Instead, ICMEs are identified based on the ICME lists from Cane & Richardson (2003); Richardson & Cane (2010) and Jian et al. (2006, 2011) and are subsequently removed from the data set. As the start and end times in both ICME lists are not necessarily well defined, we extended each ICME time interval by six hours at the beginning and the end of each ICME.
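A sketch of Eq. (1) and of the collisional-age proxy for the three solar wind types (using the approximate thresholds quoted later in Sect. 3.2) is given below; the function names are illustrative, and the units (cm\(^{-3}\), km s\(^{-1}\), K) follow the prefactor of Eq. (1).

```python
import numpy as np

def collisional_age(n_p, v_p, T_p):
    """Proton-proton collisional age, Eq. (1).

    n_p in cm^-3, v_p in km/s, T_p in K; the prefactor renders the
    result dimensionless.
    """
    return 6.4e8 * n_p / (v_p * T_p**1.5)

def solar_wind_type(n_p, v_p, T_p):
    """Approximate solar wind type from the collisional-age proxy
    (thresholds as quoted in Sect. 3.2)."""
    log_age = np.log10(collisional_age(n_p, v_p, T_p))
    return np.select(
        [log_age < -0.9, log_age > 0.1],
        ["coronal hole", "sector reversal"],
        default="streamer belt",
    )

# Typical fast-wind values yield a low collisional age:
# solar_wind_type(n_p=3.0, v_p=600.0, T_p=2e5)  -> 'coronal hole'
```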
### Test, training, and validation data sets

To apply and evaluate a supervised learning method, we need to separate the available data set into three different subsets: training, validation, and test data. The training data are used in the training of the neural network, and the validation data are used to estimate the generalisation error of the trained model and to select optimal hyperparameters of the model (see Sect. 2.5). The previously unseen test data set is only used to evaluate the final performance of the model and is the only data set suitable for a comparison between different models. We therefore partition the ACE data set into batches with the approximate length of one Carrington rotation, which is 27.24 d. These batches are then split into test, training, and validation data sets. The selection of training, validation, and test data sets is randomised, but to ensure that each data set is well distributed over time, we apply the following procedure to each of the five two-year time frames in our data set: for each two-year time frame, we randomly select four data batches of Carrington-rotation length as test data. From the remaining ten data batches of each two-year time frame, we select training and validation data sets for a five-fold cross-validation (Allen, 1974; Stone, 1974, 1977); that is, two batches become part of the validation data set and the remaining eight go into the training data set. Five-fold cross-validation helps to improve the generalisation capabilities of supervised learning methods by permuting the role of each fold as a validation data set (while always training on the remaining data). This reduces the dependency of the result on a particular choice of the training or validation data set. Next, the scikit-learn (Pedregosa et al., 2011) StandardScaler is applied to the input vector \(\mathbf{X}\) to ensure that the learning is not inhibited by ill-conditioned data points. This standardises each input dimension by removing the mean and scaling to unit variance. Afterwards, the training data are shuffled. As we consider each 12 minute bin independently, we neglect the temporal information; randomising the order of data points is beneficial for the learning speed of a neural network. After training, each trained neural network is applied to the corresponding validation data set from the five-fold cross-validation. The resulting validation score (see Sect. 2.4) is averaged over the five folds. The average validation score is then used for model selection (see Sect. 2.5). The complete experimental framework is illustrated in Fig. 1.

Figure 1: Workflow of the solar wind parameter reconstruction algorithm. The top shows the steps performed during a single test with specific hyperparameters. The data preprocessing steps are indicated with an orange background. The bottom part shows the model selection phase. Here, different hyperparameters are compared and the best hyperparameter combination is chosen to reconstruct the measurements.

Figure 2: Schematic overview of our neural network architecture. Four solar wind parameters form the input vector \(\mathbf{X}\) (in this example \(B\), \(\log n_{O^{7+}}/n_{O^{6+}}\), \(n_{\rm p}\), and \(T_{\rm p}\)), the hidden layer contains a variable number of neurons \(n_{k}\), and the output \(\mathbf{y}_{\rm rec}\) is the remaining solar wind parameter (in this example \(v_{\rm p}\)). Each layer is fully connected by weights that are represented by arrows.

### Multilayer perceptron

We employed a multilayer perceptron (MLP) with one hidden layer as a general purpose function approximator (Hornik et al., 1989) to predict a data product \(\mathbf{y}_{\rm rec}\). The input vector \(\mathbf{X}\) consists of the remaining four data products. The specific implementation used is the MLPRegressor from the Python package scikit-learn (Pedregosa et al., 2011), version 1.1.2. Figure 2 shows a schematic overview of our neural network structure. In addition to the input vector \(\mathbf{X}\) and the output, which is the reconstructed value \(\mathbf{y}_{\rm rec}\), the setup includes a hidden layer consisting of \(n_{k}\) neurons. Each layer is fully connected to the next layer by weights \(w_{i,j}\in\mathbb{R}\) (with \(i,j\) the indices of neurons in two consecutive layers, e.g. input to hidden layer or hidden layer to output), which form a weight matrix \(\mathbf{W}\). The general principle of neural network training can be summarised in three steps: (1) The forward pass.
For each layer, the product of the input vector (or neuron vector) with the weight matrix is computed and the activation function \(f\) is applied to obtain the input vector for the next layer \(\mathbf{h}\) or the result \(\mathbf{y}_{\rm rec}\): \(\mathbf{h}=f(\mathbf{W}\mathbf{X})\). (2) Computation of the difference between known output and current output. The difference between the calculated value \(\mathbf{y}_{\rm rec}\) and the measured value \(\mathbf{y}_{\rm meas}\) describes the current training progress and is calculated using the mean squared error. (3) Back-propagation. The aforementioned difference, as the estimated training progress, is minimised by propagating the error information from the output layer backwards through the neural network. This changes the values in the weight matrices and minimises the training error. Here, we employ the efficient Adam solver; see Kingma & Ba (2014) for a full description. These three steps are repeated \(n_{\rm iter}\) times. As the chosen back-propagation variant, Adam, has stochastic elements, we repeated the training for 100 independent trials. Each trial is initialised with a different random seed for the initial weights, and the training data are shuffled for each trial. Training is stopped after 200 iterations. As shown in Fig. 3, the performance appears to converge after fewer than 200 iterations for the majority of the hyperparameter combinations. While there is no guarantee that more iterations would not yield further improvements, some tests with 2000 iterations showed no indications of this.

### Error estimation and stability

In this subsection, we describe the different performance and error estimates that are of interest for our study. First, to evaluate the validation error, which is the basis for selecting optimal hyperparameter settings, we employ the \(R^{2}\) score. For each reconstructed solar wind parameter, the reconstruction \(\mathbf{y}_{\rm rec}\) is compared to the measurements \(\mathbf{y}_{\rm meas}\). The score \(R^{2}\) is calculated as \[R^{2}=1-\frac{r}{s}\ , \tag{2}\] \[r=\sum_{i=1}^{m}(y_{{\rm meas},i}-y_{{\rm rec},i})^{2}\ , \tag{3}\] \[s=\sum_{i=1}^{m}(y_{{\rm meas},i}-{\rm mean}(\mathbf{y}_{\rm meas}))^{2}\ . \tag{4}\] An \(R^{2}\) score of 1 indicates a perfect reconstruction. The \(R^{2}\) score is used in the model selection to choose the specific hyperparameters for each reconstruction. The \(R^{2}\) score is well suited to comparing different hyperparameter configurations for the same reconstructed parameter on the same data set. For a comparison of the different reconstructed solar wind parameters, however, different neural networks have to be assessed in relation to each other. For this, the \(R^{2}\) score, which depends on the estimated variance of the data set, is less well suited.
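To make the training setup concrete, a single trial with the scikit-learn MLPRegressor might look like the following sketch. The hyperparameter values correspond to the \(v_{\rm p}\) column of Table 1; the helper name and the arrays `X_train`, `y_train`, `X_val`, `y_val` (from the splits of Sect. 2.2) are assumptions for illustration.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def train_one_trial(X_train, y_train, X_val, y_val, seed=0):
    """One training trial; returns the fitted model and its R^2
    validation score (Eq. 2)."""
    scaler = StandardScaler().fit(X_train)  # zero mean, unit variance per input dimension
    model = MLPRegressor(
        hidden_layer_sizes=(10,),    # n_k = 10 neurons in one hidden layer
        activation="relu",
        solver="adam",
        learning_rate_init=0.001,    # initial learning rate (lambda)
        beta_1=0.95, beta_2=0.999,   # Adam moment decay rates
        epsilon=1e-6,                # numerical stability term
        alpha=0.001,                 # L2 penalty
        max_iter=200,                # training is stopped after 200 iterations
        random_state=seed,           # a different seed for each of the 100 trials
    )
    model.fit(scaler.transform(X_train), y_train)
    return model, model.score(scaler.transform(X_val), y_val)  # score() is R^2
```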
\begin{table} \begin{tabular}{c|l l l l l l} hyperparameter & tested values & \(B\) & \(n_{O^{7+}}/n_{O^{6+}}\) & \(n_{\rm p}\) & \(T_{\rm p}\) & \(v_{\rm p}\) \\ \hline \(n_{k}\) & [10, 20, 50, 100] & 10 & 10 & 10 & 10 & 10 \\ iterations & 200 & 200 & 200 & 200 & 200 & 200 \\ activation function & relu & relu & relu & relu & relu & relu \\ logarithmic \(n_{O^{7+}}/n_{O^{6+}}\) & True & True & True & True & True & True \\ solver & adam & adam & adam & adam & adam & adam \\ \(\lambda\) & [0.01, 0.001, 0.0001] & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\ \(\beta_{1}\) & [0.75, 0.8, 0.85, 0.9, 0.95, 0.99] & 0.99 & 0.9 & 0.75 & 0.9 & 0.95 \\ \(\beta_{2}\) & [0.8, 0.85, 0.9, 0.95, 0.99, 0.999] & 0.95 & 0.999 & 0.9 & 0.999 & 0.999 \\ \(\epsilon\) & [\(10^{-6}\), \(10^{-7}\), \(10^{-8}\), \(10^{-9}\), \(10^{-10}\)] & \(10^{-9}\) & \(10^{-9}\) & \(10^{-8}\) & \(10^{-6}\) & \(10^{-6}\) \\ \(\alpha\) & [0.001, 0.0001, 0.00001] & 0.0001 & \(10^{-5}\) & 0.001 & 0.0001 & 0.001 \\ \end{tabular} \end{table} Table 1: Investigated (column 2) and best-performing (columns 3-7) hyperparameters from the model selection for each solar wind parameter. The best hyperparameters are used in the final model as well as in the Monte Carlo error simulations.

\begin{table} \begin{tabular}{c|l l l l l} parameter & \(v_{\rm p}\) & \(n_{\rm p}\) & \(T_{\rm p}\) & \(B\) & \(n_{O^{7+}}/n_{O^{6+}}\) \\ \hline \(\Delta\) & 1.5\% & 15\% & 20\% & 0.1 nT & from counts \\ \end{tabular} \end{table} Table 2: Measurement errors \(\Delta\) of the solar wind parameters according to Smith et al. (1998); Skoug et al. (2004); Berger (2008).

Figure 3: Validation scores for all considered hyperparameter configurations with ten neurons and all 10 trials for \(\mathbf{y}_{\rm rec}=v_{\rm p}\). Each individual trial is shown with a thin black line. The hyperparameter combination with the highest final median validation score is plotted in blue. In addition, the median and three overlapping confidence intervals (15.9th-84.1st, 2.5th-97.5th, and 0th-100th percentiles) are shown as overlapping blue shaded regions.

The results of our study are affected by different sources of uncertainty. Therefore, in the following, three different types of error or uncertainty measure are considered. The first type of uncertainty is the measurement error \(\Delta\) of the measured solar wind parameters. For \(\Delta v_{\rm p}\), \(\Delta n_{\rm p}\), and \(\Delta T_{\rm p}\), relative errors are taken from the literature (Skoug et al., 2004). For \(\Delta B\), an absolute value of 0.1 nT is given by Smith et al. (1998). For \(\Delta\log n_{O^{7+}}/n_{O^{6+}}\), the error is derived from the actual counting statistics of SWICS based on Poisson statistics. As \(O^{7+}\) is rare and can be at the limit of the detection capabilities of SWICS in very dilute solar wind, the resulting error can be enormous. We decided against excluding the data points with particularly high oxygen charge-state measurement errors because these occur systematically in very dilute coronal hole wind, mainly during the solar activity minimum; excluding these data points from our analysis would therefore introduce a systematic bias in the data set. These errors and the references they were taken from are listed in Table 2. The second type of error is defined by the comparison of the original measured data with the reconstructed values. These errors are only calculated on the test data set, which encompasses a sample size of 63432 points. For this purpose, we consider linear and quadratic measures. A first straightforward approach is to calculate the relative reconstruction error \(y_{\rm diff}\) between the observed and reconstructed quantity: \[y_{\rm diff}=\frac{y_{\rm meas}-y_{\rm rec}}{y_{\rm meas}}\ . \tag{5}\] Further insights are provided by the mean absolute percentage error (MAPE) and the reduced \(\chi^{2}_{\rm red}\) score.
These errors are used to evaluate the accuracy of the reconstruction or as a goodness-of-fit parameter. The MAPE is calculated as \[{\rm MAPE}=\frac{1}{m}\sum_{i=1}^{m}\left|\frac{y_{{\rm rec},i}-y_{{\rm meas},i}}{y_{{\rm meas},i}}\right|\ , \tag{6}\] with \(m\) the number of samples and \(i\) the index of each sample. An approximation of the reduced chi-square statistic is calculated on the test data set: \[\chi^{2}=\sum_{i}\frac{(y_{{\rm meas},i}-y_{{\rm rec},i})^{2}}{(\Delta y_{{\rm meas},i})^{2}}\ , \tag{7}\] \[\chi^{2}_{\rm red}=\frac{\chi^{2}}{\nu}\ , \tag{8}\] with the degrees of freedom \(\nu\), here given as the sample size of the test set (63432) minus the number of parameters fixed by the model, that is, the number of connections in the neural network (here 61: \(4\times 10\) input-to-hidden weights, 10 hidden biases, 10 hidden-to-output weights, and 1 output bias). Typically, the reduced \(\chi^{2}_{\rm red}\) score is used in the context of fitting and is computed for all summation indices \(i\) in Equation 7 over the data set to which the model was fitted. Here, we apply this concept to evaluate the reconstruction error with respect to the measurement uncertainty. Therefore, in our case, the reduced \(\chi^{2}_{\rm red}\) is calculated from the test data set (and not the training data). The \(\chi^{2}\) score is still considered as a measure of goodness of fit and augments the comparison between the five reconstructions based on the MAPE score. In the context of fitting, a \(\chi^{2}_{\rm red}\) of one indicates a good fit consistent with the measurement errors. Lower values of the reduced \(\chi^{2}_{\rm red}\) can indicate overfitting due to large measurement uncertainties. The third type of error in our study is an estimate of the impact of the measurement uncertainties on the reconstruction error. This estimate is derived with a Monte Carlo simulation, and the resulting error is therefore also called the Monte Carlo error. As the potential accuracy of the reconstruction by the MLP regressor is limited by the underlying measurement uncertainty of the five considered solar wind parameters, a basic Monte Carlo approach is used to estimate the effect of this measurement uncertainty. To this end, Gaussian noise is added to each data point. The respective standard deviations of these Gaussian noise distributions depend on the data products to be reconstructed and their respective measurement errors \(\Delta\), and are listed in Table 2. For each noisy data set generated in this way, we apply the same procedure as described in the previous subsections. We repeat the Monte Carlo simulation 100 times. The distribution of the resulting reconstructions based on the noisy data sets provides a measure of the susceptibility of the MLP regressor to the measurement uncertainty. To ensure that the Monte Carlo simulation is not biased by the occasionally very poor counting statistics of \(O^{7+}\), we limit the Monte Carlo noise of \(\log n_{O^{7+}}/n_{O^{6+}}\) to 0.41 and use this value for the 14.9% of the oxygen data that exhibit a larger relative measurement uncertainty. The variability of the Monte Carlo results is indicated by confidence intervals corresponding to a \(1\sigma\) environment defined by the 15.9th and 84.1st percentiles. We refer to these as \(1\sigma\) equivalent percentile confidence intervals in the following.
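The three error measures and the Monte Carlo perturbation can be written compactly; the following is a minimal sketch, assuming `y_meas`, `y_rec`, the measurement errors `delta` (cf. Table 2), and the degrees of freedom `nu` are given as described above.

```python
import numpy as np

def relative_error(y_meas, y_rec):
    """Relative reconstruction error y_diff, Eq. (5); negative values mean overestimation."""
    return (y_meas - y_rec) / y_meas

def mape(y_meas, y_rec):
    """Mean absolute percentage error, Eq. (6)."""
    return np.mean(np.abs((y_rec - y_meas) / y_meas))

def chi2_red(y_meas, y_rec, delta, nu):
    """Reduced chi-square, Eqs. (7)-(8), with measurement errors delta."""
    return np.sum((y_meas - y_rec) ** 2 / delta**2) / nu

def perturb(X, sigma, rng):
    """One Monte Carlo realisation: Gaussian noise with parameter-wise
    standard deviations sigma derived from Table 2."""
    return X + rng.normal(0.0, sigma, size=X.shape)

# Example: rng = np.random.default_rng(0); X_noisy = perturb(X, sigma, rng)
```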
### Model selection

The performance of a machine learning method depends (often sensitively) on the choice of hyperparameters of the method. Therefore, unbiased evaluations and comparisons of machine learning methods are only feasible if optimal hyperparameters are used. The process of selecting hyperparameters is called model selection. Here, we employ a simple grid search to select optimal hyperparameters. Table 1 summarises the hyperparameters of the MLPRegressor in scikit-learn and of our overall method (see also Fig. 1). For each combination of hyperparameters given in Table 1, we trained our neural network and computed the validation error. The considered hyperparameters include the number of neurons \(n_{k}\), the initial learning rate \(\lambda\), the L2 penalty \(\alpha\), the exponential decay rates for the estimates of the first moment vector (\(\beta_{1}\)) and the second moment vector (\(\beta_{2}\)), and the value for numerical stability \(\epsilon\). We also performed tests with different activation functions, finding that replacing relu with the logistic function or tanh does not have a significant impact on the results of our study. We base the model selection not on the complete learning history as shown in Fig. 3 but on the final validation scores after 200 iterations. Due to the high number of hyperparameter combinations tested, we initially conducted only ten trials for each hyperparameter configuration. The resulting variability is illustrated in Fig. 3, where the performance of each individual trial for all combinations of the hyperparameters in Table 1 is shown for \(\mathbf{y}_{\rm rec}=v_{\rm p}\). Figure 3 illustrates that many (most) hyperparameter combinations lead to very similar final validation scores. The variability of the hyperparameter combination with the highest final median validation score is indicated with the blue shaded area. This shows that the uncertainty from the individual trials is larger than the differences in the median performance of different hyperparameter combinations. For each \(\mathbf{y}_{\rm rec}\), the hyperparameter combination with the highest final median validation score is considered optimal. The optimal hyperparameters chosen in this way depend on which solar wind parameter is chosen as the reconstructed output \(\mathbf{y}_{\rm rec}\). These optimal combinations are used in the remainder of this study to reconstruct the solar wind parameters on the test set. Table 1 shows the final hyperparameters.
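Schematically, the grid search over the hyperparameter ranges of Table 1 could be implemented as below; `evaluate` stands for one training trial (for example the hypothetical `train_one_trial` sketched in Sect. 2.3) and returns the final validation score.

```python
from itertools import product
import numpy as np

# Hyperparameter grid following column 2 of Table 1.
GRID = {
    "hidden_layer_sizes": [(10,), (20,), (50,), (100,)],
    "learning_rate_init": [0.01, 0.001, 0.0001],
    "beta_1": [0.75, 0.8, 0.85, 0.9, 0.95, 0.99],
    "beta_2": [0.8, 0.85, 0.9, 0.95, 0.99, 0.999],
    "epsilon": [1e-6, 1e-7, 1e-8, 1e-9, 1e-10],
    "alpha": [0.001, 0.0001, 0.00001],
}

def grid_search(evaluate, n_trials=10):
    """Select the configuration with the highest median final validation score."""
    best, best_score = None, -np.inf
    for values in product(*GRID.values()):
        config = dict(zip(GRID.keys(), values))
        scores = [evaluate(config, seed) for seed in range(n_trials)]
        if np.median(scores) > best_score:
            best, best_score = config, np.median(scores)
    return best, best_score
```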
## 3 Reconstruction of solar wind parameters

We apply our method to the ACE data set as described in the previous section to obtain a model (realised by a neural network) for each of the five considered solar wind parameters. This model can then be applied to previously unseen solar wind data, namely the test data, to evaluate and analyse the performance of our neural network function approximators and to address our research questions. Figure 4 shows three of the 22 test data time periods of 27.24 days. For each reconstructed parameter, the respective observation is shown in the same panel. As an inset, the MAPE score of the reconstruction is given in red in each panel, together with confidence intervals derived from the Monte Carlo simulations. These confidence intervals reflect the effect of the measurement uncertainty estimated by the Monte Carlo simulations and are defined by the 15.9th and 84.1st percentiles of the Monte Carlo simulation results. As these values are calculated from noisy data, the score of the original data is in some cases significantly different (see Fig. 4c). Overall, the reconstruction captures the major fluctuations in all solar wind parameters reasonably well. However, for all reconstructed solar wind parameters, particularly low and high values in the observations are frequently over- or underestimated by the neural network. These observations are supported by the calculated MAPE scores. In particular, \(n_{O^{7+}}/n_{O^{6+}}\) is consistently underestimated in the second time period (Fig. 4n) and for most of 2003 August 26 in the first time period (Fig. 4m). Further, the reconstruction quality varies for different reconstructed solar wind parameters and in different test data time periods. In particular, the reconstruction of the proton speed appears more accurate than the other reconstructions. This topic is investigated in more detail in the following subsections.

Figure 4: Time series of reconstructed and measured solar wind parameters for three selected test data time periods with the average length of a Carrington rotation. The time periods were chosen, based on the MAPE score, to be representative examples of 'good', 'intermediate', and 'poor' performance, respectively. The time period to the left represents one of the most accurate reconstructions, the middle time period represents a reconstruction of average accuracy, and the right time period is one of the poorest reconstructions. For each Carrington rotation, the observed data are plotted in blue and the reconstructed data are plotted in purple. The uncertainty on the reconstruction is estimated with 100 Monte Carlo simulations. The 15.9th to 84.1st percentiles of the Monte Carlo runs are plotted as purple-shaded areas. Each row depicts one solar wind parameter, from top to bottom: \(v_{\rm p}\) (a), (b), and (c) in blue, \(n_{\rm p}\) (d), (e), and (f) in red, \(T_{\rm p}\) (g), (h), and (i) in green, \(B\) (j), (k), and (l) in light grey, and \(n_{O^{7+}}/n_{O^{6+}}\) (m), (n), and (o) in dark grey. The MAPE score for each part of the test data set is shown in red as an inset.

### Reconstruction error

The upper row of Fig. 5 shows histograms of the relative reconstruction errors for the complete test data set and all reconstructed solar wind parameters. Each panel also gives the MAPE score for each reconstruction. A MAPE score of zero indicates a perfect reconstruction. The sign of the relative reconstruction error \(y_{\rm diff}\) (Eq. 5) differentiates between overestimation (negative sign) and underestimation (positive sign): an overestimation by 50% results in \(y_{\rm diff}=-0.5\), an overestimation by 100%, or double the measured value, leads to \(y_{\rm diff}=-1.0\), and an underestimation by 50%, or half the measured value, results in \(y_{\rm diff}=+0.5\). The confidence intervals of the MAPE score were calculated for the 100 Monte Carlo runs and are given as \(1\sigma\) equivalent percentile confidence intervals. Each histogram in Fig. 5 is augmented by a black outline that summarises the results of the Monte Carlo simulations. We first focus on the MAPE scores (included in the legend of each panel). The reconstruction of the proton speed results in a MAPE score of \(0.084\in[0.085,0.108]\). In comparison, the other reconstruction errors are \(n_{\rm p}:0.400\in[0.369,0.373]\), \(T_{\rm p}:0.327\in[0.320,0.324]\), \(B:0.277\in[0.280,0.302]\), and \(n_{O^{7+}}/n_{O^{6+}}:0.307\in[0.304,0.305]\). Therefore, as already illustrated in Fig. 4, the reconstruction of the proton speed appears more accurate than the other reconstructions. This is also apparent in the shape of the histograms.
The histograms of the reconstruction errors for the four other solar wind parameters are asymmetric and feature heavier tails biased towards negative values (Fig. 5 (b), (c), (d), and (e)). This means that the reconstructions for these solar wind parameters tend to overestimate the observations more frequently and more strongly than they underestimate them. The \(1\sigma\) equivalent percentile confidence intervals of the distribution of the individual reconstruction errors (indicated with grey hatching) also underline this: the lower percentile bound is further away from zero than the upper percentile bound. In addition, while the maxima of the histograms for the proton speed and the magnetic field strength are located at zero, the maxima are shifted to the right for the three other solar wind parameters. Thus, while extreme values are frequently overestimated by the neural network for \(n_{\rm p}\), \(T_{\rm p}\), and \(n_{O^{7+}}/n_{O^{6+}}\), most values are slightly underestimated. This indicates that the model attempts to smooth the solar wind observations more than desired. While this could be a consequence of an excessively small hidden layer in our neural network, our model selection does not indicate an improvement of the validation error for larger hidden layer sizes. Therefore, we favour different explanations: either (1) the reconstruction might be inhibited by the measurement accuracy, or (2), given that our model is time-independent, within the limitations of the measurement uncertainties, short-term variations that affect some but not all of the considered solar wind parameters cannot be captured by the neural network reconstruction.

Figure 5: One-dimensional histograms of the reconstruction errors \(y_{\rm diff}\) and \(\chi^{2}\) scores for each reconstructed solar wind parameter. In each of the top panels, the x-axis is constrained to 100 bins from -2 to 1. Each of the top panels depicts the reconstruction accuracy of one of the five reconstructed solar wind parameters (a-e) based on the MAPE score, and the bottom panels (f-j) show normalised densities of the \(\chi^{2}\) score for each reconstructed parameter. Each column refers to a different reconstructed parameter, from left to right: \(v_{\rm p}\) (a) and (f) in blue, \(n_{\rm p}\) (b) and (g) in red, \(T_{\rm p}\) (c) and (h) in green, \(B\) (d) and (i) in light grey, and \(n_{O^{7+}}/n_{O^{6+}}\) (e) and (j) in dark grey. In each histogram, an area is marked with grey hatches that contains all data from the 15.9th to the 84.1st percentiles. The respective MAPE score and \(\chi^{2}_{\rm red}\) are indicated in insets in the top and bottom rows; for both, the respective confidence intervals derived from the Monte Carlo runs are included. An additional black histogram outline is included in each panel, which represents the variability of the Monte Carlo simulations based on randomised input data. In each of the bottom panels, the x-axis contains 30 logarithmic bins. In the bottom row only, the y-axis is also logarithmic with a lower bound of \(10^{-8}\), i.e. \(10^{-6}\)% of the \(\chi^{2}\) score density, and the histogram is normalised to the sum of the distribution. The inset also depicts the \(\chi^{2}_{\rm red}\) score. In addition, the median of the individual \(\chi^{2}\) scores is noted to show the spread of the distribution.
The lower row of Fig. 5 shows histograms of the individual \(\chi^{2}\) values for each reconstructed parameter. The x-axis shows a logarithmic bin distribution of the \(\chi^{2}\) scores with 30 bins. The y-axis shows the density distribution of each \(\chi^{2}\) bin; the density distribution is normalised to 1. The red inset provides the reduced \(\chi^{2}_{\rm red}\) score of the whole data set. The confidence intervals in the second row are calculated by taking the \(\chi^{2}_{\rm red}\) scores of the Monte Carlo runs and computing the 15.9th and 84.1st percentiles. The third line of the inset provides the median value of the individual \(\chi^{2}\) scores on the test data set (not of the Monte Carlo runs). The \(\chi^{2}_{\rm red}\) scores show that the reconstruction most closely in line with the measurement errors is the proton temperature \(T_{\rm p}\) reconstruction with a score of \(13.3\in[13.6,16.1]\), followed by the proton density \(n_{\rm p}\), the proton speed \(v_{\rm p}\), the oxygen charge-state ratio \(n_{O^{7+}}/n_{O^{6+}}\), and finally the magnetic field strength \(B\). We interpret the comparatively very poor \(\chi^{2}_{\rm red}\) score of the proton speed, despite the apparently good MAPE scores of the reconstruction, as being the result of the small measurement uncertainty on the proton speed (in particular in comparison to the measurement uncertainties and scores of the proton density and proton temperature; see Table 2).

Figure 6: Two-dimensional histograms of the relative reconstruction error (a, b, c, d, e) and MAPE score (f) for each reconstructed solar wind parameter over the proton-proton collisional age. From top to bottom: \(v_{\rm p}\), \(n_{\rm p}\), \(T_{\rm p}\), \(B\), and \(n_{O^{7+}}/n_{O^{6+}}\). For the first five subplots, the y-axis shows the normalised differences between the reconstruction and the measured data, and the x-axis shows the proton-proton collisional age. In all panels, data are sorted into 50 bins between -3.5 and 2.2 on the x-axis, and in panels (a-e) the data are sorted into 50 bins between -2.0 and 1.0 on the y-axis. The vertical black lines give approximate thresholds separating sector reversal from streamer belt plasma (right black line) and streamer belt plasma from coronal hole wind (left black line). The red line highlights the maximum of the distribution in each vertical slice with sufficient statistics (at least 5000 data points over all Monte Carlo runs per column). The bottom-most subplot shows the MAPE scores (on the y-axis) computed separately for all test data points falling into the respective proton-proton collisional age bin (on the x-axis) for each reconstructed solar wind parameter. The MAPE score is calculated according to Equation 6. Confidence intervals in panel (f) are given as three overlapping areas per curve (15.9th-84.1st, 2.5th-97.5th, and 0th-100th percentiles).

Figure 7: MAPE scores for each time period from the test data set for each reconstructed solar wind parameter over ten years, sorted by solar wind type. The top panel shows the respective MAPE scores on all (non-ICME) solar wind data from each test data time period. The three panels below show the MAPE scores separated by Xu & Borovsky (2015) solar wind type (from top to bottom: sector reversal plasma, streamer belt plasma, and coronal hole wind). The bottom panel gives the number of valid data points per test data time period and solar wind type. In addition, on the right y-axis, the bottom panel includes the monthly sunspot number as a reference (SILSO World Data Center 2001-2010). In the four top panels, simple linear fits to the scores of each reconstructed parameter (from 2002-2008) are shown as thin coloured lines. To the right of each of the four upper plots, plus and minus symbols in the colour of the respective reconstructed parameter indicate whether the slope of the corresponding line is positive (+) or negative (-).
A similar effect probably impacts the \(\chi^{2}_{\rm red}\) score of the magnetic field strength. For values between 1 and 10 nT, a measurement error of 0.1 nT corresponds to a relative measurement error of 10% to 1%. This is a lower relative measurement error than for the proton density \(n_{\rm p}\) or the proton temperature \(T_{\rm p}\). Therefore, the \(\chi^{2}\) score is poorer even though the reconstruction accuracy estimated by the MAPE score is similar to that of \(n_{\rm p}\) and \(T_{\rm p}\). However, in the case of the magnetic field strength, the reconstruction is also less accurate than that of the proton speed. The charge-state ratio of oxygen is associated with (by far) the largest measurement errors, and this is reflected in the poorest \(\chi^{2}_{\rm red}\) score. The median values of the individual \(\chi^{2}\) scores for the proton temperature (1.8) and the oxygen charge-state ratio (1.0) indicate that the majority of the reconstructed data points are consistent with the measurement errors, even though their average, the reduced \(\chi^{2}_{\rm red}\), is strongly affected by outliers with a very poor \(\chi^{2}\) score.

### Dependence on solar wind type

Next, we investigate how the reconstruction errors relate to the solar wind type. Separating the data into the solar wind types described in Xu & Borovsky (2015) provides clues as to which solar wind type is most difficult to reconstruct. The MAPE scores and the \(\chi^{2}_{\rm red}\) scores for each parameter and solar wind type are listed in Table 3. Additionally, the median value of the underlying distribution of each score is provided. Except for the solar wind proton speed \(v_{\rm p}\), the MAPE scores of the sector reversal solar wind type are consistently worse than those of the coronal hole and streamer belt types. Since the three solar wind types considered here are also affected by different transport effects, the comparison of the reconstruction error in different solar wind types also provides hints as to the influence of transport effects on the reconstruction. Most \(\chi^{2}_{\rm red}\) scores follow the pattern of being higher for higher MAPE scores; a poor reconstruction therefore results in comparatively high MAPE scores and \(\chi^{2}_{\rm red}\) scores. Nevertheless, there are some exceptions. The proton density \(n_{\rm p}\) for sector reversal plasma shows a low \(\chi^{2}_{\rm red}\) score compared to coronal hole or streamer belt plasma. We suspect that the difficulty of reconstructing the proton density in coronal hole wind is an indirect result of wave activity. Waves, which at 1 AU are mainly observed in coronal hole wind, increase the variability of the proton speed, proton temperature, and magnetic field strength, while not affecting the proton density. This creates an ambiguity for the proton density because, for the same constant proton density, variable combinations of proton speed, proton temperature, and magnetic field are observed.
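A per-type breakdown such as that in Table 3 can be obtained by grouping the test-set errors by solar wind type, for instance along the lines of this sketch (array names are again illustrative):

```python
import numpy as np

def scores_by_type(y_meas, y_rec, types):
    """MAPE and median relative error per solar wind type (cf. Table 3)."""
    result = {}
    for t in np.unique(types):
        m = types == t                               # mask for one solar wind type
        y_diff = (y_meas[m] - y_rec[m]) / y_meas[m]  # Eq. (5)
        result[t] = {
            "MAPE": np.mean(np.abs(y_diff)),         # Eq. (6)
            "median": np.median(y_diff),
        }
    return result
```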
Figure 6 shows the reconstruction errors for each solar wind parameter ordered by the proton-proton collisional age \(a_{\rm col,p-p}\). The proton-proton collisional age (also referred to as the Coulomb number; see Kasper et al. (2012)) is well suited to ordering the solar wind observations directly by their collisional history (Kasper et al., 2019, 2012; Tracy et al., 2016); it also serves as a proxy for the Xu & Borovsky (2015) solar wind types, as discussed in Heidrich-Meisner et al. (2020). A low proton-proton collisional age (\(\log a_{\rm col,p-p}\lesssim-0.9\)) approximately corresponds to coronal hole wind, an intermediate proton-proton collisional age (\(-0.9\lesssim\log a_{\rm col,p-p}\lesssim 0.1\)) to streamer belt plasma, and a high proton-proton collisional age (\(0.1\lesssim\log a_{\rm col,p-p}\)) to sector reversal plasma. Therefore, the three solar wind plasma types can be approximately separated by the proton-proton collisional age. The locations of the column-wise maxima in Fig. 6 and the shape of the distributions in each panel are different for each reconstructed solar wind parameter. For the magnetic field strength, the core of the coronal hole wind population (low proton-proton collisional age) and the streamer belt population are shifted towards negative relative errors, that is, towards overestimation. For the proton density, the peak positions exhibit a similar systematic behaviour to the magnetic field strength. For the proton temperature, the core of the streamer belt population is shifted slightly in the positive direction, that is, underestimation in the intermediate proton-proton collisional age range, and more strongly in the positive direction in the low proton-proton collisional age range, which again indicates underestimation. Under the assumption that waves are predominantly observed in coronal hole wind, which is here identified by a low proton-proton collisional age, this systematic change of the core positions could be interpreted as the effect of the presence or absence of waves on the reconstruction. The (apparent or real, see Verscharen & Marsch (2011)) heating of the proton core distribution by waves increases the observed proton temperature in coronal hole wind. This is likely not covered well by the neural network model, because in the corresponding proton-proton collisional age range the distribution of the reconstruction errors for the proton temperature moves to the right, which indicates underestimation of the observed proton temperatures by the neural network model. In addition, for the sector reversal plasma, the higher the proton-proton collisional age, the less accurate the reconstruction for all reconstructed parameters except the proton speed. This effect is most pronounced in the proton density and the proton temperature. With compressed slow solar wind from SIRs, the sector reversal plasma contains strongly transport-affected solar wind plasma. In particular, the compression regions affect the proton density, proton temperature, and magnetic field strength, but the proton speed is not affected to the same extent, and the oxygen charge-state ratio is not affected at all. The systematic shifts away from zero, which are visible in the reconstruction errors for \(n_{\rm p}\), \(T_{\rm p}\), and \(B\) at the corresponding high proton-proton collisional ages in Fig. 6, show that this case is not well represented in the static neural network model.
We suspect that the same transport effect is indirectly responsible for the shift of the reconstruction error of the oxygen charge-state ratio in the same high proton-proton collisional age region. Here, the input solar wind parameters, in particular \(n_{\rm p}\) and \(T_{\rm p}\), are systematically changed by the SIR, which makes it more difficult for the static model to recover the transport-unaffected charge-state ratio from these strongly transport-affected solar wind parameters. This effect is probably reinforced by the comparatively poorer statistics of sector reversal plasma compared to the other two solar wind types. The distribution of the proton speed reconstruction differences is visibly narrower, and the MAPE score for the reconstruction of the proton speed is low over almost the complete range of proton-proton collisional age bins (see bottom-most panel in Fig. 6), which again supports the conclusion that the proton speed reconstruction is the most accurate based on the MAPE score --although the accuracy is low compared to the small measurement uncertainty of the proton speed; cf. Sect. 3.1.

### Limits of the time-independent model

The underlying ACE data cover 10 years, which is almost one solar activity cycle. It is well known that the properties of the solar wind all change systematically over time (McComas et al., 2000; Kasper et al., 2012; Shearer et al., 2014). As our neural network model is stationary, this time-dependent effect cannot be captured by our model. Therefore, in this section, we investigate how strong this effect is and how the reconstruction accuracy varies over time. As the Sun and the solar wind properties are less variable during the solar activity minimum, we expected the reconstruction accuracy to worsen with increasing solar activity: the best reconstruction would be expected during the solar activity minimum and the worst during the solar activity maximum. Figure 7 shows the MAPE score for each reconstructed solar wind parameter and the different Xu & Borovsky (2015) solar wind types for each individual time period from the test data set. As discussed in Sect. 2, the training, validation, and test data sets are all similarly distributed over time. In each of the four top panels of Fig. 7, the MAPE score for each reconstructed parameter is shown for easy comparison. In each panel, we again see that the reconstruction of the proton speed achieves the smallest MAPE scores. The proton density reconstruction shows the largest variation over time, in particular in sector reversal plasma. Linear fits (restricted to 2002-2008), using the mean deviation of the upper and lower bounds of the Monte Carlo confidence intervals from each data point as an estimate of the standard deviation of each value, illustrate that, independently of solar wind type, the reconstruction accuracy is similar for all solar wind parameters during all times of the solar cycle. In all cases, the slope of the lines is small, but --as indicated by the plus and minus symbols to the right of each subplot-- the slopes are significantly different from zero (based on a Wald test for statistical significance). A possible explanation for the rising slope (+ symbol) of \(B\) and \(n_{O^{7+}}/n_{O^{6+}}\) could lie in their respective measurement errors. A rising slope means that the reconstruction accuracy is lower during the solar activity minimum. The measurement error of \(B\) is given as an absolute value (0.1 nT).
Its impact on the MAPE score is greatest for small values of \(B\), which are more likely during the solar activity minimum. Similarly, for \(n_{O^{7+}}/n_{O^{6+}}\), fewer ions are measured during the solar activity minimum, and the counting statistics therefore yield smaller values, which results in greater measurement uncertainties. As discussed in Sect. 3.2 and shown in Table 3, the sector reversal plasma reconstruction is the least accurate. In Fig. 7, sector reversal plasma shows the greatest variability between the different data points (each data point corresponds to approximately one Carrington rotation). The outliers in sector reversal plasma are also the most extreme. One explanation could be that the Xu & Borovsky (2015) scheme employed here misidentifies some coronal hole plasma as sector reversal plasma (and vice versa). As plasma from stream interaction regions is strongly affected by transport effects, this regime is more difficult to reconstruct and therefore has a negative effect on the reconstruction accuracy of whichever group it is sorted into; in this case, sector reversal plasma. However, in contrast to our expectations, for all reconstructed parameters, the influence of the solar activity cycle is small compared to the uncertainty arising from the measurement uncertainty. Therefore, with the large underlying measurement uncertainties on the proton temperature, the proton density, and the oxygen charge-state ratio, the reconstruction is not accurate enough to allow a considerably better reconstruction during the solar activity minimum. As the measurement uncertainty on the proton temperature, the proton density, and the oxygen charge-state ratio is also large compared to the systematic variations of these parameters with the solar activity cycle, this is not surprising.

## 4 Discussion

The properties of the solar wind are determined by a combination of the conditions in the solar source region and the transport history experienced by the solar wind. As a result, the proton plasma properties, the magnetic field strength, and the charge-state composition are correlated with each other (Lepri et al., 2013; McComas et al., 2000; von Steiger et al., 2000). Here, we investigate whether a combination of four of these solar wind parameters is sufficient to reconstruct the remaining fifth solar wind parameter. If the considered solar wind parameters contained all the necessary information, a perfect reconstruction would be possible. We therefore consider the obtained reconstruction accuracy as a surrogate quantifying the degree to which other (unknown) hidden parameters play a role in the correlations between the considered solar wind parameters. By analysing how this changes under different solar wind conditions with different respective transport histories, we can extend this argument to the impact of the varying influence of transport processes on the properties of the solar wind. To this end, we investigate interdependencies between the different solar wind parameters \(v_{\rm p}\), \(n_{\rm p}\), \(T_{\rm p}\), \(B\), and \(n_{O^{7+}}/n_{O^{6+}}\).
\begin{table} \begin{tabular}{c|c c|c c|c c|c c} & \multicolumn{2}{c|}{all data} & \multicolumn{2}{c|}{coronal hole} & \multicolumn{2}{c|}{sector reversal} & \multicolumn{2}{c}{streamer belt} \\ & MAPE & median & MAPE & median & MAPE & median & MAPE & median \\ \hline \(v_{\rm p}\) & 0.084 & -0.001 & 0.090 & 0.015 & 0.085 & 0.002 & 0.079 & -0.012 \\ \(n_{\rm p}\) & 0.400 & -0.108 & 0.400 & -0.169 & 0.545 & 0.016 & 0.312 & -0.126 \\ \(T_{\rm p}\) & 0.327 & -0.078 & 0.291 & 0.016 & 0.382 & -0.347 & 0.316 & -0.033 \\ \(B\) & 0.277 & -0.053 & 0.222 & -0.023 & 0.352 & -0.107 & 0.265 & -0.052 \\ \(n_{O^{7+}}/n_{O^{6+}}\) & 0.307 & -0.031 & 0.261 & -0.014 & 0.379 & -0.036 & 0.290 & -0.040 \\ \hline & \(\chi^{2}_{\rm red}\) & median & \(\chi^{2}_{\rm red}\) & median & \(\chi^{2}_{\rm red}\) & median & \(\chi^{2}_{\rm red}\) & median \\ \hline \(v_{\rm p}\) & 50.6 & 22.4 & 59.7 & 24.4 & 47.0 & 24.9 & 47.6 & 19.8 \\ \(n_{\rm p}\) & 37.4 & 3.9 & 80.1 & 3.0 & 19.5 & 6.0 & 22.6 & 3.4 \\ \(T_{\rm p}\) & 13.3 & 1.8 & 4.1 & 1.1 & 32.0 & 4.1 & 7.4 & 1.6 \\ \(B\) & 350.8 & 101.9 & 273.2 & 72.8 & 430.2 & 141.3 & 350.7 & 104.5 \\ \(n_{O^{7+}}/n_{O^{6+}}\) & 269.0 & 1.0 & 34.0 & 0.3 & 540.8 & 1.8 & 246.2 & 1.6 \\ \end{tabular} \end{table} Table 3: MAPE and \(\chi^{2}_{\rm red}\) scores for the five reconstructed solar wind parameters \(v_{\rm p}\), \(n_{\rm p}\), \(T_{\rm p}\), \(B\), and \(n_{O^{7+}}/n_{O^{6+}}\). Additionally, the median value for each score is provided. The first column pair shows the scores for the complete test data set. After applying the scheme from Xu & Borovsky (2015), the scores of the resulting subsets are recorded in column pairs two to four.

All five considered solar wind parameters depend on the respective solar source of the observed solar wind. However, they are affected differently (or not at all, as in the case of \(n_{O^{7+}}/n_{O^{6+}}\)) by different transport effects, which obscures the original source-driven interrelationships. We use a neural network as a general function approximator to model the interdependencies of these solar wind parameters. The lowest mean absolute reconstruction error is achieved for the proton speed \(v_{\rm p}(T_{\rm p},n_{\rm p},B,n_{O^{7+}}/n_{O^{6+}})\). One possible interpretation is that the information carried by the proton speed can be extracted from the other four parameters. However, the proton speed is also associated with a small measurement uncertainty, and compared to the measurement uncertainty the accuracy of the proton speed reconstruction is lower than that of the proton density and proton temperature. Nevertheless, our results indicate that the proton speed can be replaced by other measurements, and it therefore appears to be the least important parameter to measure. Reconstruction of any of the other four parameters, \(T_{\rm p}\), \(n_{\rm p}\), \(B\), or \(n_{O^{7+}}/n_{O^{6+}}\), has proven to be more difficult. Here, the (absolute) reconstruction errors remain high, \(\approx 30\%\). The accuracy of the proton density, proton temperature, and oxygen charge-state ratio reconstructions is strongly limited by the underlying measurement uncertainty. On the one hand, the measurement uncertainty determines the accuracy of the reconstructed parameter itself. This is the case for, for example, the proton temperature, which shows a high (and therefore poor) MAPE score but reaches the lowest (best) \(\chi^{2}_{\rm red}\) score.
On the other hand, the large measurement uncertainties of, for example, the proton temperature and the oxygen charge-state ratio also limit the reconstruction of all other parameters in our setup, because inaccurate input parameters also affect the output parameter. We suspect this effect is the reason for the low reconstruction accuracy compared to the measurement uncertainty we obtained for the proton speed and the magnetic field strength. For the oxygen charge-state ratio, the average reconstruction accuracy compared to the measurement uncertainty is very low, but the reconstruction accuracy is on the order of the measurement uncertainty for the majority of the individual data points. Therefore, measuring these quantities with higher accuracy is important in order to understand the interdependencies of the solar wind parameters. This is of particular interest in the case of the oxygen charge-state ratio \(n_{O^{7+}}/n_{O^{6+}}\). As the only parameter not affected by transport effects, it contains unique information that cannot be substituted by a non-linear relationship between the proton plasma parameters with high absolute accuracy, and the reconstruction of all considered solar wind parameters is likely inhibited by the large measurement uncertainties on \(n_{O^{7+}}/n_{O^{6+}}\). This emphasises the need for heavy ion instruments, such as ACE/SWICS, or the Heavy Ion Sensor (HIS), which is part of the Solar Wind Analyzer (SWA) (Owen et al., 2020) on board Solar Orbiter. These instruments require a stable high-voltage supply, which is challenging to design, and analysis of the data they provide is a complex undertaking. However, our results illustrate that the effort to build instruments of this type is necessary. From the point of view of the solar source region, our reconstruction approach implies that the proton speed carries less detailed information about the source conditions than the other four solar wind parameters. Studying the signatures in \(T_{\rm p}\), \(n_{\rm p}\), \(B\), and \(n_{O^{7+}}/n_{O^{6+}}\) may therefore provide a better chance to capture relevant details of a specific solar source region and the solar wind release mechanisms. Given the large measurement uncertainties on \(T_{\rm p}\), \(n_{\rm p}\), and \(n_{O^{7+}}/n_{O^{6+}}\), these four solar wind parameters cannot be completely reconstructed from each other, and therefore investigating which other properties --not included in this study-- determine their variability may help us to identify the underlying mechanisms behind the release of (slow) solar wind. The reconstruction accuracy differs depending on solar wind type. Based on the absolute reconstruction accuracy (estimated with the MAPE score) and the reconstruction accuracy relative to the measurement uncertainty (estimated with the \(\chi^{2}_{\rm red}\) and the median of the individual \(\chi^{2}\) scores), the reconstructions of \(B\), \(T_{\rm p}\), and \(n_{O^{7+}}/n_{O^{6+}}\) are best in coronal hole wind, the reconstruction of \(v_{\rm p}\) is best in sector reversal plasma, and the reconstruction of \(n_{\rm p}\) is best in streamer belt plasma. This illustrates that reconstructions face different challenges in different solar wind types, which differ both in the properties of the solar source region and in the transport history experienced during the solar wind travel time.
To further investigate our results from the point of view of the transport history of the solar wind, we make use of the proton-proton collisional age, which --although not defined for this purpose-- can serve as a proxy to differentiate between solar wind types with different transport histories (Heidrich-Meisner et al., 2020). Coronal hole wind is often influenced by wave activity. Waves have several effects on the solar wind plasma: the core of the proton population is (or appears to be) heated, probably preferentially perpendicular to the magnetic field; waves are speculated to play a role in the formation of the beam (Marsch et al., 1982; Verniero et al., 2020; D'Amicis & Bruno, 2015; Panasenco et al., 2020; Louarn et al., 2021); and wave-particle interaction likely plays a role in differential streaming (Kasper et al., 2012; Janitzek et al., 2016; Marsch et al., 1982a). Here, we cannot resolve these different effects, but argue that they are all mainly confined to coronal hole wind (or Alfvenic slow solar wind), which is typically associated with low proton density, high proton temperature, and high solar wind speed, all of which lead to a low proton-proton collisional age. Therefore, we assume that the effects of waves as transport processes are relevant in solar wind with a low proton-proton collisional age. In this regime, we find that our neural network reconstruction tends to underestimate the proton temperature and (to a lesser degree) the magnetic field strength. Among the solar wind parameters considered here, these two, \(T_{\rm p}\) and \(B\), are exactly the parameters that are expected to be most influenced by Alfven waves. Therefore, our neural network reconstruction appears to focus on the underlying (source-driven) relationship between the solar wind parameters and not on the effect of waves on the solar wind plasma. Indirectly, this is supported by the observation that the oxygen charge-state ratio does not show a preferential over- or underestimation in the coronal hole wind regime. This is in agreement with the expectation that the oxygen charge-state ratio is not affected by transport effects. If we focus on solar wind with a particularly high proton-proton collisional age, this selects solar wind with high proton densities, high proton temperatures, and low proton speeds. These conditions are best realised in compressed slow solar wind in SIRs and in the preceding slow solar wind. Therefore, as argued in Heidrich-Meisner et al. (2020), we consider solar wind with a high proton-proton collisional age as a proxy for SIRs, which tend to be included in the sector reversal plasma category in the Xu & Borovsky (2015) categorisation. The sector reversal plasma category also includes a very slow, cool, and dense solar wind type (Sanchez-Diaz et al., 2016), which also falls in the high proton-proton collisional age regime. In this regime, we observe a systematic increase in the overestimation of the proton temperature and a systematic increase in the underestimation of the proton density. The underestimation of the proton densities again implies that the neural network model is probably tailored to 'normal' conditions that are unaffected by transport, and is therefore ill-equipped to adapt to the different relationship between proton density and proton temperature in compressed solar wind.
For the proton temperature, the observed underestimation is the result of a compromise between attempting to model the higher proton temperatures in compression regions and those in the very slow solar wind identified in Sanchez-Diaz et al. (2016). This bolsters the argument that the information contained in the solar wind parameters considered here (and in other studies) is likely not sufficient to completely characterise the plasma, its solar source, or the experienced transport history. Our results also support the idea that any solar wind classification using only one or a few of the solar wind parameters considered here contains an inherent bias towards greater accuracy during certain conditions that are more or less affected by transport processes. Our neural network models a static, time-independent relationship between the considered solar wind parameters. However, all considered solar wind parameters change systematically with the phase of the solar activity cycle. Therefore, in principle, our neural network model cannot be expected to perform equally well in all phases of the solar activity cycle. However, investigating how the reconstruction accuracy changes over (almost) one solar cycle shows that the influence of the high measurement uncertainties on the underlying parameters is stronger than a potential solar-activity-cycle-dependent effect. That the model achieves better results for the reconstruction of the oxygen charge-state ratio during the solar activity maximum is probably also an effect of the measurement uncertainty on \(n_{O^{7+}}/n_{O^{6+}}\). During the solar activity minimum phase, observations of very dilute plasma are more likely. This condition can lead to very low count rates in ACE/SWICS and therefore to a very high measurement uncertainty. Although the oxygen charge-state ratio is the only solar wind parameter considered here that is not affected by any transport effects that complicate the relationship between the different considered solar wind parameters, the neural network reconstruction of the oxygen charge-state ratio does not prove to be easier than that of the other, transport-affected solar wind parameters. This can be caused by at least three mechanisms: (1) the oxygen charge-state ratio at the source depends (also) on a property that is not included in our analysis; (2) recovering sufficiently detailed information on the solar source region (which determines the oxygen charge-state ratio) from the proton plasma properties and the magnetic field strength is hindered by the influence of the transport history, which strongly affects the proton temperature and the proton density; and (3) the high measurement uncertainties are not conducive to a good reconstruction.

## 5 Conclusion

We investigated non-linear relationships between different solar wind parameters, namely the proton speed, proton density, proton temperature, magnetic field strength, and the oxygen charge-state ratio. Our findings suggest that only the proton speed can be substituted with other measurements with reasonable absolute and relative accuracy. This implies that the proton speed carries less unique information about the solar source region and transport effects than the other considered solar wind parameters. The precision of the reconstructions of the proton density, proton temperature, and the oxygen charge-state ratio is constrained by their respective measurement uncertainties.
While the average reconstruction accuracy of the oxygen charge-state ratio compared to the measurement uncertainty is generally low, most individual data points exhibit a reconstruction accuracy in line with the measurement uncertainty. While the magnetic field strength can be measured with high accuracy, its reconstruction in our study is similarly inhibited by the comparatively high uncertainties on the proton density, proton temperature, and the oxygen charge-state ratio. Therefore, to further our understanding of the relationships between different solar wind parameters and the processes they originate from, it is crucial to further enhance the measurement accuracy for these quantities. Our neural network reconstruction appears to focus on the underlying relationship driven by the sources of the solar wind, rather than disentangling the impact of transport effects such as wave-particle interactions, collisions, or compression regions on the solar wind plasma. Nevertheless, the reconstruction accuracy clearly differs depending on the solar wind type. We note that different transport effects are dominant in different respective solar wind types, and therefore transport effects, such as wave-particle interactions in coronal hole wind, collisions in slow solar wind, and compression regions in SIRs, limit the potential accuracy of identifying the source region of the solar wind purely based on observations of the proton speed, proton density, proton temperature, and magnetic field strength. Our results therefore underline the importance of measuring the charge states of the solar wind directly and with high accuracy. For complex models based on magnetohydrodynamics and solar corona magnetic field models (Arge & Pizzo 2000; Cranmer & Van Ballegooijen 2005; Cranmer et al. 2007; van der Holst et al. 2010; Pizzo 2011; Schultz 2011; van der Holst et al. 2014; Pomoell & Poedts 2018), capturing the properties of SIRs tends to be comparatively difficult. The fact that our simple approach of ad hoc neural network reconstruction is also least accurate for the solar wind type that contains the most SIRs suggests that the additional effect of compression --which dominates the plasma properties in SIRs-- needs to be considered in the design of highly accurate models. Consequently, incorporating comprehensive details regarding transport effects, compression regions, and the progressive impingement of faster solar wind into SIRs (Hofmeister et al. 2022) into consistent MHD-driven solar wind models holds promise for enhancing their accuracy.

###### Acknowledgements.

This work was supported by the Deutsches Zentrum für Luft- und Raumfahrt (DLR) as SOHO/CELIAS 50 OC 2104. We further thank the science teams of ACE/SWEPAM, ACE/MAG, as well as ACE/SWICS for providing the respective level 2 and level 1 data products. The sunspot data are taken from the World Data Center SILSO, Royal Observatory of Belgium, Brussels.
2309.02575
In Situ Soil Property Estimation for Autonomous Earthmoving Using Physics-Infused Neural Networks
A novel, learning-based method for in situ estimation of soil properties using a physics-infused neural network (PINN) is presented. The network is trained to produce estimates of soil cohesion, angle of internal friction, soil-tool friction, soil failure angle, and residual depth of cut, which are then passed through an earthmoving model based on the fundamental equation of earthmoving (FEE) to produce an estimated force. The network ingests a short history of kinematic observations along with past control commands and predicts interaction forces accurately, with an average error of less than 2 kN, 13% of the measured force. To validate the approach, an earthmoving simulation of a bladed vehicle is developed using Vortex Studio, enabling comparison of the estimated parameters to pseudo-ground-truth values, which is challenging in real-world experiments. The proposed approach is shown to enable accurate estimation of interaction forces and produces meaningful parameter estimates even when the model and the environmental physics deviate substantially.
W. Jacob Wagner, Ahmet Soylemezoglu, Dustin Nottage, Katherine Driggs-Campbell
2023-09-05T20:52:16Z
http://arxiv.org/abs/2309.02575v1
In Situ Soil Property Estimation for Autonomous Earthmoving Using Physics-Infused Neural Networks

###### Abstract

A novel, learning-based method for in situ estimation of soil properties using a physics-infused neural network (PINN) is presented. The network is trained to produce estimates of soil cohesion, angle of internal friction, soil-tool friction, soil failure angle, and residual depth of cut, which are then passed through an earthmoving model based on the fundamental equation of earthmoving (FEE) to produce an estimated force. The network ingests a short history of kinematic observations along with past control commands and predicts interaction forces accurately, with an average error of less than 2 kN, 13% of the measured force. To validate the approach, an earthmoving simulation of a bladed vehicle is developed using Vortex Studio, enabling comparison of the estimated parameters to pseudo-ground-truth values, which is challenging in real-world experiments. The proposed approach is shown to enable accurate estimation of interaction forces and produces meaningful parameter estimates even when the model and the environmental physics deviate substantially.

Keywords: Soil Property Estimation, Fundamental Equation of Earthmoving, Physics-Infused Neural Networks, Autonomous Earthmoving

## 1 Introduction

Autonomous earthmoving is important for enhancing the safety, productivity, and efficiency of construction and mining operations, especially in hazardous and remote environments where human operators face significant risks. One major challenge in designing these systems is enabling the equipment to operate efficiently and robustly across a wide variety of soils. Soils are complex heterogeneous granular materials whose properties vary significantly, not just between disparate locations, but also locally. As soil strength properties vary, the force required to shear the soil changes as well, which has a large effect on the earthmoving process. For example, in low-strength soils a bulldozer can fill its blade in a short distance by making a deep cut without the vehicle stalling or tracks slipping. In contrast, cut depth is limited for higher-strength soils because force limits of the machine are exceeded by the higher required shear forces that increase with depth.

### 1.1 Autonomous Earthmoving

Early approaches to autonomous excavation are posed as trajectory control problems where a desired trajectory is generated, typically based on some heuristic such as ensuring the swept volume is equal to the capacity of the bucket, and tracked using a position controller (Bradley and Seward, 1998). However, these kinematic trajectory control approaches fail when the generated trajectory is infeasible due to force limits of the machine. The simplest approach to address this problem is to assume the soil is very high-strength and to conservatively select trajectories that ensure that the machine does not stall and tracks do not slip during execution of the motion. Unfortunately, this is very inefficient in lower-strength soils and limits the practicality of autonomous earthmoving. To address these issues, alternative control strategies have been proposed that enable adaptation to changing soil parameters. In bulldozers, blade control technologies that adjust blade position to ensure the machine is exerting a force near the optimal load without producing track slip are commercially available (Hayashi et al., 2013; Jackson, 2017). Others have proposed impedance control (Ha et al., 2000)
or power maximization (Sotiropoulos and Asada, 2019) as a way to balance the position control objective with force limitations. Instead of following a kinematic trajectory, a prototypical force-torque trajectory can be tracked, which produces kinematic trajectories that vary depending on soil composition (Jud et al., 2017). More recently, reinforcement learning methods have been used to train control policies that achieve greater fill factors in excavation and scooping tasks (Azulay and Shapiro, 2021; Backman et al., 2021; Egli et al., 2022). However, local adaptation to soil conditions does not ensure efficient completion of a global terrain shaping task, implying that adaptation should be considered at the planning level as well. For an excavation task, iterative learning control has been used to adjust the desired trajectory between dig cycles to generate more feasible paths that result in increased bucket fill (Maeda and Rye, 2012). Another approach is to estimate soil properties in situ and use these estimated properties to inform planning and control (Singh, 1995a). At the planning level, these estimated properties can be used in combination with a model of earthmoving to ensure generated trajectories are feasible and efficient (Singh, 1995b). To improve tracking of the trajectory, the interaction forces can be predicted and used with impedance control (Tan, 2005).

### 1.2 The Fundamental Equation of Earthmoving

The first step in estimating soil properties is to assume a model, as the properties are tautologically tied to a soil model. In most soil property identification work for autonomous earthmoving, the Mohr-Coulomb model of soil shear strength is assumed, which relates the shear strength of the soil, \(\tau\), to the applied normal stress, \(\sigma_{n}\), via

\[\tau=c+\sigma_{n}\tan(\phi) \tag{1}\]

where the parameters \(c\) and \(\phi\) are the soil cohesion and angle of internal friction respectively. Autonomous earthmoving requires a model of how the soil properties affect this process. Using the method of trial wedges, where soil is assumed to fail along a plane producing a wedge (McKyes 1989; Reece 1964), an equation can be derived that describes the applied force \(F\) required to shear soil with a flat blade or bucket moving horizontally through the soil, as depicted in Fig. 1. The remaining forces acting on this soil wedge include the force of the loose soil accumulated on the blade, which is referred to as surcharge or \(Q\), the weight of the soil wedge \(W\), the frictional component of the soil shear force combined with the normal force \(R\), the soil cohesive force, and the soil-tool adhesive force. The forces are assumed to be in equilibrium, i.e. neither the soil nor the blade is accelerating, and are summed along the \(\bar{x}\) and \(\bar{z}\) directions to arrive at the fundamental equation of earthmoving (FEE):

\[F=f_{FEE}(\Theta)=\gamma d^{2}wN_{\gamma}+cdwN_{c}+QN_{Q}+c_{a}dwN_{a} \tag{2}\]

where the coefficients are given by

\[N_{\gamma}=\frac{[\cot(\beta)+\cot(\rho)]\sin(\alpha+\phi+\beta)}{2\sin(\eta)},\quad N_{c}=\frac{\cos(\phi)}{\sin(\beta)\sin(\eta)}\]
\[N_{Q}=\frac{\sin(\alpha+\phi+\beta)}{\sin(\eta)},\quad N_{a}=\frac{-\cos(\rho+\phi+\beta)}{\sin(\rho)\sin(\eta)}\]
(3-6)

where \(\eta=\delta+\rho+\phi+\beta\) is defined for notational convenience. This particular formulation of the FEE was developed by Holz et al. (2013) to enable consideration of inclined surfaces.
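As a concrete reading of Eqs. 2-6, the following is a minimal NumPy sketch; the function name, argument layout, and example values are our own, and the coefficient forms are those reconstructed above (all angles in radians).

```python
import numpy as np

def fee_force(phi, c, delta, c_a, gamma, rho, alpha, beta, w, d, Q):
    """Soil failure force from the fundamental equation of earthmoving
    (Eqs. 2-6). Angles in radians, stresses in Pa, lengths in m, Q in N."""
    eta = delta + rho + phi + beta
    s_eta = np.sin(eta)
    N_gamma = ((1 / np.tan(beta) + 1 / np.tan(rho))
               * np.sin(alpha + phi + beta) / (2 * s_eta))
    N_c = np.cos(phi) / (np.sin(beta) * s_eta)
    N_Q = np.sin(alpha + phi + beta) / s_eta
    N_a = -np.cos(rho + phi + beta) / (np.sin(rho) * s_eta)
    return (gamma * d**2 * w * N_gamma + c * d * w * N_c
            + Q * N_Q + c_a * d * w * N_a)

# Example: loam-like soil, level ground, 0.1 m cut with a 3.164 m blade.
F = fee_force(phi=np.deg2rad(30), c=2e3, delta=np.deg2rad(10), c_a=200.0,
              gamma=18e3, rho=np.deg2rad(80), alpha=0.0,
              beta=np.deg2rad(25), w=3.164, d=0.1, Q=0.0)
print(F)
```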
Initially, the equation has three unknown variables \((F,R,\beta)\), but applying the equilibrium conditions reduces this down to the single unknown soil failure angle \(\beta\). Typically, it is assumed that the optimal soil failure angle should be chosen such that

\[\beta^{*}=\arg\min_{\beta}N_{\gamma} \tag{7}\]

In some cases, it is possible to derive a closed-form expression for \(\beta^{*}\), but often numerical methods are relied upon. The interaction force can be broken into its \(\bar{x}\bar{z}\) subcomponents by

\[\mathbf{F}=[F_{x},F_{z}]=\left[F\cos(90^{\circ}-\rho-\delta+\alpha),\ F\sin(90^{\circ}-\rho-\delta+\alpha)\right] \tag{8}\]

During bulldozing, the blade may move vertically in addition to horizontally as the desired cut depth is adjusted. This causes a change in the relative movement between the soil wedge and the blade. In the case that the wedge is moving down with respect to the blade, the soil-tool adhesive and frictional forces change direction. In order to account for these effects, Holz et al. (2013) make the following modification

\[\delta^{\prime}=\tanh(-C_{1}(\overline{n_{b}}\cdot\overline{v}))\,\delta,\quad c_{a}^{\prime}=\tanh(-C_{1}(\overline{n_{b}}\cdot\overline{v}))\,c_{a}\]
(9-10)

where \(C_{1}\) is a positive constant, \(\overline{n_{b}}\) is the unit vector pointing upwards along the blade, and \(\overline{v}\) is the velocity vector of the blade represented in the \(\bar{x}\bar{z}\) coordinate system. The values \(\delta^{\prime}\) and \(c_{a}^{\prime}\) are used in place of \(\delta\) and \(c_{a}\) in equations 2-6.

### 1.3 Soil Property Estimation

There has been considerable attention on developing methods for estimating and predicting vehicle traversability for the purposes of autonomous off-road navigation (Borges et al., 2022). Of particular interest for this work is the characterization of soil properties from on-board vehicle sensing, including vision and proprioception, sometimes referred to as inverse terramechanics (IT) estimation. In situ soil property estimation using these methods has been studied within the field of terramechanics and is often motivated by the need for autonomous navigation of planetary rovers (Lopez-Arreguin et al., 2021). This problem is formulated as the estimation of parameters of a soil-wheel interaction model and is often solved using optimization-based approaches (Iagnemma et al., 2004), although filtering methods have been developed (Dallas et al., 2020) and learning-based methods have been used (Lopez-Arreguin and Montenegro, 2021). For autonomous earthmoving tasks, a similar IT approach can be taken. However, as the vehicle interacts more forcefully with the terrain, different models must be assumed. Typically, a subset of the model parameters is assumed to be known, and the remaining parameters are found by fitting the model to observed interaction forces (Singh 1995a; Luengo et al. 1998; Tan et al. 2005; Althoefer et al. 2009). It is often difficult to determine the accuracy of the estimated parameters directly, but Tan et al. (2005) report predictions of the soil failure force to within 10%-30%.

### 1.4 Contributions

The estimation of soil properties is a key challenge for autonomous earthmoving operations, such as bulldozing. Existing IT methods have mainly focused on the soil-tire interaction for vehicle navigation applications and, while motivating, these methods do not directly apply to earthmoving tasks due to different underlying physics.
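Since a closed form for Eq. 7 is not always available, a coarse grid search over \(\beta\) is often adequate; the sketch below also implements the velocity-dependent scaling of Eqs. 9-10. The helper names and the value of \(C_{1}\) are illustrative assumptions.

```python
import numpy as np

def n_gamma(beta, phi, delta, rho, alpha):
    # Soil wedge weight coefficient (Eq. 3); vectorized over beta.
    eta = delta + rho + phi + beta
    return ((1 / np.tan(beta) + 1 / np.tan(rho))
            * np.sin(alpha + phi + beta) / (2 * np.sin(eta)))

def optimal_beta(phi, delta, rho, alpha, n=2000):
    # Grid search for the failure angle minimizing N_gamma (Eq. 7).
    betas = np.linspace(np.deg2rad(1), np.deg2rad(89), n)
    return betas[np.argmin(n_gamma(betas, phi, delta, rho, alpha))]

def velocity_scaled(delta, c_a, n_b, v, C1=10.0):
    # Eqs. 9-10: smoothly flip the soil-tool friction and adhesion when the
    # wedge moves down relative to the blade; n_b points up the blade face.
    s = np.tanh(-C1 * np.dot(n_b, v))
    return s * delta, s * c_a

beta_star = optimal_beta(np.deg2rad(30), np.deg2rad(10), np.deg2rad(80), 0.0)
print(np.rad2deg(beta_star))
```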
The relatively limited prior work in IT methods for construction tasks leverages optimization approaches to find parameters of the earthmoving model given observations of the interaction force. In this work, we propose a novel IT method based on physics-infused neural networks (PINNs) that can estimate soil properties in situ from kinematic and control observations of a bulldozer blade.

Figure 1: FEE soil wedge geometry and force diagram. This is a cross-sectional view depicting a blade, shown in gray, moving in the \(\bar{v}\) direction through the soil at a depth of cut \(d\) and failing the soil along the purple line at an angle \(\beta\) with respect to the surface. The soil surface, shown in green, is inclined at the angle \(\alpha\) from the horizon and the flat blade is at an angle \(\rho\) with respect to the soil surface. The passive failure case is assumed where the soil wedge moves up with respect to the blade and forward with respect to the un-sheared soil. The cohesive forces resist this movement and are drawn accordingly. (Holz et al. 2013)

The main contributions of this work are:

* Development of a PINN-based in situ soil-property estimation method that incorporates physical laws and constraints into the learning process without requiring knowledge of true soil properties
* Evaluation of the accuracy and robustness of the proposed method using simulation data with ground-truth soil properties
* Introduction of a physical inconsistency loss function that enables enforcement of minimization or maximization of certain physical quantities
* Demonstration of a novel technique to estimate the blade-soil interaction forces directly from kinematic and control data

## 2 Methods

The creation of a simulation to study the in situ soil-property estimation problem is first discussed, and the PINN approach to soil property estimation is then developed.

### 2.1 Simulation

In this work, the proposed soil property estimation method is evaluated to determine how capable the system is at extracting characteristics of the soil. Real-world evaluation is a challenging task, as evaluation of the full system would require significant instrumentation of a bulldozer, time-consuming real-world experimentation across various soil types and levels of compaction, and careful setup (e.g., compaction, moisture content) and characterization of these soils to establish a soil property ground truth to enable validation of the approach. Instead, using a simulation as an analog for experiments on a real bulldozer is proposed. To perform these experiments, a simulator that exhibits similar behavior to a real system is required, but faithful representation of the dynamics of a specific piece of earthmoving equipment for a specific soil is not strictly necessary.

#### 2.1.1 Vortex Studio

Vortex Studio1 is a simulation software which enables modelling of multi-body dynamics and supports accurate real-time soil-tool interaction physics. In Vortex, the soil is modeled using a hybrid heightfield-particle representation. An FEE-based model is used to determine the reaction force of the soil prior to shearing. As the relative density \(I_{d}\) of the soil increases, \(\phi\) and \(c\) are varied to increase the resistance of the soil to shearing due to stronger particle interlocking, and \(\gamma\) is varied to adjust the weight of the soil wedge and surcharge (Holz, 2009).
Footnote 1: [https://www.cm-labs.com/en/vortex-studio/](https://www.cm-labs.com/en/vortex-studio/)

When the force exerted by the tool on the soil reaches this threshold, the portion of the heightfield in contact with the blade is converted into particles whose behavior is governed by a discrete element model (DEM) simulation, which more accurately characterizes the behavior of the disturbed soil. As particles come to rest, they are reintegrated back into the heightfield representation. The initial relative density is defined as a fixed value but may vary throughout the simulation as the soil is compacted by the tool or as sheared particles are merged back into the heightfield (Holz, 2009; Holz et al., 2013). The remainder of the soil properties remain constant for a given soil type and do not vary within a simulation episode. Haeri et al. (2020) showed that Vortex Studio is capable of predicting the soil-tool interaction forces for a lunar simulant material within 20%-30% of the measured force obtained from real-world experimentation. They introduce some modifications to the original Vortex model, including a term that reduces the contribution of the surcharge by a factor of 10 in Equation 2. Additionally, they find that selecting \(\beta\) such that the top surface of the soil wedge matches the extent of the surcharge pile better matches the experimental data as compared to minimization of \(N_{\gamma}\) as in Equation 7. One benefit of using a simulation to study soil-property estimation is that it can provide ground-truth knowledge of the soil parameters for validation of a given approach. However, in Vortex, this is not straightforward, as the simulation is not a purely physics-based model, but rather a combination of physics and heuristics that have been added to improve the realism, stability, and performance of the system. In addition to the modifications discussed by Haeri et al. (2020), these heuristics include various adjustments to the FEE force calculation, such as limiting the force based on the submerged surface area, the addition of penetration forces, scaling of the FEE force, and the inclusion of additional frictional, elastic, and damping contact forces between the heightmap and tool. These heuristics are not directly derived from the soil properties, but rather must be tuned to match experimental data or may be modified to incorporate operator feedback, improving the realism of the simulation. Vortex provides four default soil configurations (Clay, Loam, Sand, and Gravel), but further tuning is recommended for specific applications. However, it is not obvious how to configure these parameters to achieve realistic performance for different soil types and blade geometries. Despite these limitations, Vortex meets the needs for a system that exhibits realistic earthmoving phenomena and is sufficiently complex to act as an analog for experiments on a real machine.

#### 2.1.2 Simple Bladed Vehicle Model

A simplified simulation of a bladed earthmoving vehicle is developed (see Figure 2) and consists of two dynamic components: a chassis and a blade. The chassis is modeled as a rectangular prism (3 m, 2 m, 2 m) with a mass of 5,000 kg. The blade is flat, 3.164 m wide, 0.660 m tall, and 0.001 m thick2, with a mass of 400 kg.

Figure 2: Rendering of the simple bladed vehicle simulation. The chassis is shown in green and the flat blade collision geometry is shown in blue. The soil particles generated by the blade failing the soil constitute the surcharge.
Footnote 2: Originally, the blade was made to have a more reasonable thickness, but initial experimentation revealed problems with the vehicle being able to penetrate the soil surface even with large forces for denser soil configurations. The authors believe that this is an artifact of the way contact detection is used to compute the soil-blade angle \(\rho\) internal to Vortex. The hypothesis is that the system computes the angle using the bottom surface of the blade instead of the leading edge, causing the simulator to produce unrealistically large reaction forces. Reducing the blade thickness alleviated most of these problems.

Vortex constraints, which enable restricting the motion between two or more parts, are used to control the motion of the chassis and the blade. Motorized constraints enable control of a part's velocity and exert a force proportional to the velocity error; the inverse of the constraint's loss parameter specifies this proportionality constant. Locked constraints enable control of a part's position and exert a viscoelastic force on the part equivalent to a spring-damper system (CM Labs Simulations, 2016). A motorized constraint is used to control the forward velocity of the chassis and a locked constraint is used to control its vertical position. The lateral position and orientation of the chassis are locked, limiting the motion of the vehicle to the \(\bar{x}\bar{z}\) plane. The blade is attached to the chassis body using a locked constraint, enabling control of the vertical position relative to the chassis. To emulate limits on a real machine, the relative position of the blade with respect to the chassis is limited to enable the bottom edge of the blade to reach 0.3 m below and 1.0 m above the surface when the chassis is on level ground. The soil height is measured at four locations underneath the chassis and the average of these heights is used to command the chassis vertical position to emulate a simple vehicle suspension. Additionally, the vertical and horizontal constraint forces are limited to 30 kN and 20 kN respectively to emulate the tractive and penetrative force limits of a bladed earthmoving machine. On a bulldozer, hydraulic pressures in the vehicle hydrostatic drive circuit and in the cylinders controlling the blade position can be measured to derive estimates of the tractive force given knowledge of the machine's dynamics (Yamamoto et al., 1997). The simulation developed in this effort has a simpler drive system; therefore, the simulator chassis-blade constraint forces are used to measure the interaction force instead. Significant effort was given to tuning the loss, stiffness, and damping coefficients of the constraints to yield a stable simulation with minimal oscillations in the constraint forces and part positions. However, oscillations on the order of 5 kN and 10 kN in the x and z forces respectively are commonly observed in the experiments (see Fig. 3). These oscillations arise from the interaction of the constraints used to control the vehicle and the constraints imposed by the soil-tool interaction model. In particular, when a portion of the heightfield is converted into particles, the blade momentarily loses contact with the heightfield, resulting in a significant disturbance in the force applied to the blade. This amounts to a step input to the control constraints and can lead to transient violations of the force limits, as Vortex constraints are essentially dynamical systems.
During simulation, at a rate of 60 Hz, observations are collected of the blade \(x\) and \(z\) positions and the chassis \(z\) position, \(\mathbf{p}=[p_{x}^{b},p_{z}^{b},p_{z}^{c}]\), the blade \(z\) velocity and the chassis \(x\) and \(z\) velocities, \(\mathbf{v}=[v_{z}^{b},v_{x}^{c},v_{z}^{c}]\), and the \(x\) and \(z\) cutting forces, \(\mathbf{F}=[f_{x}^{b},f_{z}^{b}]\). Additionally, the commanded blade relative and absolute \(z\) positions and the commanded chassis \(x\) velocity, \(\mathbf{u}=[u_{z_{r}}^{b},u_{z_{a}}^{b},u_{x}^{c}]\), are collected. At each timestep \(t\), the observations and actions are concatenated into the observation vector \(\mathbf{o}_{t}=[\mathbf{p}_{t},\mathbf{v}_{t},\mathbf{u}_{t}]\), which is consumed by the PINN. Pseudo-ground-truth soil properties are collected from the simulation as \(\mathbf{\theta}=[\phi,c,\delta,c_{a},\gamma,\rho,\alpha,w,d,Q,\bar{v}]\), where \(Q\) is obtained using the Vortex soil mass sensor plugin. The simulation does not provide access to the soil failure angle \(\beta\), but for notational simplicity it will be included in \(\mathbf{\theta}\) for the remainder of the text.

#### 2.1.3 Data Collection Controller

To ensure clean data is collected to train an effective model, a simple data collection controller was designed. In the case that the force required to shear the soil is larger than the machine limits, the blade becomes stalled, and the soil is not sheared. Therefore, an anti-stall blade controller is developed which enables automatic data collection. For a given desired forward velocity \(\bar{v}_{x}\) and target absolute cut depth \(\bar{d}_{z}\), the controller adjusts the blade relative position command \(u_{z_{r}}\) to ensure that the velocity tracking error \(e_{v_{x}}\) does not drop below a target-velocity-dependent threshold \(\bar{e}_{v_{\min}}\), which is proportional to \(\bar{v}_{x}\). This is accomplished using a gain-scheduled integral controller which reduces the desired depth of cut by a depth offset \(\Delta\bar{d}_{z}^{k}\) at each timestep \(k\); this quantity is limited to enable raising of the blade only a small amount above the surface. The controller is defined as follows

\[\Delta\bar{d}_{z}^{k}=K_{\Delta d}\left(e_{v_{x}}^{k}-\bar{e}_{v_{\min}}\right)+\Delta\bar{d}_{z}^{k-1}\quad\in\left[0,\bar{d}_{z}+\Delta\bar{d}_{z_{\max}}\right] \tag{11}\]
\[u_{z_{r}}=-(\bar{d}_{z}-\Delta\bar{d}_{z}^{k}) \tag{12}\]

The scheduled depth offset gain term, \(K_{\Delta d}=K_{v}/\bar{v}_{x}\), modulates the depth offset integral gain \(K_{v}\) to enable improved forward velocity tracking for low desired velocities \(\bar{v}_{x}\).

Figure 3: Measured forces and cut depth for a typical episode within the \(D_{\text{default}}\) dataset. This episode was recorded for a loam soil with \(I_{d}=60\%\) and illustrates the large-amplitude noise in force measurements observed particularly for denser soils. Also, note how the surcharge grows with time, indicating the accumulation of soil on the blade from the cutting operation.

Overall, the effect of this controller is that the vehicle maintains forward motion with some low-frequency oscillation in the depth of cut and forward velocity around the targets. As the soil becomes harder to shear and vehicle force limits are reached, the desired depth of cut becomes harder to track, but stalling is prevented, ensuring the soil failure condition assumed by the FEE is met.
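One step of the anti-stall logic of Eqs. 11-12 can be written compactly; the gains and the threshold fraction below are illustrative placeholders rather than the tuned values used for data collection.

```python
def anti_stall_step(e_v, d_target, d_offset_prev, v_target,
                    K_v=0.01, thresh_frac=0.1, d_raise_max=0.05):
    # One step of the gain-scheduled integral anti-stall controller
    # (Eqs. 11-12); units are m and m/s, and all gains are illustrative.
    e_v_min = thresh_frac * v_target      # velocity-dependent error threshold
    K = K_v / v_target                    # scheduled integral gain
    d_offset = K * (e_v - e_v_min) + d_offset_prev
    # Clamp: only reduce the cut, raising at most slightly above the surface.
    d_offset = min(max(d_offset, 0.0), d_target + d_raise_max)
    u_z_r = -(d_target - d_offset)        # blade relative position command
    return u_z_r, d_offset
```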
### 2.2 Physics-Infused Neural Network (PINN)

Broadly, the approach taken in this effort is to train a neural network to produce estimates of the soil parameters, which are then fed through the FEE model to produce an estimated force. The network ingests a \(T\)-step history of observations and control commands \(\mathbf{o}_{[t:t+T]}\) along with the known FEE parameters \(\mathbf{\theta}_{\mathbf{k}}=[c_{a},\gamma,\rho,\alpha,w,d,Q,\bar{v}]\), and outputs an estimate of the unknown parameters \(\widehat{\mathbf{\theta}}_{\mathbf{u}}=[\hat{\phi},\hat{c},\hat{\delta},\hat{\beta},\widehat{\Delta d}]\) and the estimated interaction force \(\widehat{\mathbf{F}}\). It is important to include control actions in addition to kinematic information, as tracking error can be thought of as a measure of the interaction force. For example, Bradley and Seward (1998) use the position tracking error to achieve "software force feedback" that can be used to determine when the bucket comes into contact with the ground. By properly constraining the network, the hypothesis is that the model will converge to estimating reasonable soil parameters because this representation is the most parsimonious representation of the data. In other words, the model should learn to estimate meaningful soil properties, as they explain the observations in the simplest form.

#### 2.2.1 Network Architecture

While it may be possible to estimate FEE parameters from observations and commands at a single timestep, this is difficult given the noise present. Therefore, observations and actions across \(T=60\) timesteps (1 sec.) are incorporated, and a temporal convolution is leveraged to compress the \(T\times 9\) dimensional observation \(\mathbf{o}_{[t:t+T]}\) into an \(l\times N\) vector, where \(l\) and \(N\) are model hyperparameters. This vector is passed through a stacked transformer-style encoder network with multi-headed attention, modeled after the network presented by Zerveas et al. (2021), producing another \(l\times N\) vector. This vector is then combined with the known FEE parameters \(\mathbf{\theta}_{\mathbf{k}}\) and passed through a dense network with 2 hidden layers of 20 neurons each that is referred to as the integration network. The role of this network is to incorporate the knowledge of the existing soil parameters with the latent state produced by the transformer from the observation history. This enables estimation of the unknown soil parameters. Without this network, the system cannot learn to account for variation in \(\mathbf{\theta}_{\mathbf{k}}\) by changing \(\widehat{\mathbf{\theta}}_{\mathbf{u}}\). A linear output layer produces the estimated unknown FEE parameters \(\widehat{\mathbf{\theta}}_{\mathbf{u}}\) and the residual force \(\mathbf{F}^{r}\). The FEE parameters are combined, \(\widehat{\mathbf{\theta}}=[\mathbf{\theta}_{\mathbf{k}},\widehat{\mathbf{\theta}}_{\mathbf{u}}]\), clipped to remain within the limits defined in Table 1, and passed to the FEE network to produce the FEE reaction force \(\widehat{\mathbf{F}}^{\text{FEE}}=f_{FEE}(\widehat{\mathbf{\theta}})\). The residual and FEE reaction forces are then summed to produce the estimated interaction force \(\widehat{\mathbf{F}}=\widehat{\mathbf{F}}^{\text{FEE}}+\mathbf{F}^{r}\). See Fig. 4 for a visual depiction of this architecture.
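A minimal PyTorch sketch of this pipeline is given below. The temporal convolution, transformer encoder, and integration network follow the description above, but the kernel size, number of layers, attention heads, and latent sizes are illustrative assumptions; the differentiable FEE layer and the parameter clipping are omitted for brevity.

```python
import torch
import torch.nn as nn

class SoilPINN(nn.Module):
    """Sketch: temporal conv -> transformer encoder -> integration network.
    Sizes are illustrative, not the paper's tuned hyperparameters."""
    def __init__(self, obs_dim=9, n_known=8, n_unknown=5, N=32, l=8, T=60):
        super().__init__()
        # Temporal convolution compressing the (T x obs_dim) history into
        # an (l x N) latent sequence (here kernel = stride = T // l).
        self.conv = nn.Conv1d(obs_dim, N, kernel_size=T // l, stride=T // l)
        layer = nn.TransformerEncoderLayer(d_model=N, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Integration network: latent state + known FEE parameters theta_k.
        self.integrate = nn.Sequential(
            nn.Linear(l * N + n_known, 20), nn.ReLU(),
            nn.Linear(20, 20), nn.ReLU(),
            nn.Linear(20, n_unknown + 2))  # unknown params + residual force (x, z)

    def forward(self, obs, theta_k):
        # obs: (batch, T, obs_dim); theta_k: (batch, n_known), normalized.
        z = self.conv(obs.transpose(1, 2)).transpose(1, 2)   # (batch, l, N)
        z = self.encoder(z).flatten(1)                       # (batch, l * N)
        out = self.integrate(torch.cat([z, theta_k], dim=-1))
        theta_u, F_r = out[:, :-2], out[:, -2:]
        # theta_u would be clipped to the Table 1 limits and passed through a
        # differentiable FEE layer; F_r is then added to the FEE force.
        return theta_u, F_r
```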
The residual depth of cut, which augments the depth of cut via \(d^{\prime}=d+\Delta d\), was added to the model after initial experimentation revealed significant sensitivity of the FEE model to the depth of cut; major improvements in reconstructing the observed force entirely from the ground-truth parameters were shown to be possible with hand-tuning of \(\Delta d\). Similarly, the residual interaction force \(\mathbf{F}^{r}\) was added to the network to avoid non-FEE components of the simulation (e.g. noise in the measured force and non-zero measured force when \(d<0\)) corrupting the parameter estimates.

#### 2.2.2 Network Training

To allow for stable training, observations \(\mathbf{o}\) are normalized to have zero mean and unit variance, where the normalizing means and variances are computed from the training set observations. Values for the residual force \(\mathbf{F}^{r}\) are normalized to 10% of the value of \(\mathbf{F}\). Both known and unknown FEE parameters are scaled using min-max normalization with the ranges provided in Table 1. The ranges for \(\phi\), \(c\), \(\delta\), and \(c_{a}\) are obtained from multiple sources listing geotechnical parameters (Geotechdata.info 2023; MATLAB 2023; Fine Software 2023), while the remaining parameters are scaled using knowledge of machine limits and the geometry of the FEE. During tuning of the model, some liberty was taken to adjust the parameter ranges for \(\beta\) and \(c\) from the initially assumed values in order to avoid the model converging to a poor local optimum. The loss function for the network is composed of five types of losses: a mean absolute error (MAE) force prediction loss (\(\mathcal{L}_{MAE}^{F}\)), an MAE regularization of the residual force (\(\mathcal{L}_{MAE}^{F^{r}}\)), a ReLU-based regularization of the FEE parameters (\(\mathcal{L}_{\text{ReLU}}^{\chi}\)), a mean square error (MSE) regularization of the residual depth of cut (\(\mathcal{L}_{MSE}^{\Delta d}\)), and an MAE regularization of the gradient \(\partial N_{\gamma}/\partial\beta\) (\(\mathcal{L}_{MAE}^{\partial N_{\gamma}/\partial\beta}\)).
It is defined as follows

\[\mathcal{L}(\mathbf{F},\widehat{\mathbf{F}},\mathbf{F}^{r},\widehat{\mathbf{\theta}})=\lambda_{F}\mathcal{L}_{MAE}^{F}(\mathbf{F},\widehat{\mathbf{F}},\mathbf{w}_{xz})+\lambda_{F^{r}}\mathcal{L}_{MAE}^{F^{r}}(\mathbf{F}^{r},\mathbf{0},\mathbf{w}_{xz})+\sum_{\chi\in\mathbf{X}}\lambda_{\chi}\mathcal{L}_{\text{ReLU}}^{\chi}(\chi,l^{\chi})+\lambda_{\Delta d}\mathcal{L}_{MSE}^{\Delta d}(\widehat{\Delta d})+\lambda_{\partial N_{\gamma}/\partial\beta}\mathcal{L}_{MAE}^{\partial N_{\gamma}/\partial\beta}\left(\frac{\partial N_{\gamma}}{\partial\beta},\mathbf{0},\mathbf{w}_{\partial N_{\gamma}/\partial\beta}\right) \tag{13}\]

\[\mathcal{L}_{MAE}(\hat{\mathbf{y}},\mathbf{y},\mathbf{w})=\sum_{i=0}^{|\mathbf{y}|}\mathbf{w}_{i}\,|\hat{\mathbf{y}}_{i}-\mathbf{y}_{i}| \tag{14}\]

\[\mathcal{L}_{MSE}^{\Delta d}(\widehat{\Delta d})=\left(\widehat{\Delta d}\right)^{2} \tag{15}\]

\[\mathcal{L}_{\text{ReLU}}^{\chi}(\chi,l^{\chi})=\max(0,\,l_{L}^{\chi}-\chi)+\max(0,\,\chi-l_{U}^{\chi}) \tag{16}\]

The FEE parameters regularized to remain within the limits listed in Table 1 are \(\mathbf{X}=[\phi,c,\delta,\beta,\eta,\zeta,\Delta d,d^{\prime}]\), and the weights are \(\lambda_{\chi}=1\) for all of these parameters except for \(\lambda_{\eta},\lambda_{\beta}=10\). These values are chosen to be higher because, when violated, these limits lead to singularities in Eq. 2, causing \(\widehat{\mathbf{F}}\) to become unstable and derailing training of the model. The other loss weights are tuned to achieve low force estimation error and low residual contributions. Their values are: \(\lambda_{F}=0.5\), \(\lambda_{F^{r}}=3e{-}2\), \(\lambda_{\Delta d}=5e{-}3\), \(\lambda_{\partial N_{\gamma}/\partial\beta}=0.2\).

\begin{table} \begin{tabular}{l c c c} Parameter & Norm. Range & Limits \([l_{L},l_{U}]\) & Units \\ \hline \(\phi\) & \([17,45]\) & \([0,90]\) & deg. \\ \(c\) & \([0,10e3]\) & \([0,\cdot]\) & Pa \\ \(\delta\) & \([11,35]\) & \([0,90]\) & deg. \\ \(c_{a}\) & \([0,10e3]\) & \([0,\cdot]\) & Pa \\ \(\gamma\) & \([14e3,22e3]\) & \([0,\cdot]\) & N/m\({}^{3}\) \\ \(\rho\) & \([2,179]\) & \([2,178]\) & deg. \\ \(\alpha\) & \([-10,10]\) & \([-30,30]\) & deg. \\ \(w\) & \([0,3.164]\) & \([0,3.164]\) & m \\ \(d^{\prime}\) & \([0,0.3]\) & \([0,0.660]\) & m \\ \(\Delta d\) & \([-5e-2,5e-2]\) & \([-5e-2,5e-2]\) & m \\ \(Q\) & \([0,10e3]\) & \([0,\cdot]\) & N \\ \(v_{x}\) & \([0,1]\) & \([-2,2]\) & m/s \\ \(v_{z}\) & \([-1,1]\) & \([-2,2]\) & m/s \\ \(\beta\) & \([11.5,34.5]\) & \([11.5,34.5]\) & deg. \\ \(\partial N_{\gamma}/\partial\beta\) & \([-10,10]\) & \([\cdot,\cdot]\) & unitless \\ \(\eta\) & \([2,178]\) & \([2,178]\) & deg. \\ \(\zeta=\phi-\delta\) & \([11,35]\) & \([0,\cdot]\) & deg. \\ \hline \end{tabular} \end{table} Table 1: FEE parameter normalization ranges and limits.

Although not explicitly expressed in Eq. 13, averaging is performed over all samples in the batch for \(\mathcal{L}^{\chi}_{\text{ReLU}}\), \(\mathcal{L}^{\partial N_{\gamma}/\partial\beta}_{MAE}\), and \(\mathcal{L}^{\Delta d}_{MSE}\). The \(\mathcal{L}^{\partial N_{\gamma}/\partial\beta}_{MAE}\) loss is only averaged over samples for which the limit constraints for \(\eta\) and \(\beta\) are not violated, to avoid the gradient step causing large changes to the network parameters when near a singularity of the FEE model.
Similarly, \(\mathcal{L}^{F^{r}}_{MAE}\) is only averaged over samples for which the residual-augmented depth of cut \(d^{\prime}>0\), to avoid penalizing the residual force for compensating when the FEE model produces a zero output force. All of the losses are computed using the normalized values, which helps to expedite hyperparameter tuning. To encourage the model to estimate \(\beta\) so that it is consistent with the assumptions of the FEE model, it is limited to lie within an interval in which the minimum is expected to exist, using the ReLU-based physical inconsistency regularization of Equation 16. Additionally, \(\partial N_{\gamma}/\partial\beta\) is regularized to be small. It is important to note that only gradients of this loss \(\mathcal{L}^{\partial N_{\gamma}/\partial\beta}_{MAE}\) with respect to \(\beta\) are allowed to flow, i.e. gradients of \(\partial N_{\gamma}/\partial\beta\) with respect to the other model parameters are zero. Combining these losses encourages the network to converge to the critical point \(\beta^{*}\) that is the global optimum of Eq. 7 in a _soft_ fashion, allowing for deviations from this optimum when significant force prediction accuracy improvements can be achieved. This approach is novel and expands the types of physical inconsistency loss functions that can be accounted for when developing a PINN beyond equality and inequality constraints, as outlined by Karpatne et al. (2022).

## 3 Results & Discussion

To evaluate the approach proposed in this effort, the PINN-derived parameters are compared with the ground-truth parameters used in the simulation.

### 3.1 Experimental Setup

Two sets of experiments are performed using two different datasets collected from different configurations of the Vortex soil model. The first dataset, \(D_{\text{FEE}}\), is collected by configuring the soil model such that the heuristics discussed in Section 2.1.1 are disabled and only the reaction forces generated by the FEE model are exerted on the blade. Additionally, particle generation is disabled, i.e. \(Q=0\), to ensure that the forces generated by particles do not confound the parameter estimation. The goal of making these changes is to collect a dataset for which accurate ground-truth knowledge of the soil parameters is available. This enables evaluation of the accuracy of the parameter estimation approach in an ideal scenario, where the physics that governs the system matches the model that is assumed. The second dataset, \(D_{\text{default}}\), uses the default configuration for the soils, leaving the generation of particles and all of the heuristics enabled. In this case, there exists a significant mismatch between the physics governing the system and the assumed model. As discussed previously, while the simulation does not necessarily reflect real-world physics to some quantifiable accuracy, the system exhibits similar phenomena as observed in real-world earthmoving operations. The \(D_{\text{default}}\) dataset can therefore be treated as an analog to experimentation on a real-world system. For all experiments, the ground is assumed to be flat and level, meaning that \(\alpha=0\) and \(d=-p_{z}^{b}\). The mass of the particles in front of the blade is obtained using a soil mass sensor, yielding the surcharge force \(Q\). A simple windowed average is used to filter the velocity measurement, \(\bar{\mathbf{v}}_{t}=\text{avg}([v_{x}^{c},v_{z}^{b}]_{t-10:t})\), to reduce oscillations in the velocity-based scaling of the soil-tool parameters.
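The ReLU-based limit penalty of Eq. 16 and the restriction that only \(\beta\) receives gradients from the \(\partial N_{\gamma}/\partial\beta\) term translate naturally into autograd primitives. The following is an illustrative PyTorch sketch, with detach implementing the gradient masking and a masked mean implementing the conditional averaging described above; all inputs are assumed to be tensors of shape (batch,).

```python
import torch

def relu_limit_loss(x, lo, hi):
    # Eq. 16: physical inconsistency penalty for leaving the interval [lo, hi].
    return (torch.relu(lo - x) + torch.relu(x - hi)).mean()

def dn_gamma_dbeta_loss(beta, phi, delta, rho, alpha):
    # MAE regularization of dN_gamma/dbeta, the soft version of Eq. 7.
    # Detaching the other parameters lets gradients flow into beta only.
    phi, delta, rho, alpha = (t.detach() for t in (phi, delta, rho, alpha))
    eta = delta + rho + phi + beta
    n_gamma = ((1.0 / torch.tan(beta) + 1.0 / torch.tan(rho))
               * torch.sin(alpha + phi + beta) / (2.0 * torch.sin(eta)))
    (grad_beta,) = torch.autograd.grad(n_gamma.sum(), beta, create_graph=True)
    return grad_beta.abs().mean()

def masked_mean_abs(x, mask):
    # Conditional averaging, e.g. the residual-force regularization applied
    # only to samples with d' > 0; mask is a float tensor of 0s and 1s.
    return (x.abs().sum(dim=-1) * mask).sum() / mask.sum().clamp(min=1)
```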
The blade angle is fixed for all experiments at \(\rho=80^{\circ}\), as is the tool width at \(w=3.164\) m. The initial values of \(\phi\), \(c\), and \(\gamma\) are obtained at the beginning of an episode based on the initial relative density \(I_{d}\), as obtaining the changing parameter values while running the simulation is not supported. This is not problematic in practice as the system only moves forward and does not interact with any previously disturbed terrain. The adhesion is fixed for all soil types, \(c_{a}=200\) Pa, as is the soil-tool friction angle, \(\delta=10^{\circ}\). Since the blade angle is fixed at \(\rho=80^{\circ}\), the FEE predicts zero force in the \(z\) direction for larger forward velocities. To enable observation of some non-zero forces in the \(z\) direction for \(D_{\text{FEE}}\), the default soil-tool friction angle is overridden to \(\delta=15^{\circ}\). All of these values are collected to form the pseudo-ground-truth FEE parameters \(\mathbf{\theta}\). The \(D_{\text{FEE}}\) dataset consists of 4,320 sequences collected from 432 separate 10-second-long episodes using the default simulation update frequency of 60 Hz. Data is collected across all 4 default soil types, and the relative density of the soil is varied across episodes, \(I_{d}\in[0,100]\), as are the target velocity and cut depth commands, \(\bar{v}_{x}\in[0.3,1.0]\) m/s, \(\bar{d}_{z}\in[0.05,0.3]\) m. The \(D_{\text{default}}\) dataset contains 5,760 sequences from 576 episodes and is collected similarly. The PINN is implemented in PyTorch, training is performed on a laptop equipped with a GPU, and the network loss in Eq. 13 is minimized using the Adam optimizer.

### 3.2 Results

The model does well predicting the interaction force, achieving an average magnitude error \(|\mathbf{F}-\widehat{\mathbf{F}}|\) of approximately 730 N, or 11% of the measured force, on \(D_{\text{FEE}}\) and 1940 N, 13% of measured, on \(D_{\text{default}}\). To put this in perspective, normalizing the error by machine limits reveals 2% error on \(D_{\text{FEE}}\) and 5% error on \(D_{\text{default}}\). This is compelling and shows that the network is doing a good job of learning the dynamics of both the simplified and complex systems. To evaluate the ability of the model to identify soil parameters, parameter estimates \(\widehat{\mathbf{\theta}}\) and estimated forces \(\widehat{\mathbf{F}}^{\text{FEE}}\) are compared to their pseudo-ground-truth counterparts as a function of relative density. Parameter estimates from each episode within the dataset where the relative density \(I_{d}\) and soil type are the same are aggregated:

\[\widehat{\mathbf{\theta}}_{i}^{j}=\left[\widehat{\theta}^{j}\in D\ \text{s.t.}\ I_{d}=i\right]\ \text{for}\ j\in\widehat{\mathbf{\theta}} \tag{17}\]

The mean and variance of the aggregated parameters are then computed at each \(I_{d}\) for each soil type and visually depicted in the upper half of Figs. 5 and 6. This enables direct comparison between individual parameter estimates and their corresponding pseudo-ground-truth.

Figure 4: FEE parameter estimation PINN diagram.

Figure 5: \(D_{\text{FEE}}\) dataset. (top) FEE parameters estimated by the model (blue) and the pseudo-ground-truth values obtained from Vortex (green). (bottom) Soil failure force predicted by the FEE component of the model.
Units for all angles are in radians.

Figure 6: \(D_{\rm default}\) dataset. (top) FEE parameters estimated by the model (blue) and the pseudo-ground-truth values obtained from Vortex (green). (bottom) Soil failure force predicted by the FEE component of the model. Units for all angles are in radians.

For \(D_{\text{FEE}}\), we can see that the network latches on to the true parameter values. It struggles a bit in correctly estimating \(\phi\), but the error is relatively small for smaller \(I_{d}\). However, as \(I_{d}\) grows, the magnitude of the residual force \(\mathbf{F}^{r}\) increases correspondingly, hinting that this magnitude may be a predictor of degrading parameter estimation performance. This is intuitive because the larger \(\mathbf{F}^{r}\) becomes, the less of the force is accounted for by the FEE model. Additionally, as \(I_{d}\) increases, the average depth of cut \(d\) across the dataset decreases. This is because the vehicle experiences higher resistance to movement and increased stalling, which forces the blade controller to raise the blade to account for poor tracking of the forward velocity command. This indicates that the assumption of soil failure made by the FEE is increasingly being violated as the soil becomes more compact. When this assumption does not hold, the model becomes invalid, and it becomes impossible to correctly estimate parameters from the observations alone. This analysis is useful but does not reveal the effect that errors in the parameters have on the estimated failure force. Small parameter error is a sufficient but not necessary condition to achieve low error of the failure force. This is because the FEE model, Eqs. 2-10, is underconstrained, meaning that there is a multitude of parameter combinations that will produce the same failure force. This presents a significant challenge, particularly as the number of unknown model parameters increases. In large part, this is why the loss function, Eqs. 13-16, contains so many regularization components. These components all constrain the estimation to encourage the network to learn the true underlying parameter values. To complement the direct parameter error analysis and provide insight regarding the effect these parameters have on the force, the means of the parameters at each relative density are used to produce predictions of a hypothetical soil failure force for which the depth of cut is fixed and the surcharge is zero. This means of visualizing the results provides an alternative for evaluating the quality of the parameter estimation and illustrates the effect of the ambiguity present in the FEE model (see Figs. 5 and 6). In general, this approach enables reasoning about the effect the estimated parameters have on the expected soil failure force, which is the relevant information soil-property-aware autonomous earthmoving planners require. For the \(D_{\text{FEE}}\) dataset, the predicted force obtained from the averaged estimated parameters, \(\widehat{\mathbf{F}}^{\text{FEE}}=f_{FEE}(\widehat{\mathbf{\theta}}\,|\,Q=0,d=0.1)\), agrees well with the predicted force obtained from the ground-truth parameters, \(\mathbf{F}^{\text{FEE}}=f_{FEE}(\mathbf{\theta}\,|\,Q=0,d=0.1)\). This indicates that while the parameter estimates do not exactly match the ground truth, the resulting predicted force is still accurate.
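The aggregation of Eq. 17 and the hypothetical-force comparison can be sketched as follows, assuming flat arrays of per-sequence estimates and an FEE callable such as the fee_force helper sketched in Sect. 1.2; all names are illustrative.

```python
import numpy as np

def aggregate_by_density(theta_hat, I_d, soil_type):
    # Eq. 17: group per-sequence parameter estimates by (soil type, I_d)
    # and return the per-group mean and variance of each parameter.
    stats = {}
    for s in np.unique(soil_type):
        for i in np.unique(I_d[soil_type == s]):
            sel = (soil_type == s) & (I_d == i)
            stats[(s, i)] = (theta_hat[sel].mean(axis=0),
                             theta_hat[sel].var(axis=0))
    return stats

def hypothetical_force(mean_params, fee):
    # Evaluate a hypothetical soil failure force from the aggregated means,
    # with the surcharge zeroed and the cut depth fixed (Q=0, d=0.1 m), as
    # in the comparison shown in Figs. 5 and 6.
    phi, c, delta, c_a, gamma, rho, alpha, beta = mean_params
    return fee(phi=phi, c=c, delta=delta, c_a=c_a, gamma=gamma, rho=rho,
               alpha=alpha, beta=beta, w=3.164, d=0.1, Q=0.0)
```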
This is important from an autonomy perspective: if the parameter estimates are consistently biased, meaning that the system estimates the same parameter values for the same soil conditions, and the system is able to accurately predict the cutting force, then the model can be used to map soil conditions during earthmoving and this map can be used to plan efficiently. Moving to the real-world analog dataset \(D_{\text{default}}\), the analysis becomes more challenging. The parameter estimates are noisier than in the simplified dataset, and largely only one of the parameters matches the pseudo-ground-truth values. This is expected, due to the increased complexity of the full Vortex model, which results in interaction forces only partially derived from the FEE and the pseudo-ground-truth parameters. Broadly, the forces observed in this dataset are larger than those in \(D_{\text{FEE}}\); in particular, \(\mathbf{F}_{x}\) is much larger. That being said, for the non-cohesive soils, \(\phi\) is estimated well for lower compaction. For the cohesive soils, the general trend of both \(\phi\) and \(c\) increasing with relative density is observed. This indicates that the network has uncovered a relationship between the observational history \(\mathbf{o}_{[t:t+\tau]}\) and these properties that can be explained by \(I_{d}\). Essentially, this means that the model has implicitly learned to estimate \(I_{d}\). The predicted force plots reveal a more compelling result. As the compaction increases, a gradual rise in the predicted force is observed, mirroring the smaller changes of the pseudo-ground-truth-derived forces, until \(I_{d}>60\), where the force rises rapidly and exceeds the machine tractive force limits. Note that the mean depth of cut \(d\) also drops to nearly zero at \(I_{d}=80\), particularly for the non-cohesive soil, again indicating that the soil failure assumption is not being met.

## 4 Nomenclature

\begin{tabular}{l l l} \(c\) & Soil cohesion & [Pa] \\ \(c_{a}\) & Soil-tool adhesion & [Pa] \\ \(d\) & Depth of cut & [m] \\ \(F\) & Cutting force & [N] \\ N\({}_{a}\) & FEE soil-tool adhesion coeff. & [unitless] \\ N\({}_{c}\) & FEE soil cohesion coeff. & [unitless] \\ N\({}_{\gamma}\) & FEE soil wedge weight coeff. & [unitless] \\ N\({}_{q}\) & FEE surcharge coeff. & [unitless] \\ \(Q\) & Surcharge force & [N] \\ \(R\) & Soil friction and normal force & [N] \\ \(\hat{x}\) & Longitudinal direction & [unit vector] \\ \(W\) & Soil wedge weight & [N] \\ \(w\) & Tool width & [m] \\ \(\hat{z}\) & Vertical direction & [unit vector] \\ \(\alpha\) & Soil surface inclination & [rad] \\ \(\beta\) & Soil failure angle & [rad] \\ \(\gamma\) & Soil moist unit weight & [N/m\({}^{3}\)] \\ \(\delta\) & Soil-tool friction angle & [rad] \\ \(\rho\) & Soil-tool angle & [rad] \\ \(\sigma_{n}\) & Normal force & [N] \\ \(\tau\) & Shear force & [N] \\ \(\phi\) & Soil internal friction angle & [rad] \\ \end{tabular}

## 5 Conclusions

In this work, a novel physics-infused neural network approach to soil property estimation is introduced, a simulation is developed to enable obtaining ground-truth values of soil parameters, and the system is evaluated across two datasets representing environments with significantly different physics.
The approach is shown to work well when the assumed model aligns with the environment's physics, and to produce meaningful parameter estimates when the model and the environmental physics deviate substantially. For both datasets, the parameter estimates enable prediction of interaction forces that are informative for the planning of autonomous earthmoving operations. While this work provides a novel framework for soil property estimation, a number of future improvements are being considered, including: estimation of parameter uncertainty; development of a soil-property mapping system; extending the model to account for changes in the surface profile by varying \(\alpha\); estimation of additional parameters of the FEE model (\(\gamma\), \(Q\), and \(c_{a}\)); and modifications to the network to enable supervision of the model with only knowledge of the kinematic history, without interaction force measurements, to enable more rapid integration on existing equipment.

## 6 Acknowledgements

The authors are grateful to Xavier Trudeau-Morin, Marek Teichmann, and Laszlo Kovacs from CM Labs for their support on the Vortex soil simulation and for Xavier's assistance in obtaining force measurements in the simulator.
2302.10203
Nonlinear response of Silicon Photonics microresonators for reservoir computing neural network
Nowadays, Information Photonics is extensively studied and sees applications in many fields. The interest in this breakthrough technology is mainly stimulated by the possibility of achieving real-time data processing for high-bandwidth applications, still implemented through small-footprint devices that would allow for breaking the limit imposed by Moore's law. One potential breakthrough implementation of information photonics is via integrated photonic circuits. Within this approach, the most suitable computational scheme is achieved by integrated photonic neural networks. In this chapter, we provide a review of one possible way to implement a neural network by using silicon photonics. Specifically, we review the work we performed at the Nanoscience Laboratory of the University of Trento. We present methodologies, results, and future challenges about a delayed complex perceptron for fast data processing, a microring resonator exploiting nonlinear dynamics for a reservoir computing approach, and a microring resonator with the addition of a feedback delay loop for time series processing.
Emiliano Staffoli, Davide Bazzanella, Stefano Biasi, Giovanni Donati, Mattia Mancinelli, Paolo Bettotti, Lorenzo Pavesi
2023-02-20T11:00:59Z
http://arxiv.org/abs/2302.10203v1
# Nonlinear response of Silicon Photonics microresonators for reservoir computing neural network ###### Abstract Nowadays, Information Photonics is extensively studied and sees applications in many fields. The interest in this breakthrough technology is mainly stimulated by the possibility of achieving real-time data processing for high-bandwidth applications, still implemented through small-footprint devices that would allow for breaking the limit imposed by Moore's law. One potential breakthrough implementation of information photonics is via integrated photonic circuits. Within this approach, the most suitable computational scheme is achieved by integrated photonic neural networks. In this chapter, we provide a review of one possible way to implement a neural network by using silicon photonics. Specifically, we review the work we performed at the Nanoscience Laboratory of the University of Trento. We present methodologies, results, and future challenges about a delayed complex perceptron for fast data processing, a microring resonator exploiting nonlinear dynamics for a reservoir computing approach, and a microring resonator with the addition of a feedback delay loop for time series processing. ## I Introduction The interest in Artificial Neural Networks (ANNs) has considerably increased in recent years due to their versatility, which allows for dealing with a huge class of problems [1]. Nowadays, ANNs are mostly implemented on electronic circuits, in particular on von Neumann architectures in their different specializations, such as the general-purpose CPU (Central Processing Unit), the massively parallel GPU (Graphical Processing Unit), or the specialized integrated circuits used to accelerate specific tasks, such as the TPU (Tensor Processing Unit) [2; 3; 4]. Very-large-scale ANN models have been developed which outperform humans at given tasks [5; 6], at the expense of long training times and huge power consumption [7; 8; 9]. Other intrinsic limits of electronic ANNs are related, for example, to the susceptibility of electrical signals to interference, the difficulty of handling a large number of floating-point operations, and a low parallel-computing efficiency [10; 11; 12]. A possible solution to these limitations is provided by Photonic Neural Networks (PNNs), which enable high-speed, parallel transmission (Wavelength Division Multiplexing, WDM) and low power dissipation [10; 11]. PNNs have the same overall architecture as ANNs; namely, they are made of several interconnected neurons, where each neuron receives multiple inputs and feeds multiple other neurons (Fig. 1). The received inputs are weighted, combined, and processed by each neuron, which, through a nonlinear activation function, feeds its interconnected neurons. When optics comes into play, some of these operations are very easy to implement. For example, large matrix multiplications become very fast and energy-efficient [13], giving PNNs a great advantage compared to electronic ANNs. These advantages led to the development of photonic accelerators for electronic ANNs [14]. On the other hand, in integrated PNNs, the inter- and intra-neuron connections are easily established through waveguides in an on-chip optical switching network [15; 10], where the propagating optical signal can be modified using tunable waveguide elements (e.g. phase shifters or Mach-Zehnder interferometers, MZIs [16]). The complexity of the operations achievable by a PNN depends on multiple factors.
These include the network topology, namely how the single processing units (neurons or nodes) are interconnected. Artificial neurons are combined in a huge variety of networks, from basic structures (e.g. the single perceptron shown in Fig. 1, right [17]) to very complex ones [18].

Figure 1: Left: Sketch of a biological neuron (black dot). Different signals from nearby neurons (colored) are collected by the neuronal dendrites through interconnecting synapses. The neuronal body integrates the signals and, if above a threshold, produces a voltage spike which is sent via the neuronal axons (black arrow) to the post-synaptic neurons. Right: Sketch of an artificial neuron where the output \(y\) is produced by the formula given in the inset from the different inputs \(i\) (image courtesy of Giamarco Zanardi).

Potentially, any topological organization of neurons can be achieved, the optimal structure depending on the specific task to be solved and on the amount and format of data to be analyzed [19]. Problems that require low latency and fast reconfigurability fit PNNs of the feed-forward type, where the data flows only in one direction (from the input neurons to the output neurons) through different layers of nodes [20]. On the contrary, high-complexity tasks where long short-term memory plays a key role require recurrence, where the information flows back and forth between the neurons [21]. A model which is easily implemented in PNNs is photonic reservoir computing, where random fixed connections between the nodes are established, with training only performed in the output layer [22]. Finally, the readout strategy is another key element, since it provides direct access to the information elaborated by the network itself. The readout can be optical or electrical, and its choice again depends on the topology of the network and the specific requirements of the task [23]. In ANNs, the nonlinear activation function is implemented within the node. This aspect of the neuron plays a fundamental role in the learning process since it determines the output of the node. Here, PNNs provide many possible choices [24; 25]. For example, a suitable activation function is the square modulus \(|\cdot|^{2}\): this is easily implemented by direct detection of the optical signal (e.g. with a photodiode) [26; 27]. The intensity (and thus the power) \(I(t)\) associated with an optical signal is indeed directly proportional to the square modulus of the electric field \(E(t)\), i.e. \(I(t)\propto|E(t)|^{2}\) [28]. Another activation function can be provided by a Semiconductor Optical Amplifier (SOA) integrated within the neuron [29]. An SOA behaves linearly for low input optical power but reaches saturation at higher power values [30]. Its power-gain curve is thus strongly nonlinear, making SOAs suitable for acting as nonlinear nodes in a PNN. Here, we are mostly interested in discussing nonlinear nodes based on microring resonators [31; 32]. Since a microring's optical transfer function depends on the stored optical power [33], microrings can be used to implement different kinds of nonlinear transfer functions [34]. The optical domain offers an optimal testing ground for ANNs that process information using complex-valued parameters and variables [35]. The light propagation in waveguides and its nonlinear interaction with various media are naturally described in the complex domain, where both the phase and the amplitude of the electric field associated with the optical signal have to be taken into consideration.
Complex numbers are thus intrinsically involved in optical systems, which turn out to be ideal for the implementation of complex-valued neural networks [35; 36; 37]. Even though each complex number can be represented by means of two real numbers, a complex-valued ANN must not be considered equivalent to a real ANN with twice the number of parameters [36; 37]. Indeed, when it comes to complex multiplication, the rotatory dynamics of complex numbers enter into play, leading to a reduction of the degrees of freedom compared to the case of completely independent parameters. This opens further opportunities for PNNs, which easily manipulate complex numbers. This possibility, associated with properly chosen nonlinear nodes and an effective readout strategy, means that even a simple hardware implementation of a PNN manages to perform demanding tasks which would require a much higher cost if faced with traditional ANNs [38]. In this chapter, we discuss a few simple PNNs implemented on a silicon photonics platform [32] that demonstrate the basic mechanisms of silicon-based PNNs. Silicon photonics is particularly interesting since its easy integration with electronics allows for on-chip training of the network and for volume fabrication of the PNNs [16]. In section II, a simple optical neuron is discussed where different delayed versions of the input optical signal are made to interfere before the output port [38; 39]. In section III, the simple microring resonator is used to demonstrate complex nonlinear dynamics [33]. In section IV, a Reservoir Computing network implemented by a single microring resonator within a time delay scheme is used for complex classification tasks [40]. In section V, linear and nonlinear memory tasks are used to evaluate the memory capacity of a microring resonator [41]. Section VI shows the possibility of extending the microring resonator fading memory by using an external optical feedback loop [42]. Finally, section VII concludes the chapter with a summary and perspectives. ## II A simple photonic network: the delayed complex perceptron The simplest perceptron consists of an algorithm that associates to a given input vector \(\vec{x}\) and weight vector \(\vec{w}\) the output of an activation function \(f(\vec{x}\cdot\vec{w})\), according to [17] \[f(\vec{x}\cdot\vec{w})=\begin{cases}1&\quad\text{if}\quad\vec{x}\cdot\vec{w}>0;\\ 0&\quad\text{otherwise};\end{cases} \tag{1}\] where \(\cdot\) is the inner product of the Euclidean space. It can be considered a binary classifier without memory. Still, it can be used to describe the working principle of the individual nodes in complex topologies. A modified version of this simple algorithm has been implemented in an optical circuit to realize what we named a Delayed Complex Perceptron (DCP) [38]. Its structure is illustrated in Fig. 2. The input optical signal (\(u(t)\)) is split into 4 channels (\(u_{k}(t),k=1,\ldots,4\)), where the waveguides are spiralized so that a delay multiple of \(\Delta_{t}=50\) ps is induced with respect to an unperturbed copy traveling in the top channel (\(u_{1}(t)\)). The phase of the signal in each channel is then modified through independent phase shifters, actuated by micro-heaters (in yellow in Fig. 2). Therefore, the relative phase \(\phi_{k}\) of each signal can be controlled by the driving current in the phase shifters and acts as a weight \(w_{k}=e^{i\phi_{k}}\).
Indeed, these currents represent the tunable parameters of the network during the learning phase. Finally, the modified signals are made to interfere (summed) in the output combiner. The result of this interference provides the output optical signal. The nonlinear node of the PNN is here represented by a fast photodiode that detects the output signal intensity \(y(t)\) by taking the square modulus of the output signal. The ultimate purpose of the DCP is to combine the information of the input signal at the present time and at fixed delays in the past. The role of the phases is to modulate the interference between the signals in the different channels, thus selecting the proper combination of information from each time instant to perform the assigned task. The algorithm which describes the DCP is \[y(t)=f(\vec{x}\cdot\vec{w})=\left|\sum_{k}u_{k}(t)e^{i\phi_{k}}\right|^{2}. \tag{2}\] The DCP is fabricated on a Silicon-on-Insulator (SOI) wafer, the silicon layer being 220 nm thick. The waveguides are 450 nm wide, which allows for single-mode operation in the TE (transverse electric) polarization fixed by the input grating coupler. The classifier nature of the perceptron can be declined into different tasks. For example, the DCP has proven effective in solving logical tasks involving two bits separated in time (e.g. the XOR performed between the current bit and the first one in the past) [38]. In another application, the DCP is used as a compensator for distortions induced by chromatic dispersion on optical signals propagating in fiber [39]. In fact, chromatic dispersion causes intersymbol interference between adjacent bits, with the consequent loss of information at the receiver [43]. Nowadays, the recovery process can be accomplished by Dispersion Compensating Fibers (DCFs) [44], which however are non-tunable devices and introduce latency [45]. An alternative is represented by Bragg gratings [46], but, since they work on the entire WDM aggregate, they cannot perfectly compensate all the channels. The DCP constitutes an alternative to these methods, with the advantage of being reconfigurable and providing a drastic latency reduction. Moreover, compared to other technologies that implement coherent receivers and digital signal processing (DSP) for equalization, the DCP has the advantage of operating the corrections directly on the optical sequence, avoiding the complexity and the energy demand of DSP (which represents a significant fraction of the power budget). The experimental setup of Fig. 3 has been used to assess the compensation capabilities of the DCP. In the transmission stage, a laser source operating in the third telecom window is modulated as a 10 Gbps Non-Return-to-Zero (NRZ) signal, based on a Pseudo-Random Binary Sequence (PRBS) of order 10 and period \(2^{10}\) bits. A fiber-optic coupler sends part of the optical power to a fast photodiode (RX1), while the other fraction proceeds into an optical fiber with a length of 125 km. The distorted signal is then coupled to the DCP for optical processing and is finally detected by a fast photodiode (RX2). During training, for each tested configuration of the injected currents, the input and output curves are acquired and compared in order to determine the expected level (0 or 1) associated with each output bit. The loss function provided to a Particle Swarm Optimizer [47] for training aims to create the maximum relative separation between the distributions associated with the two classes, namely 0 and 1 (i.e. the maximum contrast between levels). This leads to a reduced Bit Error Rate (BER).
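For readers who want to experiment with Eq. 2 numerically, a minimal NumPy sketch of the DCP response follows; sampling step, splitter losses, and detector bandwidth are simplified away, the names are illustrative, and the four phases play the role of the trainable weights.

```python
import numpy as np

def dcp_output(u, phis, dt=1e-12, delay=50e-12):
    """Eq. 2: u is the complex input field envelope sampled every dt seconds;
    channel k is delayed by k*50 ps, weighted by exp(i*phi_k), summed, and
    square-law detected (the photodiode |.|^2 nonlinearity)."""
    n_delay = int(round(delay / dt))
    field = np.zeros(len(u), dtype=complex)
    for k, phi in enumerate(phis):
        u_k = np.roll(u, k * n_delay)          # delayed copy u_k(t)
        u_k[:k * n_delay] = 0.0                # zero-pad instead of wrapping
        field += u_k * np.exp(1j * phi)        # complex weight w_k = e^{i phi_k}
    return np.abs(field) ** 2                  # detected intensity y(t)
```

In a training loop, a Particle Swarm Optimizer would propose the heater currents (and hence the phases `phis`) and score them with the contrast-based loss described above.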
The compensating effect of the perceptron for a span of 125 km is summarized in Fig. 4. The intersymbol interference generated by chromatic dispersion closes the gap between the distributions in Fig. 4(b). The fact that the two distributions are close to each other, or even overlapping, leads to an increased probability of incorrect identification of the bit value when a threshold is applied, and thus the BER increases. The trained DCP manages to combine information coming from the present, the first, and the second past bits to split the distributions. In this way, a BER reduction is achieved. The performances of the DCP are comparable with alternative approaches such as those in [48; 49], which exploit photonic reservoir computing. The benefits of the DCP regard the simpler architecture, minimized latency, and full on-chip signal processing with a consequent optical readout strategy (except for the training phase).

Figure 2: Sketch of the integrated photonic circuit which performs as a complex perceptron. Gratings are used to couple the light in and out of the optical circuit. The red spot on the left shows the input signal. A \(1\times 4\) splitter distributes the input signal to four delay lines, realized by spirals (each spiral adds a delay of \(\Delta_{t}=50\) ps). Then, thermal phase modulators (yellow segments) allow controlling the relative phases (\(w_{k}=e^{i\phi_{k}}\)) of the signals (\(u_{k}(t)\), blue lineshapes). These are then summed by combiners and the resulting signal (red lineshape) is output via a grating. A fast detector provides the output nonlinear node and yields the output signal \(y(t)\). Adapted from [38].

Moreover, the response of the system is neither determined by random connections between nodes, nor is recurrence present in the network, whose action is thus simpler to simulate. In the future, the recovery of nonlinear effects mediated by self-phase modulation will also be attempted, foreseeing implementations in optical transmission lines. With this perspective, in order to provide the user with a ready-to-use all-optical transceiver, next-generation devices will be provided with an all-optical activation function stage directly in the photonic chip itself. This can be accomplished both by active components, such as integrated SOAs with nonlinear properties, and by passive structures (e.g. integrated microring resonators) working in a nonlinear regime. ## III Nonlinear dynamics in a microring resonator Microring resonators (MRs) are resonant devices characterized by a compact footprint and a wide operation bandwidth [31]. When operating close to the resonant condition, a strong increase of the power density in the MR occurs, inducing a nonlinear response. These features make them versatile photonic structures for different applications, including communications [15], bio-sensing [50], spectroscopy [51], frequency metrology [52], and quantum optics [53]. The temporal dynamics of a MR have to consider many intertwined nonlinear processes. Sufficient light intensity triggers nonlinearities related to the Si third-order susceptibility [54], specifically Two-Photon Absorption (TPA). TPA generates free carriers which thermalize by the emission of phonons and heat up the MR.
The temperature (\(T\)) and the free-carrier (FC) density (\(N\)) are influenced by the optical power (\(P\)) circulating in the MR and, in turn, generate a nonlinear variation of the MR refractive index (\(n(P)\)): \[n(P)=n_{0}+\frac{dn}{dT}\Delta T-\frac{dn}{dN}\Delta N, \tag{3}\] where \(n_{0}\) is the linear refractive index, \(dn/dT\) the thermo-optic (TO) coefficient, and \(dn/dN\) the free-carrier dispersion (FCD) coefficient [54]. Noteworthy, since both coefficients are positive, the TO and FCD phenomena act with opposite signs. The MR resonant condition (\(m\lambda_{res}=2\pi Rn(P)\), where \(m\) is the resonant order and \(R\) the MR's radius) is thus power-dependent, which, in turn, alters the optical power circulating in the MR. In this way, the pattern of inter-dependencies is complete [55]. The dynamics given by the coupling between these processes are described by three coupled differential equations [56], which describe the variation of the internal energy, the FC population, and the temperature (a qualitative toy sketch of such a system is given at the end of this section). The complexity of the dynamics in the MR may generate temporal instability of the transmitted optical signal. Even when the system is fed with a continuous wave (CW) source, under particular conditions one may observe self-pulsing (SP), bi-stability, chaos, excitability, or a nonlinear distortion in the transmitted spectrum [57; 58; 59; 60; 61; 62]. The insurgence of these effects in a MR fabricated on a SOI wafer has been studied in [33]. A MR constituted by a 220 nm \(\times\) 450 nm Si waveguide cross-section, with a 7 \(\mu\)m radius, in an Add-Drop configuration was used. The coupling occurs through a 250 nm gap over a length of 3 \(\mu\)m, corresponding to a coupling coefficient of 0.063. The measured quality factors (Q-factors) amount to \(Q_{i}=1.11(8)\times 10^{5}\) (intrinsic) and \(Q_{L}=6.5(2)\times 10^{3}\) (loaded). The low-power transmission at the Through port is shown in Fig. 5, together with the experimental setup used for the nonlinear regime of operation. The MR has been investigated through scans both in the input optical CW power (\(P\)) and in the laser frequency detuning (\(\Delta\nu\)) with respect to the cold resonance frequency of the MR itself. Figure 6 presents the stability regions as a function of \(P\) and \(\Delta\nu\). As expected, for low input power no combination of (\(P\), \(\Delta\nu\)) exists for which an SP phenomenon is observed, since high input power is necessary (but not sufficient) for triggering this phenomenon.

Figure 4: Results for 125 km fiber compensation. Distributions of expected 0s (red bars) and 1s (green bars) in (a) input, (b) uncompensated output, and (c) compensated output. (d) Time traces of target (black line), uncompensated (red line), and compensated output (green line). Circles represent the reference sample for each bit. Adapted from [39].

Figure 3: Experimental setup. Light is generated and modulated in the transmission stage and then sent into a 125 km optical fiber span. It proceeds then through the DCP for optical processing. Two fast photodiodes monitor the input (RX1) and the output (RX2) signals. The inset shows the actual design of the NN device, where one can observe the cascaded \(1\times 4\) and \(4\times 1\) splitter and combiner, the three spirals, and the four phase shifters (small blue rectangles) connected to the external DC current controller. Adapted from [39].
Panel (b) indicates the frequency of the measured SP extrapolated from the time traces, showing the tendency to generate faster oscillations at high powers in the red-shifted detuning region. By looking at the time traces of panel (c), it is possible to observe how each nonlinear phenomenon contributes to the generation of SP. Each cycle starts with the generation of an FC population induced by TPA, which in turn triggers free-carrier dispersion and a blue shift of the resonant frequency of the ring. All these mechanisms generate a hysteresis response, and thus bistability in the ring. The structure remains in the bistable regime (narrow peak) for a short time, due to the relaxation of free carriers, with a typical time \(\tau_{fc}\). The resonance frequency is lowered, leading to a quasi-constant transmission for a short time (central region of the pulse). Finally, due to the heating of the microring generated by the free-carrier relaxation, the thermo-optic effect becomes predominant over the FCD, which drives the MR out of resonance (red shift). Consequently, the Drop-port transmission decreases. Then the MR cools down, with a typical time \(\tau_{th}\), gradually returns to the initial state, and a new cycle begins. The frequency at which SP occurs depends on both \(P\) and \(\Delta\nu\), ranging from a minimum of \(\sim 400\) kHz to a maximum of \(\sim 1\) MHz. The actual frequency value roughly depends on \(\tau_{fc}+\tau_{th}\), since both relaxation dynamics have to occur to complete a full cycle of oscillation. The observation of sub-MHz self-pulsing suggests values of \(\tau_{fc}\sim 45\) ns and \(\tau_{th}\sim 270\) ns, which are much larger than the typical ones for SOI waveguides [59; 61; 63; 64; 65]. Describing the internal dynamics of the MR is made even more complex by the dependence of these parameters on the instantaneous carrier concentration and the material properties [33]. The commonly used approach to describe the temporal dynamics may often be reductive, and the approximations behind its derivation too simplistic. The canonical three coupled differential equations may need some adjustment depending on the specific framework to which they are applied [33], but in general they capture the physical sense behind the nonlinear processes in a MR. The coupling between nonlinear processes in a MR is difficult to describe; nonetheless, it is one of the key mechanisms behind the versatility of these devices.

Figure 5: (a) Low power transmission spectra of the MR, collected at the Through port. Highlighted in red is the resonance order where the self-pulsing (SP) regime is analyzed. An enlarged view of this resonance is shown in the right panel. Here, the blue region marks the range of laser tuning \(\Delta\nu\) used. (b) The experimental setup implemented to measure the SP of the MR and the carrier lifetime. A CW Tunable Laser Source (TLS) operating in the C-band is polarization stabilized through a Fiber Polarization Controller (FPC) and amplified by an Erbium Doped Fiber Amplifier (EDFA), before being coupled to the MR through a grating coupler. The optical power sent to the Input port of the MR is controlled through a Variable Optical Attenuator (VOA). The optical signal at the output of the Drop port of the MR is collected through another grating coupler and sent to an optical bandpass filter (BPF). Finally, the signal is detected by a 20 GHz bandwidth fast photodiode (PD) connected to a 40 GSa/s oscilloscope. Adapted from [33].

Figure 6: (a) Stability map of the MR in the (\(\Delta\nu\) = laser frequency detuning, \(P\) = input power) plane. Red crosses indicate the points where the MR, after an initial transient, shows stable output. Green circles indicate the points where the MR is self-pulsing (SP). (b) Map of the oscillation frequencies of the self-pulsing regime given in the side color bar. (c) Examples of time traces recorded at the output of the Drop port of the MR. The maximum of the intensity is normalized to one. The labels (1), (2), and (3) refer to the values of (\(P\), \(\Delta\nu\)) associated with each trace, which are indicated in panel (b). Figure from [33].
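The self-pulsing cycle just described can be caricatured with a deliberately simplified toy model (not the full coupled-mode equations of [56; 33]): cavity power generates free carriers (blue shift via FCD), and their relaxation heats the ring (red shift via the TO effect). Every constant below is illustrative, and whether sustained oscillation actually appears depends on the chosen values.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau_fc, tau_th = 45e-9, 270e-9       # lifetimes suggested by the measurements
g_fc, g_th = 1.0, 0.5                # illustrative generation/heating strengths
k_fcd, k_to = 1.0, 1.2               # illustrative FCD (blue) / TO (red) shifts
nu0, P_in, hwhm = 0.0, 2.0, 1.0      # detuning, drive power, linewidth (arb. units)

def cavity_power(N, T):
    shift = -k_fcd * N + k_to * T    # net resonance shift: FCD vs thermo-optic
    return P_in / (1.0 + ((nu0 - shift) / hwhm) ** 2)   # Lorentzian response

def rhs(t, y):
    N, T = y
    P = cavity_power(N, T)
    dN = (-N + g_fc * P**2) / tau_fc   # TPA generation vs FC relaxation
    dT = (-T + g_th * N) / tau_th      # heating by FC relaxation vs cooling
    return [dN, dT]

sol = solve_ivp(rhs, (0.0, 20e-6), [0.0, 0.0], max_step=5e-9)
drop_trace = cavity_power(sol.y[0], sol.y[1])  # proxy for the Drop-port signal
```

The separation of time scales, \(\tau_{fc}\ll\tau_{th}\), is what allows the fast blue-shifting and slow red-shifting mechanisms to chase each other, which is the essence of the SP cycle.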
MRs find wide applications in the field of PNNs too [34], for example as memory units, optical filters, or nonlinear nodes, as will be discussed in the next sections. Their small footprint makes them suitable for integrated solutions. ## IV Silicon microring resonator for reservoir computing Particularly suitable for PNNs is the Reservoir Computing model for ANNs (RC-NN) [66; 22]. We describe here a simple implementation of this ANN model, based on a MR in a pump-and-probe configuration, which uses the concept of virtual nodes to enlarge the complexity of the network [40]. The pump signal is first modulated according to the procedure described in Fig. 7. The RC-NN receives in input a sequence \(\mathbf{X}_{in}=[\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(M)}]\) of \(M\) samples of dimension \(N\). The dimension of each sample is increased to \(N_{v}\) (the number of virtual nodes) by means of a proper connectivity matrix \(\mathbf{W}_{in}\). A scale factor \(\alpha\) and an offset \(u_{0}\) are applied to the so-obtained \(N_{v}\times M\) matrix, resulting in a sequence of operations which can be written as \(U=\alpha(\mathbf{W}_{in}\mathbf{X}_{in}+\mathbf{1}u_{0})\). The \(n\)-th column \(\mathbf{u}^{(n)}\) of the resulting \(N_{v}\times M\) matrix codifies the \(n\)-th input sample \(\mathbf{x}^{(n)}\) and is imprinted onto the pump signal \(u(t)\) in the time interval \([t_{n}=nT,t_{n+1}=(n+1)T)\), where \(T\) is the duration of an input sample. The temporal sequence is obtained by holding each value in \(\mathbf{u}^{(n)}\) fixed for a time \(\Delta\), so that \(T=N_{v}\Delta\) (a minimal sketch of this encoding is given below). The average power \(P_{p}\) of the modulated pump sequence at the input of the MR has to be kept sufficiently high to trigger nonlinearities. The pump is combined with a CW probe signal of power \(P_{pr}\ll P_{p}\), and the resulting signal is coupled to the MR, where the nonlinear dynamics transfer information from the pump to the probe. The temporal sequence \(u(t)\) generates at the Drop port an output probe signal \(u_{pr}(t)\). The response of the reservoir to an input sample \(\mathbf{x}^{(n)}\) is then the sequence \(\mathbf{u}^{(n)}_{pr}=[u_{pr}(t_{n}),\ldots,u_{pr}(t_{n}+N_{v}\Delta)]\), where the samples represent the virtual nodes of the RC network and are acquired simultaneously with the pump signal. In the approximation of small perturbations of the input power \(u(t)\) with respect to a reference value \(\overline{u(t)}\), the output signal \(u_{pr}(t)\) can be written as [40] \[u_{pr}(t)=c_{0}+c_{1}\int_{-\infty}^{t}e^{-\left(\frac{t-\xi}{\tau_{fc}}\right)}u^{2}(\xi)d\xi+c_{2}\int_{-\infty}^{t}e^{-\left(\frac{t-\xi}{\tau_{fc}}\right)}u^{2}(\xi)u_{pr}(\xi)d\xi, \tag{4}\] where \(\tau_{fc}\) is the free carrier lifetime and \(c_{0}\), \(c_{1}\), and \(c_{2}\) are defined in [40].
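A minimal sketch of this masking/encoding step follows; the random uniform mask is one possible choice of \(\mathbf{W}_{in}\), and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_input(X_in, N_v, alpha, u0, delta, dt):
    """X_in has shape (N, M). W_in lifts each sample to N_v virtual-node
    values (U = alpha * (W_in X_in + u0)); each value is then held for
    delta seconds to build the pump modulation u(t), sampled every dt."""
    N, M = X_in.shape
    W_in = rng.uniform(size=(N_v, N))     # random connectivity mask
    U = alpha * (W_in @ X_in + u0)        # (N_v, M): one column per sample
    hold = int(round(delta / dt))         # waveform samples per virtual node
    return np.repeat(U.T.ravel(), hold)   # hold each node value for delta
```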
The first integral term in Eq. 4 describes the intrinsic nonlinear memory of the system, induced by the relaxation time of the free carriers generated through TPA. This fading memory has a duration of the order of \(\sim 3\tau_{fc}\). The second integral term describes a nonlinear coupling between the virtual nodes, which is triggered by the nonlinear free-carrier dynamics in the resonator in response to variations of the pump signal. This yields recurrence and creates connectivity between the virtual nodes in the reservoir. The RC network serves the purpose of projecting the input sequence \(\mathbf{X}_{in}\) to a state matrix \(\mathbf{X}=[\mathbf{u}^{(1)}_{pr},\ldots,\mathbf{u}^{(M)}_{pr}]\) in a higher-dimensional space, in which the observables \(\mathbf{Y}\) (in general an \(M\times Q\)-dimensional matrix) linked to the input sequence are linearly separable. In this new space, the relation between the virtual nodes \(\mathbf{X}\) and the predictions \(\mathbf{Y}\) can be written as \(\mathbf{Y}=\mathbf{W}_{out}\mathbf{X}\). The task is then reduced to finding the output weight matrix \(\mathbf{\tilde{W}}_{out}\) that minimizes the regularized least-squares error, defined as \(\|\mathbf{Y}-\mathbf{\tilde{W}}_{out}\mathbf{X}\|^{2}+\lambda^{2}\|\mathbf{\tilde{W}}_{out}\|^{2}\), where \(\lambda\) is the regularization parameter, determined by a 5-fold cross-validation [67] (a minimal sketch of this readout training is given below). The experimental implementation of the RC network is presented in Fig. 8. A tunable laser generates a CW pump signal, on which the desired pattern is imprinted through an electro-optical modulator. The pump signal is then amplified and combined with a weak CW probe signal generated by another tunable laser. These are then coupled to the Input port of a MR through a grating coupler. The pump and probe signals are detuned by \(\Delta\lambda_{p}\) and \(\Delta\lambda_{pr}\) with respect to two different cold resonant frequencies of the MR (e.g. 1549 nm and 1538 nm).

Figure 7: Process flow of the encoding of the input signal. \(M\) input samples \(\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(M)}\}\) of dimension \(N\) are queued on the columns of a matrix \(\mathbf{X}_{in}\). The dimension of each sample is then increased to \(N_{v}\) using a connectivity matrix \(\mathbf{W}_{in}\). A global offset \(u_{0}\) is applied to \(\mathbf{W}_{in}\mathbf{X}_{in}\) to remove the negative values, and a multiplicative scale factor \(\alpha\) is applied. The resulting column values represent the input pump power (red curve) \(u\) of each sample, which are sequentially injected at times \(t_{n}=n(N_{v}\Delta)\) at the input port of the MR (central inset). Similarly, the values of the probe power \(u_{pr}(t_{n,i})\) at times \(t_{n,i}=n(N_{v}\Delta)+i\Delta\), with \(i=\{1,\ldots,N_{v}\}\), define the virtual nodes at the output of the Drop port of the MR. Figure from [40].

Inside the MR, the magnitude of the free-carrier dynamics varies according to the modulation imprinted on the pump signal, and this dynamics alters the intensity of the probe signal too, due to the power dependence of the MR refractive index (Eq. 3). The output probe signal \(u_{pr}(t)\) at the Drop port is recorded, sampled at the virtual nodes, and digitally processed. A proof of the principle of operation of the RC network, which involves the interplay between the nonlinear transformation of the inputs and the presence of a fading memory, has been given with the 1-bit delayed XOR task.
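The readout training is plain ridge regression and admits a closed form; here is a minimal sketch with states \(\mathbf{X}\) (virtual nodes \(\times\) \(M\)) and targets arranged as a \(Q\times M\) array (an assumption of this sketch), where \(\lambda\) would be selected by the 5-fold cross-validation mentioned above.

```python
import numpy as np

def train_readout(X, Y, lam):
    """Regularized least squares for W_out such that Y ~ W_out X:
    W_out = Y X^T (X X^T + lam^2 I)^(-1)."""
    n = X.shape[0]
    A = X @ X.T + lam**2 * np.eye(n)      # (n, n), symmetric positive definite
    return np.linalg.solve(A, X @ Y.T).T  # solves A W_out^T = X Y^T
```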
Given a binary input sequence \(\mathbf{x}\), at time \(t_{k}\) the RC network has to provide the result of the XOR operation applied to the input bits \(x^{(k)}\) and \(x^{(k-1)}\). For this particular task, the input sequence is constituted by a PRBS sequence of bit rate \(B\), and the connectivity matrix \(\mathbf{W}_{in}=[1,1,1]^{T}\) is adopted. Thus, the only mediation imposed by the mask is to bring the number of virtual nodes to \(N_{v}=3\), without altering the original information contained in the input sequence before this is imprinted on the pump signal. The time spacing between the virtual nodes is then \(\Delta=(BN_{v})^{-1}\). The detunings are set to \(\Delta\lambda_{p}=\Delta\lambda_{pr}=60\) pm and the optical powers to \(P_{p}=3\) dBm and \(P_{pr}=-3\) dBm, respectively. A threshold is applied to the outcome of the predictor \(\mathbf{W}_{out}\mathbf{X}\), and the result is compared with the target to obtain the BER (a minimal sketch is given below). The results are shown in Fig. 9. The traces reported in panel (a) illustrate the response of the MR to a binary input signal, highlighting the incoherent transfer of information between pump and probe signals driven by the nonlinear dynamics of the MR. The BER as a function of the input bitrate \(B\) is reported in panel (b) and is evaluated for the cases in which the state matrix \(\mathbf{X}\) is sampled from the input pump signal (black dots), the output pump (red dots), and the output probe (blue dots). In the first case, high BER values highlight the non-separability of the task, which cannot be solved at any bitrate. The nonlinear dynamics and the memory introduced by the MR determine an improvement in the performance of the RC network, especially at small bitrates. The BER increases again at high modulation frequencies, since the free-carrier dynamics become too slow compared to the pump power variations. Panel (c) reports the BER values for \(B=20\) Mbps as a function of \(P_{p}\), showing that the action of the reservoir becomes effective in BER reduction only when the nonlinear dynamics occur. The second task on which the RC network has been trained consists in Iris flower recognition. The system receives in input the information (codified through a different \(\mathbf{W}_{in}\)) about the length and width of the petals and sepals of a flower, with the objective of assigning it to one of the three possible species. Different configurations of the system are explored, reaching the highest recognition rate of \((99.3\pm 0.2)\%\) for \(N_{v}=50\), \(P_{p}=7\) dBm, \(B=20\) Mbps, and \(\Delta=50\) ns.

Figure 8: Sketch of the experimental setup. TLS = Tunable Laser Source, AWG = Arbitrary Waveform Generator, EO mod. = Electro-Optic modulator, FPC = Fiber Polarization Controller, VOA = Variable Optical Attenuator, PD = Photodiode, EDFA = Erbium Doped Fiber Amplifier, BS = Beam Splitter, GC = Grating Coupler, BPF = Band-Pass Filter, OSC = Oscilloscope. An enlarged view of the device layout and the main logical steps which describe how the information is processed are shown in the bottom part of the figure. The intensity-modulated pump (red) and the CW probe (blue) are injected into the input GC. The incoherent transfer of information from the pump to the probe occurs within the resonator (reservoir), where the different virtual nodes (yellow dots) interact and process the input data. The probe exits from the Drop port, carrying the result of the computation. Virtual nodes are sampled and sent into several linear classifiers, each trained to recognize a specific class. The decision-making process is based on a winner-takes-all scheme. Figure from [40].
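A minimal sketch of the target generation and BER evaluation for this task; the threshold value is illustrative (in practice it is placed between the two output distributions).

```python
import numpy as np

def delayed_xor_targets(bits):
    """1-bit delayed XOR: y_k = x_k XOR x_{k-1} (bits: array of 0/1 ints)."""
    return bits[1:] ^ bits[:-1]

def bit_error_rate(scores, targets, threshold=0.5):
    """Threshold the linear-readout scores and count disagreements."""
    return np.mean((scores > threshold).astype(int) != targets)
```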
Figure 9: (a) Examples of waveforms processed during the XOR task. The pump laser driving the input port of the MR is shown in black, while the pump and probe outputs from the Drop port are respectively shown in red and blue. (b) Bit Error Rate (BER) as a function of the bitrate for the 1-bit delayed XOR task. Black dots use a predictor matrix \(\mathbf{X}\) whose entries are sampled from the input pump power. Red and blue dots use respectively predictors sampled from the pump and the probe traces at the Drop port of the MR. In all three cases, the average pump power is set to 3 dBm. (c) Bit Error Rate as a function of the average pump power for a fixed bitrate of 20 Mbps. The insets show details of the probe waveform at the pump powers -10 dBm and 4 dBm. Figure from [40].

In this configuration, \(P_{p}\) is sufficiently high to also trigger thermal effects in the MR, which enter as a further element in the free-carrier dynamics. The tests performed with the delayed XOR and Iris flowers classification have demonstrated the potential of an RC-NN approach realized with a MR resonator and linear classifiers. A key role in the RC network is played by the memory and nonlinear dynamics of the MR, which increase the separability of the observables with respect to the virtual nodes. In the next section, further tests involving linear and nonlinear logic tasks performed on the RC network will be described. ## V Linear and nonlinear tasks on a microring resonator A MR resonator working in the nonlinear regime serves the double purpose of providing memory to the system (on the typical time scales of FC relaxation and thermal cooling of the ring) and nonlinearities. To isolate these two mechanisms and to study their role in logic operations performed by an RC-NN, we investigated both linear logic tasks, to get insights into the amount of fading memory induced by the FC dynamics, and nonlinear logic tasks, to assess the interplay between memory and activation function [41]. In this case, we used a simple MR to form an RC-NN together with the use of virtual nodes. The experimental setup, shown in Fig. 10, is comparable with that of Fig. 8, except that only the pump source is present (there is no separate probe). Using the same formalism as in the previous section, the input matrix to the system \(\mathbf{X}_{in}\) is represented by a \(1\times M\) binary sequence repeating in time, consisting of a PRBS sequence of order 8 and length \(M=255\). The connectivity matrix \(\mathbf{W}_{in}\) consists of an \(N_{v}\times 1\) matrix of 1s, whose role is simply to increase the dimensionality of the input matrix to \(N_{v}\times M\). The input matrix is then \(\mathbf{I}=\mathbf{W}_{in}\mathbf{X}_{in}\), where no rescaling or offset factors are applied, so that each input virtual node \(I_{i}^{j}\) has the same value (0 or 1) as the corresponding bit \(x_{j}\) in the input sequence. The modulation imprinted on the pump laser corresponds exactly to \(\mathbf{X}_{in}\), with \(N_{v}\) virtual nodes in each bit and a total bit width of \(T=N_{v}\delta\), with \(\delta\) being the time duration of a single input virtual node. The virtual nodes at the output are obtained in the same way as described above, and so is the training procedure. Here the output is collected directly from the pump, without relying on a probe signal.
The tasks on which the RC-NN has been trained are defined starting from both linear (AND, OR) and nonlinear (XOR) logic operations performed between the present bit and the \(n_{1}\)-th bit in the past of the input sequence \(\mathbf{X}_{in}\). The target sequence then consists of a sequence of 0s and 1s, with one single value for each input bit, obtained by applying the selected logic operation directly to the digitized input sequence. The linear classifier receives as input the state matrix \(\mathbf{X}\), populated with the virtual nodes associated with the current and previous bits up to \(n_{2}\) bits in the past (the ridge regression bits, or R-bits), for a total of \(n_{2}\times N_{v}\) virtual nodes. It produces a \(1\times M\) output sequence \(\mathbf{Y}=\mathbf{W}_{out}\mathbf{X}\), from which a digital sequence is obtained by the application of a threshold. A task is thus defined by the logic operation and the parameters \(n_{1}\) and \(n_{2}\); e.g., we refer to the AND operation with \(n_{1}=1\) and \(n_{2}=2\) as the AND 1 with 2 R-bits. The specific tasks on which the network is trained are obtained as variants of these basic operations, varying \(n_{1}\) and \(n_{2}\) as indicated in Fig. 11. A mapping of the system in terms of the state matrix is performed, varying the input bitrate, the average pump power entering the MR, and the detuning with respect to the resonant frequency of the MR. The training procedure is repeated for each of the so-obtained state matrices and also on those obtained by sampling the input sequence directly, providing feedback on the effectiveness of the action of the reservoir. For the construction of each state matrix, the input and output curves are acquired with a fixed sampling rate of 20 GSa/s. The number of samples per bit \(N_{s}\) then varies depending on the selected bitrate \(B\), ranging from 20 Mbps to 4000 Mbps. How the virtual nodes are evaluated depends on the chosen number of virtual nodes \(N_{v}\) in relation to the bitrate. Considering, for example, \(N_{v}=10\) (see the binning sketch below):
* for \(B<2\) Gbps, one has \(N_{s}>N_{v}\); the acquired samples within a single bit are grouped into \(N_{v}\) bins and each group is averaged;
* for \(B=2\) Gbps, one has \(N_{s}=N_{v}\); the virtual nodes simply coincide with the acquired samples in each bit;
* for \(B>2\) Gbps, one has \(N_{s}<N_{v}\); the first \(N_{s}\) virtual nodes coincide with the acquired samples for that bit, while the remaining ones are set to zero.

Figure 10: Diagram of the experimental setup. CWTL: Continuous Wave Tunable Laser, AWG: Arbitrary Waveform Generator, EOM: Electro-Optic Modulator, PD: photodetector, PC: polarization control, VOA: Variable Optical Attenuator, EDFA: Erbium Doped Optical Amplifier, BPF: Band Pass Filter, Pc: Personal computer. Note that the second amplification stage, constituted by VOA2 and EDFA2, keeps the average power at the receiver at a constant value. This limits the variations of the Signal-to-Noise Ratio (SNR) at PD2, since the most significant noise sources at the detector are thermal and shot noise. Adapted from [41].
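A minimal sketch of the virtual-node extraction covering the three cases listed above; the names are illustrative, and \(N_{s}\) follows from the 20 GSa/s acquisition rate and the bitrate \(B\).

```python
import numpy as np

def virtual_nodes_per_bit(samples, N_v):
    """samples: the N_s acquired samples of one bit (N_s = 20 GSa/s / B).
    Returns exactly N_v virtual-node values."""
    N_s = len(samples)
    if N_s > N_v:                                  # B < 2 Gbps: bin and average
        return np.array([chunk.mean() for chunk in np.array_split(samples, N_v)])
    nodes = np.zeros(N_v)
    nodes[:N_s] = samples                          # B >= 2 Gbps: pad with zeros
    return nodes
```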
For each analyzed task, the results are enclosed in a 3-graph figure, where the outcome of the training procedure is assessed by means of the BER. In the first contour plot, each point represents the best value of the BER (\(BER_{out}^{b}\)) obtained with the RC-NN for a specific bitrate and detuning. Red points indicate the configurations in which the statistical limit is reached. The lowest input power that allows achieving \(BER_{out}^{b}\) is presented in the second contour plot. Finally, the third plot shows the comparison between the results of the training procedure applied to the output of the RC-NN and directly to the input sequence (producing \(BER_{in}^{b}\)). The plot presents the ratio \(RB=BER_{in}^{b}/BER_{out}^{b}\), with red dots indicating where the statistical limit is reached in the BER evaluation at the output (crosses) or at the input (empty circles). Two color maps are used: the gray scale describes the configurations where the RC-NN worsens the performance compared to the unprocessed optical sequence, while the contrary applies to the colored regions. Black regions indicate equal performance, namely \(RB=1\). First, we focus on the solution of delayed linear tasks: the MR should provide memory to the system, temporarily storing the information of the input sequence in the nonlinear dynamics of the FCs. The results for the AND 1 with 1 R-bit are presented in Fig. 12 (left). The statistical limit in \(BER_{out}^{b}\) is reached in a vast region of the detuning-bitrate space, up to a bitrate of 500 Mbps. The second contour plot highlights the presence of a region, in correspondence of zero detuning, where the task is solved even for low input power. The third plot shows that lower BER values are reached with the optical processing operated by the MR compared to those obtained by applying the training to the input sequence, with the most evident benefits appearing in the region between 40 Mbps and 50 Mbps. This is the region where the MR nonlinearity provides enough memory to the RC-NN to solve the delayed logical operation. On the other hand, for high bitrates the performances associated with the two treatments tend to be comparable, with a worsening of the performance induced by the MR observed at 4000 Mbps. Indeed, for such a bitrate the statistical limit is reached for multiple detunings by applying the training directly to the unprocessed input sequence. In this extreme case, the source of memory is represented by the non-idealities introduced by the generation and detection stages, due to their limited electronic bandwidth. The nonlinear dynamics of the MR do not provide further memory; on the contrary, they cause distortions in the signal, thus assuming a detrimental role in the training process. This is also highlighted by the darker region present in the power map for high bitrates, indicating that the corresponding \(BER_{out}^{b}\) values are obtained with minimum input power, thus trying to minimize the nonlinear effects induced by the MR. Figure 12 (right) presents the results for the AND 2 with 1 R-bit, for which a higher amount of memory is required. The BER map in the top panel shows a region, in correspondence of a detuning of 20 GHz and a bitrate of 100 Mbps, where the \(BER_{out}^{b}\) values are lower than elsewhere.

Figure 11: Sketch representing the three cases on which we tested the logical operations. According to the notation used in the text, \(b_{j}=x_{j}\), and \(N\) is the length in terms of bits of an input sequence, constituted by \(N/M\) copies of \(\mathbf{X}_{in}\), where \(M=255\) is the length of a single \(\mathbf{X}_{in}\) sequence (PRBS). \(n_{1}\) indicates the distance between the bits on which the logical operation (LO) is performed, and \(n_{2}\) is the number of bits provided to the ridge regression in the training procedure. Note that the flow of bits is such that the past bits \(b_{j-n_{1}}\) are processed by the microresonator before the present bit \(b_{j}\). Figure from [41].
Figure 12: Maps as a function of the frequency detuning and input bitrate for AND 1 with 1 R-bit and \(N_{v}=5\) (left column) and for AND 2 with 1 R-bit and \(N_{v}=5\) (right column). (top panels) BER estimation from the RC-NN at the power which ensures the best network performance; (middle panels) the power at which the \(BER_{out}^{b}\) values in the first panel are achieved; (bottom panels) the ratio between \(BER_{in}^{b}\) and \(BER_{out}^{b}\). All values are given in a logarithmic scale. Figure from [41].

The power map shows that the best results are obtained for high input power values, which ensures the presence of nonlinear effects in the MR. In this configuration of bitrate and detuning, the action of the MR improves the performance of the linear classifier compared to the training performed on the optically unprocessed input sequence. This aspect is highlighted in the bottom panel, where the highest RB ratio appears in the same map region and amounts to \(RB=10^{1.5}\). The fact that the lowest \(BER_{out}^{b}\) value appears in correspondence with a negative detuning suggests that the nonlinear dynamics occurring in the MR are related to FC rather than thermal relaxation [33; 55]. The training procedure performed on the AND 3 with 1 R-bit returned a minimum \(BER_{out}^{b}\) of \(10^{-1}\), demonstrating that the memory of the MR related to its internal nonlinear dynamics is limited to 2 bits. The network has also been trained to solve the XOR 1 with 1 R-bit. In Fig. 13, the top-panel contour plot shows various regions of low \(BER_{out}^{b}\), reaching a minimum value of \(10^{-1.7}\). Many local minima lie in the negative-detuning half-plane, where the combination with high input power triggers nonlinear effects related to the FC dynamics in the MR. On the contrary, in correspondence with positive detuning and high input power, the system sees a performance degradation, a symptom that nonlinearities induced by thermal effects are detrimental. It is interesting to notice that the task is solved in the region around the bitrate of 250 Mbps, whose corresponding bit period (4 ns) is comparable with the typical FC lifetime in the MR (\(\tau_{fc}\sim 4.5\) ns, while \(\tau_{th}\sim 100\) ns). The highest BER values in the map appear for bitrates higher than 800 Mbps. In the same region of the power map, low input power values are registered, meaning that the system tries to avoid entering the nonlinear regime of the MR. Even if the RC-NN does not manage to perfectly solve the task in any of the explored conditions, it still provides a performance enhancement compared to the training performed on the unprocessed optical input. Indeed, the colored regions in the RB contour plot cover the majority of the map, corresponding to the regions where low BER values are observed in the BER map. With the RC-NN, nonlinearities generate memory in the system and also act as an activation function, providing effective separability of the virtual nodes for the linear classifier. On the contrary, for high bitrates the intersymbol interference induced by the modulation process becomes more significant, thus already inserting a sufficient amount of memory into the input signal. Subsequent nonlinear processes in the MR become unnecessary.
The results illustrated in this section witness the role of the MR in providing both memory and nonlinearity. The tests performed on the structure used as a reservoir for linear delayed tasks revealed a maximum amount of memory of 2 bits provided by the MR. On the other hand, both the memory and the nonlinearity induced in the MR are necessary to solve nonlinear delayed tasks, while they prove detrimental when the combination of fast modulation and detection already induces sufficient separation in the virtual nodes. As we saw in the current and the previous sections, MRs are versatile tools whose properties in terms of memory and nonlinearities can be used to solve logic and analog tasks. A single MR has, however, some limitations. One of the most evident is related to the restricted amount of memory that it provides. ## VI Microring resonators with external optical feedback for time delay reservoir computing Away from the self-pulsing regime, the amount of memory provided by a MR is linked to the typical lifetime of the FCs and the thermal relaxation of the structure, both of which are larger than the typical photon lifetime in the MR. An external optical feedback added to the structure enhances the MR memory and allows the use of MR-based RC-NNs in time series prediction [42]. The scheme of a simulated RC-NN based on a MR with external optical feedback is presented in Fig. 14. The MR operates in an add-drop configuration with two equal coupling coefficients \(\gamma_{e}\) to the bus waveguides, as sketched in the dashed box. The signal is coupled to the MR from the Input port (bottom left), while the output signal is collected from the Drop port (top right).

Figure 13: Maps as a function of the frequency detuning and input bitrate for XOR 1 with 1 R-bit and \(N_{v}=5\). (top) BER estimation from the RC network at the power which ensures the best network performance; (middle) the power at which the \(BER_{out}^{b}\) values in the first panel are achieved; (bottom) the ratio between \(BER_{in}^{b}\) and \(BER_{out}^{b}\). All values are given in a logarithmic scale. Figure from [41].
Typical values for MR parameters are a Q-factor of \(Q=3.19\times 10^{4}\), an intrinsic photon lifetime \(\tau_{ph}\sim 50\) ps, a free carrier lifetime \(\tau_{FC}\sim 3\) ns and a thermal lifetime \(\tau_{TH}\sim 83\) ns. To create the input optical signal (Fig. 14, bottom left), we start from an analog or digital sequence \(X(t)\), with every bit \(x_{i}\) having a duration of \(b_{w}\) (common to all the bits) and an amplitude of \(b_{h,i}\). A periodic random mask \(M(t)\) is then applied to the sequence in order to increase the dimensionality of each bit to \(N_{v}\) (namely the number of virtual nodes chosen for the reservoir), corresponding to a time separation between mask values of \(\theta=b_{w}/N_{v}\). Each mask entry is sampled from a uniform distribution and the periodicity condition \(M(t)=M(t+b_{w})\) is obeyed. The so-obtained masked sequence \(X(t)M(t)\) is used as a modulation pattern for a CW optical signal provided by a laser operating at a given optical power \(P_{max}\) and detuning \(\Delta\lambda_{s}\) with respect to the linear resonance frequency of the MR. The resulting input optical field is then written as

\[E_{inp}(t)=\left[X(t)M(t)\right]^{1/2}=\left[x_{i}m_{j}\right]^{1/2}, \tag{5}\]

for \(b_{w}(i-1)+\theta(j-1)\leq t\leq b_{w}(i-1)+\theta j\). The sequence of virtual nodes \(N_{j,i}\) with \(j=1,\ldots,N_{v}\) associated with the input bit \(x_{i}\) is obtained from the detected output signal \(|E_{drop}|^{2}\). The output virtual node \(N_{j,i}\) represents the sample of the output signal acquired simultaneously with the injection of \(x_{i}m_{j}\) in the structure, in formulas

\[N_{j,i}\propto|E_{drop}(b_{w}(i-1)+\theta j)|^{2}. \tag{6}\]

A unique estimator \(o_{i}\) for each sequence \(N_{j,i}\) with \(j=1,\ldots,N_{v}\) (namely for each input bit \(x_{i}\)) is produced as

\[o_{i}=\sum_{j=1}^{N_{v}}W_{j}N_{j,i}, \tag{7}\]

where \(W\) is an \(N_{v}\)-dimensional weight vector. The training of the network is performed by linear regression, which provides the values \(W_{j}\) that minimize the Normalized Mean Square Error (NMSE) between the predictions \(o_{i}\) and the nominal outcomes \(y_{i}\) obtained from \(x_{i}\) by means of a given task. For each selected operation, the performance of the network has been assessed through a mapping of the NMSE values as a function of \(P_{max}\in\) [1,8] mW, \(\Delta\lambda_{s}\in\) [-50,50] pm, \(\eta_{F}\in\) [0,1] and \(\phi_{F}\in\) [0,2\(\pi\)]. The numerical experiments have been performed with \(\theta=40\) ps, meaning that, since \(\theta\approx\tau_{ph}\), photons circulating in the ring contribute to short-term memory creation. The bit width is set to \(b_{w}=1\) ns \(\approx\tau_{FC}\) (\(N_{v}=25\)), so that each input bit \(x_{i}\) manages to produce observable variations in the FC population and triggers nonlinear effects which cause in turn longer-term memory formation. Finally, the delay introduced by the feedback loop amounts to 1 ns. Along with the analysis of the performance of the reservoir computing approach in solving specific tasks, a study on the amount of memory achieved by the system in specific conditions has been conducted. The reservoir is provided with a random sequence of bits sampled from a uniform distribution and it is trained to remember the \(l\)-th previous element of the input sequence.

Figure 14: Schematic of time delay RC-NN with a MR subject to optical feedback. The MR structure is shown in the dashed box. It is in an add-drop configuration with the external optical feedback, where the phase \(\phi_{F}\), the amplitude \(\eta_{F}\) and the delay \(\tau_{F}\) can all be controlled. \(\gamma_{e}\) represents the MR extrinsic losses due to the coupling with the straight waveguides. The encoded information \(X(t)\) is masked with a sequence \(M(t)\) and modulates the optical power from the laser (LAS) emission. At the drop port, the photodetected (PD) signal provides the time-multiplexed output states of the reservoir, which are weighted and linearly combined to compute the predicted value \(o_{i}\). The weight optimization is performed via a linear classifier, with supervised learning over the expected values \(y_{i}\) data set. Figure adapted from [42].
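As a complement to Eqs. (5)-(7), the following sketch shows the masking and readout bookkeeping in Python. The helper names are hypothetical, the input sequence is assumed non-negative (it encodes optical power), and the drop-port power trace would come from a separate simulation of the MR with feedback:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_masked_drive(x, N_v):
    """Expand each non-negative input sample x_i into N_v masked sub-slots (Eq. 5).

    One random mask entry per virtual node is drawn once and reused for every
    bit, enforcing the periodicity M(t) = M(t + b_w)."""
    mask = rng.uniform(0.1, 1.0, size=N_v)
    drive_power = np.outer(x, mask).ravel()   # X(t) M(t), flattened in time
    return np.sqrt(drive_power)               # field amplitude = power**(1/2)

def read_virtual_nodes(drop_power, n_bits, N_v):
    """Sample |E_drop|^2 once per sub-slot theta (Eq. 6): row i holds N_{j,i}."""
    return drop_power.reshape(n_bits, N_v)

def train_readout(nodes, y):
    """Least-squares weights W minimizing the error of o_i = sum_j W_j N_{j,i} (Eq. 7)."""
    W, *_ = np.linalg.lstsq(nodes, y, rcond=None)
    return W
```

Once trained, predictions for new data are simply `nodes_new @ W`, so the only trained element of the network is the linear output layer.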
The amount of memory for the trained system is obtained by evaluating the Memory Capacity (MC)

\[MC=\sum_{l=1}^{l_{max}}m(l), \tag{8}\]

with

\[m(l)=\frac{cov^{2}(o(n),i(n-l))}{\sigma_{o}^{2}\sigma_{i}^{2}}, \tag{9}\]

where each term \(m(l)\) measures the covariance (\(cov\)) between the output vector \(o(n)\) and the input bit sequence \(i(n-l)\) delayed by \(l\) bits, with \(\sigma_{o}^{2}\) and \(\sigma_{i}^{2}\) representing the respective variances. The influence of nonlinear processes in memory formation can be detected by simulating the evolution of the standard deviation of the resonant wavelength shift \(\sigma(\lambda_{0})\): high values are a symptom of the presence of nonlinear effects. These numerical simulations for the dynamics of the MR have been performed by integrating the 3 canonical coupled differential equations derived from the coupled-mode theory [42, 55]. The first benchmark task is represented by the Narma-10 task, in which the system is trained to predict the response of a discrete-time tenth-order nonlinear auto-regressive moving average [68]. In order to solve the task, it is necessary for the system to show a memory of at least 10 bits. The results are reported in Fig. 15. Panels (a) and (b) contain respectively the NMSE and the MC parameter values mapped as a function of the feedback parameters \(\eta_{F}\) and \(\phi_{F}\), for the optimized values \(P_{max}=0.1\) mW and \(\Delta\lambda_{S}=-10\) pm. Red circles highlight the configuration for which the NMSE is at its minimum value, which is obtained in a region where the MC parameter approaches its maximum measured value. The best performance of the system is thus obtained when operating in a linear regime and considering a strong feedback signal (\(\eta_{F}=0.9\)). In this configuration, the memory necessary to solve the task is provided by the external feedback loop, without the need for the MR nonlinearities as a further source of memory. The memory increase provided by the feedback loop is evident in Panel (c): the system without the introduction of the external feedback shows a maximum MC parameter of 2, compared to a maximum value of about 13 in the optimal feedback configuration. Indeed, with \(\eta_{F}=0\) and low input power, the only source of memory is related to the photon lifetime in the ring: information retained from the past bit is only related to the inertia between the last virtual nodes of the previous bit \(x_{i-1}\) and the first ones of the current bit \(x_{i}\). This result is also deducible from Panel (d), which presents the values of the weights found by the RC classifier for the system in the memory-less and optimal configuration, respectively. For \(\eta_{F}=0\), only the weights related to the first virtual nodes are significant compared to the others, while in the optimal feedback configuration, all the virtual nodes assume a role in carrying information.

Figure 15: Performance of the RC-NN for the Narma-10 benchmark task. (a) NMSE and (b) MC, versus optical feedback strength \(\eta_{F}\) and phase \(\Delta\phi_{F}\). Red circle denotes the conditions with the lowest NMSE. (c) Memory function \(m(l)\), for the cases without feedback (blue line) and with feedback conditions that result in the lowest NMSE (red line). (d) Readout weights for a task to remember the previous input value \(x_{i-1}\), for the cases without feedback (blue line) and with feedback conditions that result in the lowest NMSE (red line). MC is computed using \(l_{max}=19\). The initial wavelength shift is \(\Delta\lambda_{s}=-10\) pm and the MR is operating in the linear regime, with \(b_{w}=1\) ns. Figure from [42].
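For reference, the memory quantifiers of Eqs. (8)-(9) reduce to a few lines of Python. In the protocol described above, a separate readout is trained for each delay \(l\); the sketch below (function names hypothetical) assumes those per-delay outputs are already available and uses population moments consistently for the covariance and variances:

```python
import numpy as np

def m_of_l(o_l, i, l):
    """m(l) = cov^2(o(n), i(n-l)) / (sigma_o^2 sigma_i^2)  (Eq. 9), where o_l is
    the output of a readout trained to recall the input l bits back."""
    a = o_l[l:] - o_l[l:].mean()          # outputs aligned with i(n-l)
    b = i[:-l] - i[:-l].mean()            # inputs delayed by l bits
    cov = (a * b).mean()                  # population covariance
    return cov**2 / (a.var() * b.var() + 1e-12)

def memory_capacity(outputs_by_delay, i):
    """MC = sum_l m(l)  (Eq. 8); outputs_by_delay[l-1] holds o for delay l,
    so the list length plays the role of l_max (19 in the text)."""
    return sum(m_of_l(o_l, i, l)
               for l, o_l in enumerate(outputs_by_delay, start=1))
```

Each \(m(l)\) is a squared correlation coefficient and is therefore bounded in \([0,1]\), so MC is at most \(l_{max}\).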
Another benchmark test is represented by the Mackey-Glass prediction task, which we operated in a weakly chaotic regime [69]. In this case, the RC-NN is required to predict the next bit \(x_{i+1}\) knowing the current bit \(x_{i}\). An overview of the performances reached by the system is portrayed in Fig. 16. Panels (a) and (b) report the NMSE and \(\sigma(\lambda_{0})\) as a function of \(\eta_{F}\) and \(\Delta\phi_{F}\), while keeping \(P_{max}=5\) mW and \(\Delta\lambda_{S}=-30\) pm (optimal operational conditions). The black circle highlights the configuration corresponding to the lowest NMSE overall: the high value of \(\sigma(\lambda_{0})\) in Panel (b) indicates that, contrary to what is shown for the Narma-10 task, here the optimal configuration is obtained by exploiting nonlinearities induced in the MR. Notice also that the feedback strength is large, but the signal reaching the Add port is in an intermediate state between constructive and destructive interference with the signal circulating in the MR. Indeed, a fully constructive interference condition would have promoted even more significant variations in \(\lambda_{0}\), but more nonlinearities would have been detrimental to the performance (Panel (a)). In this case, the feedback loop has the double purpose of extending the memory and tuning the level of nonlinearities induced in the MR. The worst performance of the system is highlighted in Panels (a) and (b) by the red circles. In Panel (c) this configuration is shown by the red curves. The deep spikes in \(\Delta\lambda_{0}(t)\) appearing in Panel (c) are symptoms of the presence of SP in the MR, which degrades the performance. When operating in SP, the light does not couple with the MR, but propagates straight through the delay line and then to the Drop port, without recirculating in the MR (Panel (d), configuration 2). In conclusion, the delay feedback loop coupled with the MR has proven effective as a memory extender for the RC-NN. When operating in a linear regime, the MR coupled with the delay loop assumes the function of a shift register in the optical domain, while when nonlinearities are triggered in the MR, the delay loop serves the purpose of both controlling the strength of the nonlinearities and extending the memory of the system.

Figure 16: Performance of the RC-NN for the Mackey-Glass benchmark task. (a) NMSE and (b) standard deviation of the resonance wavelength shift \(\sigma(\lambda_{0})\), versus optical feedback strength \(\eta_{F}\) and phase \(\Delta\phi_{F}\) of the MR system. Black (red) circle denotes the conditions with the lowest (highest) NMSE. (c) Temporal evolution of the resonance shift and the bit error \(|o_{i}-y_{i}|\) during the task for two feedback conditions: the black line corresponds to the lowest NMSE (black circle, (a)), and the red line corresponds to the highest NMSE (red circle, (a)). (d) Dynamical operation of the MR with optical feedback under self-pulsations: light occasionally enters (path 1, upper) or bypasses (path 2, lower) the MR. The initial wavelength shift is \(\Delta\lambda_{S}=-30\) pm, the maximum launched optical power at the input is \(P_{max}=5\) mW and \(b_{w}=1\) ns. Figure from [42].

## VII Conclusions

Different optical circuits have been described as basic elements of PNNs within a silicon photonics platform. Multiple architectures have been analyzed and tested with respect to various tasks, demonstrating their specific properties and establishing performance benchmarks.
A simple delayed complex perceptron employed as a feed-forward neural network with memory has proved effective in compensating distortions induced by chromatic dispersion in a 10 Gbps NRZ signal propagating in a 125 km fiber. The trained perceptron restores the opening of the eye diagram of the signal after the propagation, thus drastically diminishing the BER compared to the uncompensated signal at the fiber output. The optical processing operated by the perceptron permits minimizing latency and tuning the compensation properties. In addition, adding more delay lines and nonlinearities would increase the computational capabilities of the complex perceptron, thus leading to more advanced mitigation actions in optical communications [70]. Furthermore, the use of passive elements in PNNs is of extreme importance to reduce the power budget. Interesting possibilities are given by the nonlinear dynamics of a MR. These have been explored, observing the self-pulsing regime in different configurations of input power and initial detuning. Nonlinear dynamics plays a fundamental role in long short-term memory formation in the MR and enables the use of a MR as a reservoir in a RC-NN. In one implementation, the FC dynamics triggers the incoherent transfer of information between pump and probe signals, increasing the separability of the data, which allows the use of linear classifiers to achieve complex tasks. Remarkably, the best performance is achieved at the edge of the self-pulsing regime where both the free carrier dispersion and the thermo-optic effect are critical. Beyond the pump-and-probe approach, linear delayed tasks have been adopted to decouple the memory formation from the role of the MR used as a nonlinear node. These tests revealed the finite memory retained by the MR, which is limited to 2 bits at the best bit rate. The introduction of an external feedback loop coupled with the MR represents an effective memory source. In the RC-NN approach with virtual nodes, this new structure is able to forecast time series with memory on timescales larger than those typically associated with nonlinear processes induced by FC dynamics (not thermal effects) in a MR. Here we have discussed a few examples of the use of single MRs in PNNs. More elaborate and complex neural networks are possible when matrices of perceptrons or microrings are used, as reviewed in [19, 71, 72, 73]. PNNs with extremely high performances and speed have been demonstrated in [74, 75].

###### Acknowledgements.

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 788793, BACKUP and No 963463, ALPI) and from the MUR under the project PRIN PELM (grant number 20177 PSCKT).
2301.03049
AutoAC: Towards Automated Attribute Completion for Heterogeneous Graph Neural Network
Many real-world data can be modeled as heterogeneous graphs that contain multiple types of nodes and edges. Meanwhile, due to excellent performance, heterogeneous graph neural networks (GNNs) have received more and more attention. However, the existing work mainly focuses on the design of novel GNN models, while ignoring another important issue that also has a large impact on the model performance, namely the missing attributes of some node types. The handcrafted attribute completion requires huge expert experience and domain knowledge. Also, considering the differences in semantic characteristics between nodes, the attribute completion should be fine-grained, i.e., the attribute completion operation should be node-specific. Moreover, to improve the performance of the downstream graph learning task, attribute completion and the training of the heterogeneous GNN should be jointly optimized rather than viewed as two separate processes. To address the above challenges, we propose a differentiable attribute completion framework called AutoAC for automated completion operation search in heterogeneous GNNs. We first propose an expressive completion operation search space, including topology-dependent and topology-independent completion operations. Then, we propose a continuous relaxation schema and further propose a differentiable completion algorithm where the completion operation search is formulated as a bi-level joint optimization problem. To improve the search efficiency, we leverage two optimization techniques: discrete constraints and auxiliary unsupervised graph node clustering. Extensive experimental results on real-world datasets reveal that AutoAC outperforms the SOTA handcrafted heterogeneous GNNs and the existing attribute completion method in terms of performance and efficiency.
Guanghui Zhu, Zhennan Zhu, Wenjie Wang, Zhuoer Xu, Chunfeng Yuan, Yihua Huang
2023-01-08T14:38:32Z
http://arxiv.org/abs/2301.03049v2
AutoAC: Towards Automated Attribute Completion for Heterogeneous Graph Neural Network (Extended Version)

###### Abstract

Many real-world data can be modeled as heterogeneous graphs that contain multiple types of nodes and edges. Meanwhile, due to excellent performance, heterogeneous graph neural networks (GNNs) have received more and more attention. However, the existing work mainly focuses on the design of novel GNN models, while ignoring another important issue that also has a large impact on the model performance, namely the missing attributes of some node types. The handcrafted attribute completion requires huge expert experience and domain knowledge. Also, considering the differences in semantic characteristics between nodes, the attribute completion should be fine-grained, i.e., the attribute completion operation should be node-specific. Moreover, to improve the performance of the downstream graph learning task, attribute completion and the training of the heterogeneous GNN should be jointly optimized rather than viewed as two separate processes. To address the above challenges, we propose a differentiable attribute completion framework called AutoAC for automated completion operation search in heterogeneous GNNs. We first propose an expressive completion operation search space, including topology-dependent and topology-independent completion operations. Then, we propose a continuous relaxation schema and further propose a differentiable completion algorithm where the completion operation search is formulated as a bi-level joint optimization problem. To improve the search efficiency, we leverage two optimization techniques: discrete constraints and auxiliary unsupervised graph node clustering. Extensive experimental results on real-world datasets reveal that AutoAC outperforms the SOTA handcrafted heterogeneous GNNs and the existing attribute completion method.

_Index Terms_--heterogeneous graph, graph neural network, attribute completion, differentiable search

## I Introduction

Graph-structured data are ubiquitous, such as social networks [1], scholar networks [2], biochemical networks [3], and knowledge graphs [4]. Meanwhile, many real-world graph data are heterogeneous [5]. Unlike the homogeneous graph with only one node type and one edge type, the heterogeneous graph [6] consists of multiple types of nodes and edges associated with attributes in different feature spaces. For example, the IMDB dataset is a typical heterogeneous graph, which contains three node types (movie, actor, director) and two edge types (movie-actor, movie-director), as shown in Figure 1(a). Due to containing rich information and semantics, heterogeneous graphs have drawn more and more attention. Recently, graph neural networks (GNNs) [7, 8] have demonstrated powerful representation learning ability on graph-structured data [9]. Meanwhile, many heterogeneous GNNs (HGNNs) have been proposed for heterogeneous graphs [10][11][12][13][14][15][16][17]. However, the existing work on heterogeneous graphs mainly focuses on the construction of novel GNN models, while ignoring another important issue that also has a large impact on the model performance, namely the attributes of some types of nodes are missing [18]. Missing node attributes is a common problem because collecting the attributes of all nodes is prohibitively expensive or even impossible due to privacy concerns.
Since the attributes of all nodes are required in the GNN-based heterogeneous models, some handcrafted ways are employed to deal with the problem of missing attributes. For example, the missing attribute vector can be the sum or the mean of directly connected nodes' attribute vectors. Besides, the one-hot representations of a certain node type can also be used to replace the missing attributes. However, the handcrafted ways require huge expert experience and domain knowledge. Also, the topological relationships in the graph are not taken into account. Recently, an attention-based method [18] was proposed to complete each no-attribute node by weighted aggregation of the attributes from the directly neighboring attributed nodes. Such an attribute completion method only considers the attributes of 1-hop neighbors without exploiting the attributes of higher-order neighbors. Moreover, existing attribute completion methods are all coarse-grained. That is, for a specific node type without attributes, they adopt the same attribute completion operation for all nodes without considering the differences in semantic characteristics between nodes. In practice, fine-grained attribute completion is more reasonable. The attribute completion operations for the nodes with different semantics should be different. Take the IMDB dataset as an example. The target type of nodes (i.e., movie nodes) has attributes, and the other types of nodes (i.e., actor nodes and director nodes) have no attributes. As shown in Figure 1(b), there exist three attribute completion operations, including 1) For actors (e.g. Jackie Chan) who are involved in movies that mostly belong to the same genre (Kung Fu movies), average attribute aggregation of local (i.e., 1-hop) neighboring nodes should be used. 2) For actors who have strong collaborative relationships with other actors and directors, the message-passing based multi-hop attribute aggregation is more suitable. 3) For guest actors without representative movies, we can directly use the simple one-hot encoding to complete attributes. For the IMDB dataset, the number of actor nodes that have no attributes is 6124. Manually differentiating the semantic characteristics of all no-attribute nodes and then selecting the most suitable completion operations according to semantic characteristics is infeasible. Thus, an automated attribute completion method that can search the optimal completion operations efficiently is required. Moreover, to improve the performance of the downstream graph learning task, the automated attribute completion and the training of the heterogeneous GNN should be jointly optimized rather than viewed as two separate processes. To address the above challenges, we propose a differentiable attribute completion framework called AutoAC1 for automated completion operation search in heterogeneous GNNs. AutoAC is a generic framework since it can integrate different heterogeneous GNNs flexibly. By revisiting the existing attribute completion methods, we first propose an expressive completion operation search space, including topology-dependent and topology-independent completion operations. Instead of searching over the discrete space (i.e., candidate completion operations for each no-attribute node), we propose a continuous relaxation scheme by placing a weighted mixture of candidate completion choices, which turns the search task into an optimization problem regarding the weights of choices (i.e., completion parameters). 
Thus, due to the continuous search space, the search process becomes differentiable and we can perform completion operation search via gradient descent.

Footnote 1: AutoAC is available at [https://github.com/Pasal_ab/AutoAC](https://github.com/Pasal_ab/AutoAC)

To further improve the search efficiency, we formulate the search of attribute completion operations and the training of the GNN as a constrained bi-level joint optimization problem. Specifically, we keep the search space continuous in the optimization process of completion parameters (i.e., upper-level optimization) but enforce attribute completion choices to be discrete in the optimization process of weights in the heterogeneous GNN (i.e., lower-level optimization). In this way, there is only one activated completion operation for each no-attribute node during the training of the GNN, removing the need to perform all candidate completion operations. Inspired by NASP [19], we employ proximal iteration to solve the constrained optimization problem efficiently. Finally, to reduce the dimension of the attribute completion parameters, we further leverage an auxiliary unsupervised graph node clustering task with the spectral modularity function during the process of GNN training. To summarize, the main contributions of this paper can be highlighted as follows:

* We are the first, to the best of our knowledge, to model the attribute completion problem as an automated search problem for the optimal completion operation of each no-attribute node.
* We propose an expressive completion operation search space and further propose a differentiable attribute completion framework where the completion operation search is formulated as a bi-level joint optimization problem.
* To improve search efficiency, we enforce discrete constraints on completion parameters in the training of the heterogeneous GNN. Moreover, we leverage an auxiliary unsupervised graph node clustering task to reduce the dimension of the attribute completion parameters.
* Extensive experimental results on real-world datasets reveal that AutoAC is effective to boost the performance of heterogeneous GNNs and outperforms the SOTA attribute completion method in terms of performance and efficiency.

Fig. 1: (a) Example of heterogeneous graphs with incomplete attributes, i.e., the IMDB dataset. (b) Different attribute completion operations for the actor node, i.e., local attribute aggregation, message-passing based multi-hop attribute aggregation, and one-hot representation.

## II Related Work

### _Heterogeneous Graph Neural Network_

Graph neural networks [8][20][1][21][22][9] aim to extend neural networks to graphs. Since heterogeneous graphs are more common in the real world [5], heterogeneous GNNs have been proposed recently. Part of the work is based on meta-paths. HAN [10] leverages the semantics of meta-paths and uses hierarchical attention to aggregate neighbors. MAGNN [14] utilizes RotatE [23] to encode intermediate nodes along each meta-path and mixes multiple meta-paths using hierarchical attention. Another part of the work chooses to extract rich semantic information in heterogeneous graphs. GTN [11] learns a soft selection of edge types and composite relations for generating useful multi-hop connections. HetGNN [13] uses Bi-LSTM to aggregate node features for each type and among types. As the state-of-the-art model, SimpleHGN [17] revisits existing methods and proposes a simple framework using learnable edge-type embedding and residual connections for both nodes and edges.
Recently, AS-GCN [24] employs the heterogeneous GNN to mine the semantics for text-rich networks. Different from the above methods, HGNN-AC [18] notices that most of the nodes in a real heterogeneous graph have missing attributes, which could cause great harm to the performance of heterogeneous models, and proposes an attention-based attribute completion method. However, HGNN-AC needs to get node embeddings based on the network topology using metapath2vec [25], which is a time-consuming process. Moreover, the attribute completion in HGNN-AC is coarse-grained and supports only one completion operation for all no-attribute nodes. HGCA [26] unifies attribute completion and representation learning in an unsupervised heterogeneous network. MRAP [27] performs node attribute completion in knowledge graphs with multi-relational propagation.

### _Neural Architecture Search (NAS)_

NAS [28], which designs effective neural architectures automatically, has received increasing attention. The core components of NAS contain the search space, the search algorithm, and the performance estimation strategy. Recently, many works use NAS to design GNN models due to the complexity of GNNs [29]. PolicyGNN [30] uses reinforcement learning to train meta-strategies and then adaptively determines the choice of aggregation layers for each node. SANE [31] and SNAG [31] search for aggregation functions using differentiable and reinforcement learning-based strategies, respectively. The architecture-level approaches such as GraphNAS [32], AutoGNN [33], and PSP [34] aim to search for architectural representations of each layer, including sampling functions, attention computation functions, aggregation functions, and activation functions. The above works are based on homogeneous graphs. Due to the rich semantic and structural information in heterogeneous graphs, applying NAS to heterogeneous graphs is more challenging. Recently, there exist some excellent attempts. GEMS [35] uses the evolutionary algorithm to search for meta-graphs between source and target nodes. DiffMG [36] uses differentiable methods to find the best meta-structures in heterogeneous graphs. However, the above works only focus on the GNN model and ignore the heterogeneous graph data itself, which is even more important in practice.

### _Proximal Iteration_

Proximal iteration [37] is used to handle the optimization problem with a constraint \(\mathcal{C}\), i.e., \(\min_{x}f(x)\), s.t. \(x\in\mathcal{C}\), where \(f\) is a differentiable objective function. The proximal step is:

\[\begin{split}& x^{(k+1)}=\text{prox}_{\mathcal{C}}\left(x^{(k)}-\epsilon\nabla f\left(x^{(k)}\right)\right)\\ &\text{prox}_{\mathcal{C}}(x)=\operatorname*{arg\,min}_{z}\frac{1}{2}\left\|z-x\right\|^{2},\ \text{s.t.}\ z\in\mathcal{C}\end{split} \tag{1}\]

where \(\epsilon\) is the learning rate. Due to the excellent theoretical guarantee and good empirical performance, proximal iteration has been applied to many deep learning problems (e.g., architecture search [19]).

## III Preliminaries

_Heterogeneous Graph._ Given a graph \(G=\langle V,E\rangle\) where \(V\) and \(E\) denote the node set and the edge set respectively, \(G\) is heterogeneous when the number of node and edge types exceeds 2. Each node \(v\in V\) and each edge \(e\in E\) are associated with a node type and an edge type respectively.

_Attribute Missing in Heterogeneous Graph._ Let \(x_{v}\in\mathbb{R}^{d}\) denote the original \(d\)-dimensional attribute vector of the node \(v\).
In practice, the attributes of some types of nodes are not available. Thus, the node set \(V\) in \(G\) can be divided into two subsets, i.e., \(V^{+}\) and \(V^{-}\), which denote the attributed node set and the no-attribute node set respectively.

_Attribute Completion._ Let \(X=\{x_{v}\mid v\in V^{+}\}\) denote the input attribute set. Attribute completion aims to complete the attribute for each no-attribute node \(v\in V^{-}\) by leveraging the available attribute information \(X\) and the topological structure of \(G\). Let \(x_{v}^{C}\) denote the completed attribute. Thus, after completion, the node attributes for the training of the heterogeneous GNN are \(X^{new}=X\cup X^{C}=\{x_{v}\mid v\in V^{+}\}\cup\{x_{v}^{C}\mid v\in V^{-}\}\). In this paper, we aim to search for the optimal completion operation for each no-attribute node to improve the prediction performance of GNN models.

## IV The Proposed Methodology

In this section, we first present the proposed completion operation search space and then introduce the differentiable search strategy. Moreover, we introduce the optimization techniques, including discrete constraints and the auxiliary unsupervised graph node clustering task, for further improving the search efficiency.

### _Search Space of Attribute Completion Operation_

Due to the semantic differences between nodes, using a single attribute completion operation for all no-attribute nodes belonging to the same node type is not reasonable. The available completion operations should be diverse so that we can select the most suitable completion operation for each node with missing attributes. Thus, to capture both the node semantics and the topological structure information during the attribute completion process, we first propose an expressive completion operation search space, which consists of topology-dependent and topology-independent operations. Specifically, the topology-dependent operations employ the topology information of the graph to guide the attribute completion. Inspired by the node aggregation operations in typical GNNs (e.g., GraphSage [1], GCN [8], APPNP [38]), we design three topology-dependent attribute completion operations, i.e., mean, GCN-based, and PPNP-based operations. In contrast, the topology-independent operation directly uses one-hot encoding to replace the missing attribute. AutoAC aims to search the optimal operation for each no-attribute node from this general and scalable search space, where we can draw on more node aggregation operations in GNNs as attribute completion operations.

#### IV-A1 Topology-Dependent Completion Operation

This type of completion operation can be further divided into two categories: local attribute aggregation and global (i.e., multi-hop) attribute aggregation.

_Local Attribute Aggregation._ Similar to the node aggregation in GraphSage [1], we first propose mean attribute aggregation.

_Mean Attribute Aggregation._ For the node \(v\in V^{-}\), we calculate the mean of the neighbors' attributes to complete the missing attribute. The completed attribute \(x_{v}^{C}\) is as follows:

\[x_{v}^{C}=W\cdot\mathrm{mean}\left\{x_{u},\forall u\in N_{v}^{+}\right\} \tag{2}\]

where \(N_{v}^{+}\) denotes the local (i.e., 1-hop) neighbors of node \(v\) in the set \(V^{+}\), and \(W\) is the trainable transformation matrix.

_GCN-based Attribute Aggregation._ Similar to spectral graph convolutions in GCN [8], we complete the missing attribute with the following renormalized graph convolution form.
\[x_{v}^{C}=\sum_{u\in N_{v}^{+}}(\mathrm{deg}(v)\cdot\mathrm{deg}(u))^{-1/2}\cdot x_{u}\cdot W \tag{3}\]

_Global Attribute Aggregation._ Motivated by the node aggregation in APPNP [38], we propose the PPNP-based completion operation for global attribute aggregation.

_PPNP-based Attribute Aggregation._ Besides the GCN-based attribute completion, we use another popular node aggregation method, PPNP (i.e., Personalized PageRank [38]), for attribute completion. Specifically, let \(A\in\mathbb{R}^{n\times n}\) denote the adjacency matrix of the graph \(G\), and \(\tilde{A}=A+I_{n}\) denote the adjacency matrix with added self-loops. The form of PPNP-based attribute completion is:

\[\begin{split}& X^{ppnp}=\alpha\left(I_{n}-(1-\alpha)\hat{\tilde{A}}\right)^{-1}\cdot X^{\prime},\quad X^{\prime}=X\cdot W\\ & X^{C}=\{X_{i}^{ppnp}\ |\ \forall i\in V^{-}\}\end{split} \tag{4}\]

where \(\hat{\tilde{A}}=\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}\) is the symmetrically normalized adjacency matrix with self-loops, with the diagonal degree matrix \(\tilde{D}\), and \(\alpha\in(0,1]\) is the restart probability. Note that the missing attributes are filled with zeros in \(X\). After PPNP-based attribute aggregation, we complete the attributes of the nodes in \(V^{-}\) with \(X^{ppnp}\).

#### IV-A2 Topology-Independent Completion Operation

For the no-attribute nodes that have few neighbors or are less affected by the neighbor information, we can directly use one-hot encoding to replace the missing attributes. The one-hot representation of a specific node type is also a commonly used handcrafted attribute completion method [17]. For example, there are \(K\) distinct actors in IMDB. The one-hot representation for the actor node is a \(K\)-dimensional vector. For a specific actor, the element at the corresponding index is 1 and the others are 0. Then, the one-hot representation is transformed linearly for dimension alignment.

#### IV-A3 Search Space Size Analysis

In summary, the proposed search space \(\mathcal{O}\) contains a diverse set of attribute completion operations. Let \(N^{-}\) denote the total number of nodes with missing attributes. Thus, the space size can be calculated by \(\left|\mathcal{O}\right|^{N^{-}}\), which is exponential in \(N^{-}\). In practice, the attribute missing of some node types is a common problem, leading to a huge search space. Thus, a black-box optimization-based search method (e.g., an evolutionary algorithm) over the discrete search space is infeasible. To address this issue, we propose a differentiable search strategy to find the optimal completion operations efficiently.

### _Differentiable Search Strategy_

In this section, we first introduce a continuous relaxation scheme for the completion operation search space to make the search process differentiable. Then, we introduce the differentiable search algorithm and two optimization techniques to improve the search efficiency.

#### IV-B1 Continuous Relaxation and Optimization

Inspired by the success of differentiable NAS, we first design a continuous search space and then perform differentiable completion operation search via gradient descent. As shown in Equation 5, instead of searching over the discrete space, we view the completion operation as a weighted mixture of candidate choices.
\[x_{v}^{C}=\sum_{o\in\mathcal{O}}\frac{\exp\left(\alpha_{o}^{(v)}\right)}{\sum_{o^{\prime}\in\mathcal{O}}\exp\left(\alpha_{o^{\prime}}^{(v)}\right)}o\left(v\right) \tag{5}\]

where \(v\) denotes the node with the missing attribute, \(o\) denotes a candidate operation in the search space \(\mathcal{O}\), and \(o\left(v\right)\) denotes the completed attribute of node \(v\) with \(o\). \(\alpha^{(v)}\) indicates the mixing weight vector of dimension \(\left|\mathcal{O}\right|\) for node \(v\). Furthermore, we refer to \(\alpha=\{\alpha^{(v)}\ |\ v\in V^{-}\}\in\mathbb{R}^{N^{-}\times\left|\mathcal{O}\right|}\) as the completion parameters. After continuous relaxation, the search objective becomes the learning of the completion parameters \(\alpha\). To this end, we formulate the search problem as an optimization problem that can jointly learn the completion parameters \(\alpha\) and the weights \(\omega\) in the heterogeneous GNN by gradient descent. Let \(\mathcal{L}_{train}\) and \(\mathcal{L}_{val}\) denote the training loss and validation loss respectively. Since both losses are determined by the completion parameters \(\alpha\) and the weights \(\omega\), the search objective is a bi-level optimization problem:

\[\begin{split}&\min_{\alpha}\mathcal{L}_{val}\left(\omega^{*},\alpha\right)\\ &\text{s.t.}\ \omega^{*}=\operatorname*{argmin}_{\omega}\mathcal{L}_{train}(\omega,\alpha)\end{split} \tag{6}\]

where the upper-level optimization is for the optimal completion parameters \(\alpha\) and the lower-level optimization is for the optimal weights \(\omega\) in the GNN model.

#### IV-B2 Overview

Figure 2 shows the overall framework of automated attribute completion for heterogeneous graphs. First, we perform a continuous relaxation of the search space by placing a mixture of candidate completion operations. Then, the completion parameters \(\alpha\) are optimized. After determining the attribute completion operation for each no-attribute node, we view the completed attributes together with the raw attributes as the initial embedding for the training of the graph neural network.

_Why not use the weighted mixture._ Although the continuous relaxation allows the search of completion operations to be differentiable, there still exist the following limitations when directly using the weighted mixture of all completion operations:

1. _High computational overhead:_ After continuous relaxation, we need to perform all candidate completion operations for each no-attribute node when training heterogeneous GNNs, leading to huge computational overhead. Also, solving the bi-level optimization problem in Equation 6 incurs significant computational overhead.
2. _Performance gap:_ At the end of the search, the continuous parameters \(\alpha\) need to be discretized, i.e., \(\operatorname*{argmax}_{o\in\mathcal{O}}\alpha_{o}^{(v)}\), resulting in inconsistent performance between the searched and final completion operations.
3. _Large dimension of \(\alpha\):_ The dimension of the completion parameters \(\alpha\) is \(N^{-}\times|\mathcal{O}|\), which is proportional to the total number of nodes with missing attributes. The large dimension of \(\alpha\) leads to a slow convergence rate and low search efficiency.

To address the first two issues (i.e., reducing the computational overhead and avoiding the performance gap), we first propose an efficient search algorithm with discrete constraints.
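For illustration, a minimal PyTorch sketch of the relaxed mixture of Equation 5 is given below; the tensor names are ours, and the rows of `candidate_attrs` are assumed to be precomputed by the four completion operations of the search space:

```python
import torch

def mixed_completion(alpha_v, candidate_attrs):
    """Relaxed completion for one no-attribute node v (Eq. 5).

    alpha_v         : [|O|] completion parameters of node v.
    candidate_attrs : [|O|, d] tensor whose row o holds o(v), the attribute
                      produced by candidate completion operation o."""
    weights = torch.softmax(alpha_v, dim=0)   # softmax over candidate operations
    return weights @ candidate_attrs          # x_v^C, a d-dimensional mixture
```

The discrete constraints introduced next collapse this dense mixture to a single active operation whenever the GNN weights are trained.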
Specifically, for each no-attribute node \(v\), the completion parameters satisfy the following constraints: \(\alpha^{(v)}\in\mathcal{C}=\mathcal{C}_{1}\cap\mathcal{C}_{2}\), where \(\mathcal{C}_{1}=\{\alpha^{(v)}\mid\|\alpha^{(v)}\|_{0}=1\}\) and \(\mathcal{C}_{2}=\{\alpha^{(v)}\mid 0\leq\alpha_{i}^{(v)}\leq 1\}\). The constraint \(\mathcal{C}_{2}\) allows \(\alpha\) to be optimized continuously, and \(\mathcal{C}_{1}\) keeps the choices of completion operation discrete when training the GNN. As shown in Figure 2, there is only one activated edge for each choice when training the GNN, removing the need to perform all candidate completion operations. The final completion operation is derived from the learned completion parameters \(\alpha\): for node \(v\), the edge with the maximum completion parameter will be kept. We leverage proximal iteration [37] to solve the constrained optimization problem. Moreover, proximal iteration can improve the computational efficiency of optimizing \(\alpha\) without computing second-order derivatives. To address the third issue (i.e., reducing the dimension of \(\alpha\)), we further propose an auxiliary unsupervised clustering task. In practice, the no-attribute nodes with similar semantic characteristics may have the same completion operation. Take the actor nodes in the IMDB dataset as an example. For the actors with a large number of representative movies, the average attribute aggregation operation is more suitable. Thus, we can cluster all no-attribute nodes into \(M\) clusters, where the nodes in each cluster have the same completion operation. The optimization goal becomes to search for the optimal attribute completion operation for each cluster. In this way, the size of the completion parameters \(\alpha\) is reduced from \(N^{-}\times|\mathcal{O}|\) to \(M\times|\mathcal{O}|\), \(M\ll N^{-}\). As shown in Figure 2, the auxiliary unsupervised clustering loss can be jointly optimized with the node classification loss (i.e., cross-entropy). The proposed framework AutoAC is composed of multiple iterations. In each iteration, the completion parameters \(\alpha\) and the weights in the GNN are optimized alternately. Next, we introduce the search algorithm with discrete constraints and the auxiliary unsupervised clustering task in detail.

Fig. 2: The overall workflow of automated attribute completion for the heterogeneous graph neural network.

### _Search Algorithm with Discrete Constraints_

Equation 6 implies a bi-level optimization problem with \(\alpha\) as the upper-level variable and \(\omega\) as the lower-level variable. Following the commonly used methods in meta learning [39] and NAS [40], we use a one-step gradient approximation to the optimal internal weight parameters \(\omega^{*}\) to improve the efficiency.
Thus, the gradient of the completion parameters \(\alpha\) is as follows (we omit the step index \(k\) for brevity):

\[\begin{split}&\nabla_{\alpha}\mathcal{L}_{val}\left(\omega^{*},\alpha\right)\\ \approx&\nabla_{\alpha}\mathcal{L}_{val}\left(\omega-\xi\nabla_{\omega}\mathcal{L}_{train}(\omega,\alpha),\alpha\right)\\ =&\nabla_{\alpha}\mathcal{L}_{val}\left(\omega^{\prime},\alpha\right)-\xi\nabla_{\alpha,\omega}^{2}\mathcal{L}_{train}(\omega,\alpha)\nabla_{\omega^{\prime}}\mathcal{L}_{val}\left(\omega^{\prime},\alpha\right)\end{split} \tag{7}\]

where \(\omega\) denotes the weights of the GNN, \(\xi\) is the learning rate of the internal optimization, and \(\omega^{\prime}=\omega-\xi\nabla_{\omega}\mathcal{L}_{train}(\omega,\alpha)\) indicates the weights for a one-step forward model. We update the completion parameters \(\alpha\) to minimize the validation loss. In Equation 7, there exists a second-order derivative, which is expensive to compute due to the large number of parameters. Also, the continuous relaxation trick further leads to huge computational overhead since all candidate completion operations need to be performed when training the GNN. Moreover, the overall search process is divided into two stages: search and evaluation. In the evaluation stage, the continuous completion parameters \(\alpha\) need to be discretized to replace every mixed choice with the most likely operation by taking the argmax, leading to a performance gap between the search and evaluation stages. To optimize \(\alpha\) efficiently and avoid the performance gap, we propose a search algorithm with discrete constraints when optimizing the completion parameters \(\alpha\). For the no-attribute node \(v\), let the feasible space of \(\alpha^{(v)}\) be \(\mathcal{C}=\{\alpha^{(v)}\mid\|\alpha^{(v)}\|_{0}=1\wedge 0\leq\alpha_{i}^{(v)}\leq 1\}\). We denote it as the intersection of two feasible spaces (i.e., \(\mathcal{C}=\mathcal{C}_{1}\cap\mathcal{C}_{2}\)), where \(\mathcal{C}_{1}=\{\alpha^{(v)}\mid\|\alpha^{(v)}\|_{0}=1\}\) and \(\mathcal{C}_{2}=\{\alpha^{(v)}\mid 0\leq\alpha_{i}^{(v)}\leq 1\}\). The optimization problem under constraints can be solved by the proximal iterative algorithm.

**Proposition 1**: _\(\text{prox}_{\mathcal{C}}(z)=\text{prox}_{\mathcal{C}_{2}}(\text{prox}_{\mathcal{C}_{1}}(z))\)_

Inspired by Proposition 1 [19, 37], in the \(k\)-th proximal iteration, we first get discrete variables constrained by \(\mathcal{C}_{1}\), i.e., \(\bar{\alpha}^{(k)}=\text{prox}_{\mathcal{C}_{1}}(\alpha^{(k)})\) (the node notation \(v\) is omitted for brevity). Then, we derive gradients w.r.t. \(\bar{\alpha}^{(k)}\) and keep \(\alpha\) to be optimized as continuous variables constrained by \(\mathcal{C}_{2}\):

\[\alpha^{(k+1)}=\text{prox}_{\mathcal{C}_{2}}(\alpha^{(k)}-\epsilon\nabla_{\bar{\alpha}^{(k)}}\mathcal{L}_{val}(\bar{\alpha}^{(k)})) \tag{8}\]

The detailed search algorithm is described in Algorithm 1. First, we get a discrete representation of \(\alpha\) by a proximal step (Line 3). Then, we view \(\omega^{(k)}\) as constants and optimize \(\alpha^{(k+1)}\) as continuous variables (Line 4). Since there is no need to compute the second-order derivative, the efficiency of updating \(\alpha\) can be improved significantly. After updating \(\alpha\), we further refine the discrete choices and get \(\bar{\alpha}^{(k+1)}\) for updating \(\omega^{(k)}\) on the training dataset, which contributes to reducing the performance gap caused by discretizing the completion parameters \(\alpha\) from continuous variables.
Moreover, since only one candidate choice is activated for each no-attribute node, the computational overhead of training the GNN and of updating \(\alpha\) can also be reduced significantly.

```
1: Initialize completion parameters \(\alpha\) according to the defined search space \(\mathcal{O}\);
2: while not converged do
3:   Get discrete choices of attribute completion operations: \(\bar{\alpha}^{(k)}=\text{prox}_{\mathcal{C}_{1}}(\alpha^{(k)})\)
4:   Update \(\alpha\) as continuous variables: \(\alpha^{(k+1)}=\text{prox}_{\mathcal{C}_{2}}(\alpha^{(k)}-\epsilon\nabla_{\bar{\alpha}^{(k)}}\mathcal{L}_{val}(\omega^{(k)},\bar{\alpha}^{(k)}))\)
5:   Refine discrete choices after updating: \(\bar{\alpha}^{(k+1)}=\text{prox}_{\mathcal{C}_{1}}(\alpha^{(k+1)})\)
6:   Update \(\omega^{(k)}\) by \(\nabla_{\omega^{(k)}}\mathcal{L}_{train}\big{(}\omega^{(k)},\bar{\alpha}^{(k+1)}\big{)}\)
7: end while
```
**Algorithm 1** Search Algorithm in AutoAC

### _Auxiliary Unsupervised Clustering Task_

As mentioned before, the dimension of the completion parameters \(\alpha\) is \(N^{-}\times|\mathcal{O}|\) (\(|\mathcal{O}|\ll N^{-}\), \(|\mathcal{O}|=4\)). Take the DBLP dataset as an example: the number of nodes with missing attributes is about \(1.2\times 10^{4}\), leading to a large dimension of the completion parameters \(\alpha\). As a result, optimizing \(\alpha\) with a limited size of validation dataset is very difficult. Inspired by the observation that the no-attribute nodes with similar explicit topological structures or implicit semantic characteristics tend to share the same completion operation, we further propose an auxiliary unsupervised clustering task to divide all no-attribute nodes into \(M\) clusters. In each cluster, all nodes share the same completion operation. In this way, the dimension of the completion parameters \(\alpha\) can be reduced to \(M\times|\mathcal{O}|\), \(M\ll N^{-}\), and optimizing \(\alpha\) becomes feasible and efficient. It is well known that the EM algorithm [41] is a commonly used method (e.g., K-Means [42]) to solve the problem of unsupervised clustering. In the scenario of graph node clustering, let \(h_{v}\) denote the hidden node representation learned by the heterogeneous GNN. The E-step is responsible for assigning the optimal cluster for each node \(v\) by calculating the distances between \(h_{v}\) and all cluster centers. The M-step is used to update the centers of all clusters. The E-step and M-step are performed alternately until convergence. Although the EM algorithm has a convergence guarantee, it is sensitive to the initial values, making it difficult to apply to the proposed automated completion framework. The main reason is that the bi-level optimization problem defined in Equation 6 is iterative. In the early optimization process, the weights of the GNN have not yet converged and the node representations learned in the GNN are less informative. Such low-quality representations lead to inaccurate clustering, which has a negative impact on the subsequent clustering quality and further leads to a deviation from the overall optimization direction. To address this issue, we first formulate the problem of unsupervised node clustering as a form of soft classification, and use the assignment matrix \(\mathbf{C}\) to record the probability of each node belonging to each cluster. Moreover, as shown in Figure 2, we embed the clustering process into the bi-level iterative optimization process.
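The two proximal projections used in Algorithm 1 are straightforward to implement. The following PyTorch sketch is illustrative rather than a reference implementation: `val_loss` and `train_step` are placeholders for the actual validation loss and GNN training step, and \(\alpha\) is stored row-wise (one row per no-attribute node or, after clustering, per cluster):

```python
import torch

def prox_C1(alpha):
    """Project each row onto C1: one-hot at the largest entry (single active op)."""
    one_hot = torch.zeros_like(alpha)
    return one_hot.scatter_(1, alpha.argmax(dim=1, keepdim=True), 1.0)

def prox_C2(alpha):
    """Project each entry onto C2: the box constraint 0 <= alpha_i <= 1."""
    return alpha.clamp(0.0, 1.0)

def search_step(alpha, w, eps, val_loss, train_step):
    """One iteration of Algorithm 1 (sketch with placeholder callables)."""
    alpha_bar = prox_C1(alpha).requires_grad_(True)        # line 3: discrete choices
    grad, = torch.autograd.grad(val_loss(w, alpha_bar), alpha_bar)
    alpha = prox_C2(alpha - eps * grad)                    # line 4: continuous update
    alpha_bar = prox_C1(alpha)                             # line 5: refined choices
    w = train_step(w, alpha_bar)                           # line 6: update GNN weights
    return alpha, w
```

Because the gradient is taken at the discrete point and applied to the continuous parameters, no second-order derivative of the training loss is ever needed.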
Motivated by graph pooling and graph module partitioning, we introduce the Spectral Modularity Function \(\mathcal{Q}\) [43][44]. From a statistical perspective, this function can reflect the clustering quality of graph node modules through the assignment matrix \(\boldsymbol{C}\) [45]:

\[\mathcal{Q}=\frac{1}{2\left|E\right|}\sum_{ij}\left[\boldsymbol{A}_{ij}-\frac{d_{i}d_{j}}{2\left|E\right|}\right]\delta\left(c_{i},c_{j}\right) \tag{9}\]

where \(\left|E\right|\) is the number of edges in the graph, \(\delta(c_{i},c_{j})=1\) only if nodes \(i\) and \(j\) are in the same cluster, otherwise 0. \(d_{i}\) and \(d_{j}\) represent the degrees of node \(i\) and node \(j\) respectively. It can be known that in a random graph, the probability that node \(i\) and node \(j\) are connected is \(\frac{d_{i}d_{j}}{2\left|E\right|}\) [45]. Then, the optimization goal is converted into maximizing the spectral modularity function \(\mathcal{Q}\), but it is an NP-hard problem. Fortunately, this function can be represented by an approximate spectral domain relaxation form:

\[\mathcal{Q}=\frac{1}{2\left|E\right|}\operatorname{Tr}\left(\boldsymbol{C}^{\top}\boldsymbol{B}\boldsymbol{C}\right) \tag{10}\]

where \(\boldsymbol{C}_{ij}\in[0,1]\) denotes the cluster probability and \(\boldsymbol{B}\) is the modularity matrix \(\boldsymbol{B}=\boldsymbol{A}-\frac{\boldsymbol{d}\boldsymbol{d}^{\top}}{2|E|}\), with \(\boldsymbol{d}\) the node degree vector.

## V Experiments

The statistics of the four datasets are summarized in Table I. More details of the datasets can be seen in Appendix A. Moreover, the handcrafted attribute completion methods for existing heterogeneous GNNs are provided by HGB. Micro-F1 and Macro-F1 are provided to evaluate the node classification performance, while the MRR and ROC-AUC metrics are used for link prediction. The evaluation metrics are obtained by submitting predictions to the HGB website2.

Footnote 2: [https://www.bientdata.xyz/competition/hgb-1/](https://www.bientdata.xyz/competition/hgb-1/)

### _Implementation Details_

All experiments are performed in the transductive setting. We employ the Adam optimizer [46] to optimize both \(\omega\) and \(\alpha\). For optimizing \(\omega\), the learning rate and the weight decay are 5e-4 and 1e-4 respectively. For optimizing \(\alpha\), the learning rate and the weight decay are 5e-3 and 1e-5 respectively. We implement AutoAC based on the widely-used heterogeneous GNNs, i.e., MAGNN [14] and SimpleHGN [17]. The loss weight coefficient \(\lambda\) and the number of clusters \(M\) are two hyperparameters of AutoAC. For MAGNN, we empirically set \(\lambda\) to 0.5 for all datasets, and \(M\) to 4 for the DBLP and ACM datasets and 16 for the IMDB dataset. For SimpleHGN, \(\lambda\) is 0.4 for all datasets, and \(M\) is 8 for the DBLP dataset and 12 for the ACM and IMDB datasets. Moreover, all the GNN models are implemented with PyTorch. All experiments are run on a single GPU (NVIDIA Tesla V100) five times and the average performance and standard deviation are reported.
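Before moving to the results, a minimal sketch of the relaxed modularity objective of Equation 10 in PyTorch may be useful. The dense adjacency matrix and the sign convention (returning \(-\mathcal{Q}\) so it can be minimized jointly with the cross-entropy loss) are choices of this sketch, not prescriptions of the paper:

```python
import torch

def modularity_loss(C, A):
    """Negative relaxed modularity -Q, with Q = Tr(C^T B C) / (2|E|)  (Eq. 10).

    C : [n, M] soft cluster assignment matrix (rows are probabilities).
    A : [n, n] dense adjacency matrix; a practical implementation would keep
        A sparse and never materialize B explicitly."""
    d = A.sum(dim=1, keepdim=True)        # node degree vector as a column
    two_E = d.sum()                       # 2|E| for an undirected graph
    B = A - d @ d.t() / two_E             # modularity matrix B = A - d d^T / 2|E|
    Q = torch.trace(C.t() @ B @ C) / two_E
    return -Q                             # minimized jointly with classification loss
```

Maximizing \(\mathcal{Q}\) pushes the soft assignments toward clusters that are denser than a degree-matched random graph, which is what ties the clustering quality to the graph topology.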
### _Effectiveness of AutoAC_

#### V-C1 Performance comparison with handcrafted heterogeneous GNNs

Depending on whether or not the meta-path is used, we divide the handcrafted heterogeneous GNNs into two categories:

* GNNs with meta-path: HAN [13], GTN [11], Het-SANN [16], MAGNN [14], HGCA [26].
* GNNs without meta-path: HGT [15], GATNE [47], HetGNN [13], GCN [8] and GAT [20] (two commonly used general-purpose GNNs), as well as the current SOTA GNN model SimpleHGN [17].

The configurations of baselines can be seen in Appendix B. As a generic framework, AutoAC can integrate different GNNs. We select two representative GNN models from the two categories (i.e., MAGNN and SimpleHGN) from the perspective of performance and computational efficiency. Then, we combine AutoAC with the two models, denoted by MAGNN-AutoAC and SimpleHGN-AutoAC respectively. Table II shows the performance comparison between AutoAC and existing heterogeneous GNNs on node classification. AutoAC can improve the performance of MAGNN and SimpleHGN stably on all datasets. The performance gain obtained by AutoAC over MAGNN is around 0.7%-3% and the error rate is reduced by 2.87%-11.69%. Also, SimpleHGN-AutoAC outperforms SimpleHGN by 1%-3% and reduces the error rate by 1.59%-22.09%. By combining with the SOTA model SimpleHGN, SimpleHGN-AutoAC can achieve the best performance among all models. Moreover, Table II shows that AutoAC can bring significant performance improvement on the datasets where the classification target nodes have no raw attributes (e.g., DBLP). Besides, for the datasets where the target nodes already have raw attributes (e.g., ACM and IMDB), completing other non-target nodes using AutoAC can still promote the classification accuracy of target nodes. Especially, for the IMDB dataset, since there are too many non-target nodes with missing attributes (i.e., 77% of all nodes), the performance improvement with AutoAC is more significant. Note that the performance of MAGNN without attribute completion is not as good as that of other models, such as GTN and GAT. However, MAGNN-AutoAC performs better than GTN on DBLP and ACM, and outperforms GAT on DBLP and IMDB, which indicates that effective attribute completion for heterogeneous graphs can compensate for the performance gap introduced by the GNN model. By unifying attribute completion and representation learning in an unsupervised heterogeneous network, the recently proposed HGCA can also achieve competitive performance on DBLP and ACM. Such experimental results further verify the necessity of AutoAC.

#### V-C2 Performance comparison with the existing attribute completion method HGNN-AC

As the current SOTA attribute completion method, HGNN-AC [18] uses the attention mechanism to aggregate the attributes of the direct neighbors for the nodes with missing attributes. The attention information is calculated by the pre-learning of topological embeddings. To be fair, both AutoAC and HGNN-AC are evaluated under the unified HGB benchmark. We also combine HGNN-AC with MAGNN and SimpleHGN, denoted by MAGNN-HGNNAC and SimpleHGN-HGNNAC respectively. Table III shows that AutoAC outperforms HGNN-AC on all datasets. Specifically, MAGNN-AutoAC achieves 1%-4% performance improvement over MAGNN-HGNNAC. For the SimpleHGN model, SimpleHGN-AutoAC outperforms SimpleHGN-HGNNAC by 0.4%-2%. Moreover, the performance improvement of HGNN-AC for attribute completion is not stable.
As shown in Table III, after attribute completion with HGNN-AC, MAGNN-HGNNAC is instead inferior to MAGNN on the three datasets, while MAGNN-AutoAC can achieve significant performance improvement with attribute completion. Similarly, there is a degradation in performance on the DBLP dataset compared to SimpleHGN.

### _Efficiency Study_

Besides the effectiveness, we also evaluate the efficiency of AutoAC in terms of runtime overhead. Tables II and V show the runtime of AutoAC and other handcrafted HGNNs on node classification and link prediction tasks. Although the attribute completion and GNN training are jointly optimized in AutoAC, the computational efficiency of AutoAC is still competitive compared to other baselines. Also, we compare AutoAC with the existing attribute completion method HGNN-AC. Table IV shows the efficiency comparison between AutoAC and HGNN-AC. AutoAC contains the search and retraining stages, and HGNN-AC contains the pre-learning and training stages. We can see that AutoAC is much more efficient than HGNN-AC. The end-to-end runtime overhead of AutoAC can be reduced by 15\(\times\) to 465\(\times\). The main reason why HGNN-AC is inefficient is that the pre-learning stage that learns a topological embedding for each node is very time-consuming. Especially for the DBLP dataset with a large number of nodes, the pre-learning overhead is up to 9 GPU hours. In contrast, there is no additional pre-learning stage in AutoAC. Moreover, by introducing the discrete constraints and the auxiliary unsupervised clustering task, the search efficiency can be improved significantly. In summary, AutoAC can not only achieve better performance but also demonstrate higher computational efficiency.

### _Ablation Study_

#### V-E1 Study on the necessity of searching attribute completion operations from a diverse search space

We compare AutoAC with the following two methods:

* **Single-operation attribute completion:** We complete all no-attribute nodes with the same single completion operation (i.e., GCN_AC, PPNP_AC, MEAN_AC, and One-hot_AC).
* **Random attribute completion:** For each no-attribute node, we randomly select an attribute completion operation from the search space.

Table VI and Table VII show the completion operation ablation study on SimpleHGN and MAGNN. Due to the differences in the data characteristics, there is no single completion operation that can perform well on all datasets. By searching the optimal attribute completion operations, AutoAC can achieve the best performance on all datasets. Take SimpleHGN shown in Table VI as an example: GCN_AC is more effective on DBLP and IMDB, while PPNP_AC performs better on ACM. Moreover, for a specific attribute completion operation, the performance is related to the dataset and the chosen GNN model. We take DBLP as an example. GCN_AC performs better on SimpleHGN. However, when the GNN model becomes MAGNN, GCN_AC is not as good as MEAN_AC. Additionally, the performance of the random attribute completion is not stable and can be even worse than the baseline model. Choosing an inappropriate completion operation can have a negative effect on the final performance.

#### V-E2 Study on the search algorithm with discrete constraints

When optimizing the attribute completion parameters \(\alpha\), we enforce discrete constraints on \(\alpha\) and solve the bi-level optimization problem with proximal iteration. To verify the effectiveness of the discrete constraints, we further run AutoAC with and without discrete constraints in Table VIII.
The search algorithm with discrete constraints can achieve better performance with less search time overhead on all datasets. Additionally, proximal iteration removes the need for second-order derivatives in solving the bi-level optimization problem. Thus, the memory overhead can also be reduced significantly. As shown in Table VIII, the memory overhead of MAGNN-AutoAC without discrete constraints is huge and an out-of-memory error occurs on DBLP. #### IV-E3 Study on the auxiliary unsupervised clustering To reduce the dimension of the completion parameters \(\alpha\), we leverage an auxiliary unsupervised clustering task. Figure 3 shows the performance of different clustering methods. * **w/o cluster:** We directly search the attribute completion operations for each no-attribute node without clustering. * **EM:** After each iteration of the optimization process, we adopt the EM algorithm for clustering according to the node representations learned by the GNN model. * **EM with warmup:** a variant of the EM algorithm, which adds a warm-up process at the beginning of the clustering. In Figure 3, AutoAC achieves the best performance on all datasets. Searching completion operations without clustering yields relatively poor performance, so reducing the dimension of \(\alpha\) with unsupervised clustering is necessary. Moreover, the proposed unsupervised clustering method outperforms EM and its variant, indicating the effectiveness of the joint optimization of the unsupervised clustering loss and the classification loss. Figure 4 also shows the convergence of the unsupervised clustering loss \(\mathcal{L}_{GmoC}\), which exhibits a stable decreasing trend during the optimization process. Fig. 3: Performance comparison between different clustering methods. Fig. 4: Convergence of \(\mathcal{L}_{GmoC}\) on three datasets. ### _Distribution of Searched Completion Operations_ Figure 5 shows the proportion of attribute completion operations searched by SimpleHGN-AutoAC and MAGNN-AutoAC. For different models and datasets, the proportions of searched completion operations are quite different. In SimpleHGN-AutoAC, DBLP tends to select GCN_AC, while ACM prefers PPNP_AC. For the same dataset, different GNNs also result in different distributions. Take DBLP as an example. MAGNN-AutoAC is more inclined to MEAN_AC than GCN_AC compared to SimpleHGN-AutoAC. The results further indicate the necessity of searching for suitable attribute completion operations under different datasets and GNNs. Fig. 5: Distribution of searched attribute completion operations. Figure 6 and Figure 7 show the proportion of searched completion operations for each no-attribute node type on ACM and IMDB. Fig. 6: Detailed distribution of searched completion operations for each no-attribute node type on the ACM dataset using SimpleHGN-AutoAC. Fig. 7: Detailed distribution of searched completion operations for each no-attribute node type on the IMDB dataset using SimpleHGN-AutoAC. For ACM, multiple different completion operations are selected even for the same node type. Specifically, more than half of the author and subject nodes choose PPNP_AC, while the proportions of the other three operations are quite similar. Most term nodes are assigned PPNP_AC (i.e., 94.74%), indicating that the term type is more likely to capture global information. The main reason is that the target node type (i.e., paper) with raw attributes in ACM contains only the paper title, so the high-order PPNP_AC operation is preferred. 
In contrast, GCN_AC accounts for the majority of completion operations on IMDB. This is because the target node type (i.e., movie) has raw attributes and contains rich features, such as the length, country, language, likes, and ratings of movies. Thus, the local completion operation GCN_AC is appropriate. Next, we analyze the completion operations of concrete actor nodes. In IMDB, node No. 10797 is the actor Leonardo DiCaprio, who has starred in 22 movies, so the neighborhood information is very rich. As a result, AutoAC chooses GCN_AC for him. In contrast, node No. 10799 is the actor Leonie Benesch, who has appeared in only one movie. Thus, One-hot_AC is automatically selected by AutoAC. ### _Hyperparameter Sensitivity_ #### IV-G1 Effect of the number of clusters \(M\) Figure 8 shows the performance of AutoAC under different \(M\). Both SimpleHGN-AutoAC and MAGNN-AutoAC achieve stable performance, showing that AutoAC has sufficient robustness to \(M\). On the IMDB dataset, moreover, the performance of both models decreases as the masked edge rate increases. ## VI Conclusion In this paper, we proposed a differentiable attribute completion framework called AutoAC for automated completion operation search in heterogeneous GNNs. First, we introduced an expressive completion operation search space and proposed a continuous relaxation scheme to make the search space differentiable. Second, we formulated the completion operation search as a bi-level joint optimization problem. To improve search efficiency, we enforced discrete constraints on the completion parameters and further proposed a proximal iteration-based search algorithm. Moreover, we leveraged an auxiliary unsupervised node clustering task to reduce the dimension of the completion parameters. Extensive experimental results reveal that AutoAC is effective in boosting the performance of heterogeneous GNNs and outperforms the SOTA attribute completion method in terms of performance and efficiency. ## Acknowledgment This work was supported by the National Natural Science Foundation of China (#62102177), the Natural Science Foundation of Jiangsu Province (#BK20210181), the Key R&D Program of Jiangsu Province (#BE2021729), Open Research Projects of Zhejiang Lab (#2022PG0AB07), and the Collaborative Innovation Center of Novel Software Technology and Industrialization, Jiangsu, China. Guanghui Zhu and Yihua Huang are corresponding authors with equal contributions.
2306.06196
ElectroCardioGuard: Preventing Patient Misidentification in Electrocardiogram Databases through Neural Networks
Electrocardiograms (ECGs) are commonly used by cardiologists to detect heart-related pathological conditions. Reliable collections of ECGs are crucial for precise diagnosis. However, in clinical practice, the assignment of captured ECG recordings to incorrect patients can occur inadvertently. In collaboration with a clinical and research facility which recognized this challenge and reached out to us, we present a study that addresses this issue. In this work, we propose a small and efficient neural-network based model for determining whether two ECGs originate from the same patient. Our model demonstrates great generalization capabilities and achieves state-of-the-art performance in gallery-probe patient identification on PTB-XL while utilizing 760x fewer parameters. Furthermore, we present a technique leveraging our model for detection of recording-assignment mistakes, showcasing its applicability in a realistic scenario. Finally, we evaluate our model on a newly collected ECG dataset specifically curated for this study, and make it public for the research community.
Michal Seják, Jakub Sido, David Žahour
2023-06-09T18:53:25Z
http://arxiv.org/abs/2306.06196v2
ElectroCardioGuard: Preventing Patient Misidentification in Electrocardiogram Databases through Neural Networks ###### Abstract Electrocardiograms (ECGs) are commonly used by cardiologists to detect heart-related pathological conditions. Reliable collections of ECGs are crucial for precise diagnosis. However, in clinical practice, the assignment of captured ECG recordings to incorrect patients can occur inadvertently. In collaboration with a clinical and research facility which recognized this challenge and reached out to us, we present a study that addresses this issue. In this work, we propose a small and efficient neural-network based model for determining whether two ECGs originate from the same patient. Our model demonstrates great generalization capabilities and achieves state-of-the-art performance in gallery-probe patient identification on PTB-XL while utilizing 760x fewer parameters. Furthermore, we present a technique leveraging our model for detection of recording-assignment mistakes, showcasing its applicability in a realistic scenario. Finally, we evaluate our model on a newly collected ECG dataset specifically curated for this study, and make it public for the research community. ## 1 Introduction Accurate interpretation of electrocardiogram (ECG) recordings is crucial for achieving high diagnostic accuracy and minimizing the risk of errors during diagnosis establishment and consecutive treatment. Unfortunately, in administrative practice, instances occur where physicians capture ECG recordings and assign them to incorrect patients by mistake. This issue increases the risk of inaccurate diagnoses and can have serious implications for patients. In recent years, medicine has witnessed significant advancements through the utilization of advanced technologies and automated systems to enhance diagnostic accuracy and treatment procedures. Artificial neural networks, as a technique of artificial intelligence, have gained increasing prominence in the healthcare sector and hold the potential for revolutionary changes. It should be emphasized that hospitals often face financial constraints that prevent them from acquiring expensive technologies or hardware not directly used for healthcare. For this reason, it is essential to utilize computationally inexpensive models that are affordable and easy to implement in the hospital environment. These models should be efficient in the analysis of ECG recordings and could provide an alternative to more expensive technologies without compromising the quality of diagnosis and patient care. This work is motivated by a hospital seeking a solution to address the issue of errors that occur in their database of ECG recordings, which can have significant implications for patient diagnosis and subsequent treatment. In this paper, we focus on the development and evaluation of an automated system for detection of misclassification of ECG recordings using a modern convolutional network architecture (CDIL-CNN [7]) suitable for processing sequential data. Our objective is to minimize the number of patient and recording mix-ups prior to ECG evaluations, which can lead to inaccurate diagnoses and compromised patient care. We believe the system could significantly reduce the number of errors in administrative ECG evaluations and contribute to higher accuracy and reliability of diagnoses. This work makes significant contributions in several key areas: I. 
The development of an exceptionally efficient system with low computational requirements allows for practical implementation in resource-constrained settings, specifically benefiting hospitals with limited computational resources. This aspect ensures the model's viability and effectiveness in real-world healthcare environments. II. We achieved improved performance while reducing the number of model parameters by more than 760 times in comparison with concurrent works. This enhancement increases the model's efficiency without compromising its accuracy, providing a more streamlined and effective solution for ECG analysis. III. Additionally, we present an anonymized version of a real-world dataset specifically curated for this study. This contribution not only advances the field of ECG analysis but also provides a valuable resource for future research in this domain. The availability of this dataset fosters collaboration and facilitates further investigations into improving diagnostic accuracy and patient care. IV. The work further simulates the deployment of the developed model under realistic conditions, replicating its application within a clinical setting. By mimicking real-world scenarios, this simulation provides insights into the model's performance and feasibility in practical use, offering valuable perspectives on its potential impact in improving ECG evaluations and supporting healthcare professionals in their decision-making process. In summary, this work's main contributions include the utilization of an efficient model for resource-constrained settings, achieving improved performance through parameter reduction, the publication of an anonymized real dataset, and simulating real-world deployment. These advancements offer promising avenues for enhanced ECG analysis, opening doors to more accurate diagnoses and improved patient care. ## 2 Related Work Early approaches to ECG-based patient identification rely entirely on hand-crafted feature extraction methods (FEMs) [1, 47]. Some recent works, while utilizing the representational power of neural networks or other function approximators, still utilize preliminary feature extraction methods [26, 11, 42, 30, 3]. These methods include R-peak detection [44, 27, 45, 15, 32] and segmentation [29], measuring onsets and durations of different beat phases [1, 30, 41], Kalman filter transformation [53] or PCA transformation [3]. FEMs are more resistant to noise in the original signal and therefore less prone to over-fitting. However, the choice and tuning of a specific FEM depend on prior expert domain knowledge and are therefore subject to possible human error and bias. Moreover, transforming the input signal to a different domain using FEMs is likely to - in addition to noise - remove a part of the information relevant to the task we are trying to solve. Suffice it to say, designing and optimizing FEMs automatically is difficult. With the rise of deep-learning methods and the availability of better hardware, researchers have applied deep neural networks to the task of identifying patients using the ECG signal directly [31, 25, 16, 24]. Such models not only require large amounts of data for training and are prone to over-fitting, but are also sensitive to noise, which commonly occurs in the ECG signal; wandering baseline, interference caused by power lines and other electrical devices, muscle contractions, motion artifacts and other high-frequency noise, to name a few [39]. 
To address the presence of noise, researchers pre-process the signal [34, 36, 54, 23, 43] using various high-pass, low-pass or band-stop/pass filters implemented using Fourier transformations [50], wavelet transformations [36, 5], or other techniques [31]. Regarding neural network architectures, prior art consists mostly of biometric systems based on convolutional neural networks (CNNs) [31, 10, 17, 16, 37] and recurrent neural networks implemented using long short-term memory cells (LSTMs) [29, 25, 24]. LSTM networks represent a natural solution to signal processing as they are designed specifically to handle long, sequential data [20]. However, data cannot be passed through LSTM networks in parallel due to their sequence-processing nature. That motivates the use of CNNs, whose convolutional filters can be parallelized on GPUs very efficiently. In the context of ECG biometrics, CNNs can be either applied to electrocardiogram images using two-dimensional filters [18, 33, 31] or directly to the input signals using one-dimensional filters [17]. A noticeable feature of many previous works is the small size of the datasets commonly used for training and evaluation of patient identification systems. Such datasets include the MIT-BIH Normal Sinus Rhythm (NSRDB) and MIT-BIH Arrhythmia (MITDB) [38, 13], CYBHi [8], ECG-ID [35] and PTB [4], which contain tens or hundreds of patients at most. Patient ECGs in small datasets are very unlikely to be representative and independent samples of the whole population, either due to biases introduced during the selection of patients to monitor or due to specific features of the devices or processes used to collect ECG signals. Evaluation on such data therefore provides limited proof of a system or a model generalizing well beyond the study's scope. Furthermore, the patient identification task is generally phrased as a direct multi-class classification problem [37, 31, 17, 25, 24, 12], which may be motivated by the presence of small patient databases. In other words, researchers build models that directly map ECG signals to patients using only their trained parameters. Researchers commonly achieve this by designing neural network architectures whose output is a soft-max layer with one "neuron" for each patient in their dataset. The downside of this approach is the fact that such a model - in its full form - is incapable of generalization, because the set of patients it works with is already predetermined by its design. The implications are mostly practical: the model cannot be shared among different hospital environments, nor can it handle changes in the set of patients; in that case, it has to be fine-tuned again or retrained from scratch. Works such as [24] amend this by discarding the soft-max layer when attempting to generalize beyond the known set of patients, and instead think of the network's output at the last-but-one layer as an embedding of the ECG signal in some D-dimensional ECG vector space. We call this embedding an ECG vector. To confirm the identity of a patient whose ECG readings are already known, a new ECG is embedded in the vector space (creating an ECG vector \(p_{a}\)) and compared to the average of their other ECG vectors \(\bar{p_{b}}=\frac{1}{k}\cdot(p_{b1}+p_{b2}+...+p_{bk})\) through cosine similarity. The decision whether the new ECG actually originates from the patient whom \(p_{b1}\) to \(p_{bk}\) belong to is obtained by comparing the result of the cosine similarity to some pre-determined threshold value [24]. 
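To make this verification rule concrete, the following is a minimal sketch of it in PyTorch; the function name and the default threshold are illustrative choices of ours, not taken from [24]:

```python
import torch
import torch.nn.functional as F

def verify_identity(p_a: torch.Tensor, known_vecs: torch.Tensor,
                    threshold: float = 0.8) -> bool:
    """Accept the claimed identity if the new ECG vector p_a is similar
    enough to the mean of the patient's stored vectors p_b1..p_bk."""
    p_b_mean = known_vecs.mean(dim=0)                # (1/k) * (p_b1 + ... + p_bk)
    sim = F.cosine_similarity(p_a, p_b_mean, dim=0)  # scalar in [-1, 1]
    return bool(sim >= threshold)
```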
Although this configuration appears satisfactory, it's uncertain whether cosine similarity can effectively compare ECG vectors of both familiar and unfamiliar patients, given the added complexity of the soft-max layer in transforming an ECG vector into a patient ID. Furthermore, limiting ourselves to the use of cosine similarity can cause our embedding model to "run out" of space on the hyper-sphere of ECG vectors to allocate to new patients. Even if the model was capable of perfectly embedding previously unseen patient ECGs, then for any threshold we choose, we can find a number of patients large enough for which the model starts failing to discriminate well between individuals. Oh et al. [40] have designed a model based on convolutional networks and Transformers [55] for representing electrocardiogram recordings as vectors and then fine-tuned it for patient classification using the ArcFace loss [9]. As this model is trained using metric learning [28], it does not require model retraining after the set of patients changes. However, since their model is quite large, it may be difficult to deploy it in a hardware-constrained environment. In summary, the issues we have discovered in related work are the following: multi-class patient classification, the use of large models, and evaluation on small datasets. Our study addresses these issues by: * designing a small model which decides whether two ECG recordings originate from the same individual * more importantly - evaluating our model on large public datasets (_cca_. 1000x larger than PTB and others) ## 3 Method Instead of treating the patient identification task as a multi-class classification problem, we approach the task of patient identification indirectly. Instead of having a neural network classify ECG signals into patient classes, we created a model to decide whether two different ECG signals originate from the same patient or not. Such a model does not require us to know the amount of patients beforehand and the set of patients for identification can change over time without the need to retrain the model. We then train our model in two phases. In the first phase, an embedding model learns to build the ECG vector space through metric learning [28], which is a technique of building vector representations of inputs based on a metric function, such as the Euclidean distance. In the second phase, the embedding model is augmented by a discriminator head and fine-tuned on ECG pairs. The discriminator head's inputs are two ECG vectors created by the embedding model, and its output is a probability estimate of whether those vectors belong to the same individual. Finally, to verify that a specific patient indeed owns a new, previously unseen ECG signal, we utilize a database of patients, which are clusters of previously classified ECG vectors (embedding model outputs). First, we convert the signal to an ECG vector using the embedding model and then estimate the probability that the vector belongs to the selected cluster using the discriminator head. The overview of this whole process is captured by Figure 1. We publish our code-base as a GitHub repository at [https://github.com/CaptainTrojan/electrocardioguard](https://github.com/CaptainTrojan/electrocardioguard) for the purpose of reproducibility. ### Datasets Before delving into the specifics of our training and evaluation procedures, we would like to introduce the datasets used for this purpose. 
**Figure 1:** The overview of our ECG-based patient identification method. **(a)** The first phase of training, which creates an ECG embedding model based on convolutional operations. The model maps ECG recordings to ECG vectors, thus describing an ECG vector space. In an ideal state, vectors (points) from the same patient are close to one another and far away from other patients - ECG vectors from the same patient have the same color. This model is described in Section 3.2. **(b)** The second phase of training, which produces a small discriminator head. It compares two vectors from a pair using linear combinations of weighted distances between them and outputs the probability that the recordings, which were converted to these vectors by the embedding model, originate from the same patient. The discriminator learns to output 1 for positive pairs (same patient) and 0 for negative pairs (different patients). This model is described at the end of Section 3.2. **(c)** Schema of the intended deployment scenario. The hospital staff captures a new ECG recording (blue) and assigns it to a given patient (red). First, our embedding model converts the ECG signal to an ECG vector. Then, all ECG vectors belonging to the selected patient are loaded from a database. Finally, we use the discriminator head to calculate the likelihood that this is indeed the patient who the new ECG recording belongs to. The procedure is described in detail in Section 3.3, paragraph "Overseer simulation". Our study makes use of four distinct datasets, one of which connects us to the prior art, PTB [4]. This dataset not only contains 12-lead ECG signals, but is also the largest and most relevant among all datasets mentioned in Section 2. The remaining three datasets are CODE-15% [46], PTB-XL [56], and a private collection of electrocardiogram data provided by the Institute for Clinical and Experimental Medicine (Prague, Czech Republic), which we refer to as "IKEM" in the context of our study. Table 1 shows their respective sizes. Notably, the newly created IKEM dataset is roughly four times larger than PTB-XL and contains more samples per patient on average, which is very useful for tasks involving intra-patient ECG comparison or matching. Each dataset contains raw electrocardiogram signals paired with an anonymized unique patient ID. We store the signals as 16-bit integers in HDF5 files with granularity 4.88 \(\mu V\). We also remove the redundant augmented leads and lead III, which can be calculated from leads I and II. These efforts to save storage space have resulted in a size decrease of roughly 60%, saving more than 50 gigabytes of data in total. When we input the signal to our model, we expand the reduced 8 leads back to the original 12, producing 12 voltage values for each time instance. All leads, regardless of sampling frequency (400-500 Hz) or length (8-10 seconds), were bidirectionally truncated or padded with zeroes to 4096 voltage measurements over time. In the exceptional case of PTB, which contains approx. 30-100 second long signals sampled at 1000 Hz, we down-sampled the input signal to 500 Hz before applying bidirectional truncation. Hence, all our ECG recordings are matrices of size \((4096,12)\). The train/dev/test split is applied to the sequence of ECG signals, which means that - in small numbers - patients may be shared across splits. ### Architecture In the following sections, we describe the training and evaluation procedure captured by Figure 1 in greater detail. 
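As a concrete illustration of the input handling described in the datasets section, the sketch below reconstructs the four dropped leads from the standard Einthoven/Goldberger relations and fixes the time axis to 4096 samples. The lead ordering of the stored 8-lead signal (I, II, V1-V6) is our assumption:

```python
import torch
import torch.nn.functional as F

def expand_to_12_leads(x8: torch.Tensor) -> torch.Tensor:
    """x8: (T, 8) signal with leads (I, II, V1..V6); returns (T, 12)."""
    I, II = x8[:, 0], x8[:, 1]
    III = II - I                # Einthoven: III = II - I
    aVR = -(I + II) / 2         # Goldberger augmented leads
    aVL = I - II / 2
    aVF = II - I / 2
    return torch.cat([x8, torch.stack([III, aVR, aVL, aVF], dim=1)], dim=1)

def fix_length(x: torch.Tensor, target: int = 4096) -> torch.Tensor:
    """Bidirectionally truncate or zero-pad the time axis of a (T, C) signal."""
    T = x.shape[0]
    if T >= target:
        start = (T - target) // 2
        return x[start:start + target]
    left = (target - T) // 2
    return F.pad(x, (0, 0, left, target - T - left))  # (C_left, C_right, T_left, T_right)
```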
\begin{table} \begin{tabular}{l l l l l} \hline \hline Title & \(\mathcal{N}\) ECGs & \(\mathcal{N}\) patients & Size & Train/dev/test split \\ \hline PTB & 549 & 290 & 69MB & 0/0/100 \\ PTB-XL & 21799 & 18869 & 2.1GB & 0/50/50 \\ IKEM & 98130 & 30290 & 6.3 GB & 0/50/50 \\ CODE-15\% & 345106 & 233479 & 22GB & 70/10/20 \\ \hline \hline \end{tabular} \end{table} Table 1: The datasets we use in our study. All datasets were separated into three non-overlapping sections for training, validation, and testing of our methods. The exact numbers can be retrieved using the _dataset_stats.py_ script. Pre-processing Before an electrocardiogram recording is input to our embedding model, it is pre-processed. As we have stated in Section 2, the raw ECG signal contains various types of noise. We have implemented baseline wander removal (Figure 2) and high-frequency noise removal (Figure 2) filters using _ptwt_ (PyTorch Wavelets, [2]). Regarding power-line interference, both power lines and ECG signals have varying frequencies across countries and data sources, so although power-line interference noise may be present in our data as well, we have decided to omit this filter. Finally, we normalize the input signal to z-scores (Figure 2), both in order to stabilize the gradient descent procedure and to calibrate the possibly different scales of electrocardiograms across different recording devices. Each of these transformations is applied to all leads separately and is a part of the embedding model, which makes the system resistant to noisy input. Embedding model We have experimented with two different embedding model architectures: a one-dimensional residual convolutional network (1D-RN, Table 2) [19] and a circular dilated convolutional network (CDIL-CNN, Table 2) [7], which is a novel architecture specifically designed for processing long sequences. These models pose an advantage over LSTM networks, Transformers [55] and state-space models [51] as embedding models, mostly thanks to their small size and short forward-pass duration. This enables hospitals to deploy our models on readily accessible hardware without the need to invest in high-end dedicated graphical processing units. Metric learning To initialize the ECG vector space defined by the embedding model in a way that involves good separation of patients, we examine two deep metric learning approaches: _triplet loss_ [48, 21] and _circle loss_ [52]. Both loss functions accept three ECG vectors: an anchor (A), a positive (P), and a negative sample (N), where the anchor and positive sample come from one patient and the negative from a different patient. Minimizing these losses corresponds to embedding the anchor and positive samples close to one another and far away from the negative sample. In other words, the goal is to maximize inter-patient distances and minimize intra-patient distances with respect to the individual ECG recordings. Figure 3 helps visualize what the end result might look like. Discriminator head After we finish training the embedding model, in order to utilize its ECG vector space initialization, we connect it to a small discriminator _head_ (a neural network whose inputs are the outputs of another neural network, see Figure 3(a)). The joint model resembles a Siamese setting (see Figure 3(b)), where the embedding model creates ECG vectors and the discriminator head decides whether they originate from the same patient. \begin{table} \end{table} Table 2: Summaries of evaluated embedding model structures. 
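As an illustration of the first training phase, here is a minimal sketch of a triplet-loss update. The names `embedder` (e.g., the CDIL-CNN) and `triplet_batches` (a sampler yielding anchor/positive/negative ECG batches) are hypothetical, and the margin is our own choice; the Adam learning rate of 0.001 matches the setting reported later in Table 3:

```python
import torch

triplet_loss = torch.nn.TripletMarginLoss(margin=1.0, p=2)
optimizer = torch.optim.Adam(embedder.parameters(), lr=0.001)

for anchor, positive, negative in triplet_batches:  # each of shape (B, 4096, 12)
    z_a, z_p, z_n = embedder(anchor), embedder(positive), embedder(negative)
    loss = triplet_loss(z_a, z_p, z_n)              # pull A toward P, push A away from N
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```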
For further details regarding architecture or implementation of these models, see the original publications for ResNet [19] and CDIL-CNN [7]. The input ECG signal has 12 voltage values for 4096 samples through time. The output ECG vector dimension is 256. \[l_{1}(p,q) =\sum_{i=1}^{N}|p_{i}-q_{i}| \tag{1}\] \[l_{2}(p,q) =\sum_{i=1}^{N}(p_{i}-q_{i})^{2}\] (2) \[csim(p,q) =\sum_{i=1}^{N}\frac{p_{i}\cdot q_{i}}{\sqrt{\sum\limits_{j=1}^{N }p_{j}^{2}}\cdot\sqrt{\sum\limits_{j=1}^{N}q_{j}^{2}}} \tag{3}\] More specifically, the discriminator head's prediction is based on a linear combination of the distances between two input ECG vectors (Equations 1, 2 and 3) that the embedding model creates. In order to let the network adapt to the embedding space's shape, we experiment with swapping the sum in the above formulas for a weighted sum instead whilst training those weights by gradient descent. The result is passed through a sigmoid layer in order to truncate the output to the range \((0,1)\), which is interpreted as probability. The Siamese model is trained on _pairs_ of ECG signals and learns to predict which pairs belong to the same patient and which belong to different patients, which is a binary classification problem. This model can be trained either end-to-end or with a frozen embedding model (its weights are never updated), so we experiment with both variants. ### Evaluation Finally, we evaluate our model on two tasks relevant to patient identification. The first task called gallery-probe matching is commonly used for this purpose in the context of identification based on similarity of representations [6, 22, 14, 49], including electrocardiogram-based patient identification [40]. Second, in order to evaluate the applicability of our model for ECG misclassification detection, we have designed a task we call the _overseer simulation_. Both of these tasks are described in this section and our GitHub repository contains a stand-alone script pt_evaluate.py that can be used for evaluating future models under the same conditions. Gallery-probe matchingThe main objective of this task is to analyze the similarity of ECG vectors obtained from the same patient. To achieve this, we select a sample of N patients from a dataset, ensuring that each patient has at least two different ECG recordings, denoted as A and B. We divide these recordings into two distinct sets: the _gallery_ set, which contains all the A recordings, and the _probe_ set, which contains all the B recordings. Each set contains exactly one recording per patient. Our aim is to identify, for each probe element, the most similar gallery element, with the desired outcome of both elements belonging to the same patient. We measure the fraction of correctly matched pairs and report it as _accuracy_. Here, the embedding model is first used for converting all \(2N\) ECG recordings to ECG vectors, and the discriminator head \(f_{\theta}(u,v)\) is used to calculate the similarity between members of all \(N^{2}\) pairs. Overseer simulationIn the context of this task, we maintain a database of ECG vectors where each belongs to a specific patient from a dataset. This database is initialized by inserting at least one ECG vector (created by the embedding model) for N random patients from a dataset. Then, similarly to the previous task, we select another K random ECG recordings in total from some of those N patients, and aim to assign them correctly to their owners. 
The differences between this setup and the setup in gallery-probe matching are that both the database (gallery) and the probe set can contain multiple ECG recordings from one patient, and that both sets can have different sizes: the database has at least N recordings, whereas the probe set has exactly K recordings. Since we attempt to approximate the real use-case of our model, we simulate a hospital staff member whose task is to classify the probe ECG vectors instead of using the model directly. To replicate the errors observed in practice, our simulated hospital staff member makes a _mistake_ (a random patient is chosen instead of the correct one) with some small probability \(p\). Our model's role in this scenario is that of an _overseer_, whose task is to detect these mistakes. Furthermore, as the elements from the probe set are classified, they are _inserted_ into the database, influencing the overseer's future decisions. If the staff makes a mistake and the overseer does not detect it, the database becomes corrupted. In summary, at each step of the overseer simulation, we are provided with an ECG vector and a patient selected by the staff, and our goal is to decide whether it is likely that this vector truly originates from the selected patient. Our notation for the \(j\)-th ECG vector of patient number \(i\) is \(p_{ij}\). One patient \(P_{i}\), then, is a set of vectors \(p_{i1},\ p_{i2},\ ...,\ p_{in}\), where \(n\) is the number of vectors originating from this patient. The vector being classified is called \(v\), which is the output of the embedding model as seen in Figure 1(c). We aim to calculate a measure of the likelihood of \(v\) originating from the patient \(P\) selected by the staff, called \(l(v,P)\), and if this value is smaller than some likelihood _threshold_, we claim that the overseer has detected a mistake. This decision should be based on the knowledge learned by the discriminator head \(f_{\theta}(u,v)\). Nevertheless, we can only use the discriminator head when comparing \(v\) to a single vector \(u\), as in the gallery-probe matching task, but comparing \(v\) to a _set_ of vectors \(\{p_{ij}\}\) is not trivial. Therefore, we experiment with several different approaches to using the discriminator to calculate the likelihood, which are described in the remainder of this section. VecAvg: Average of vectors. An initial approach to calculating \(l(v,P_{i})\) is to replicate the approach by Jyotishi et al. [24], which is to take the average of the vectors \(p_{ij}\) and use it as a single representative of \(P_{i}\), as shown in Equation 4. \[l(v,P_{i})=f_{\theta}(v,\bar{p_{i}}),\qquad\bar{p_{i}}=\frac{1}{n}\sum_{j=1}^{n}p_{ij} \tag{4}\] Such an approach is, however, vulnerable to outlier vectors \(p_{ij}\). Consider the case where an ECG signal measurement \(p_{ik}\) is projected very far away from the general proximity of \(P_{i}\)'s other vectors: the average may now represent a completely different patient. DiscAvg: Average of discriminator outputs. Another simple approach is to take the average of the individual discriminator outputs between \(v\) and every vector \(p_{ij}\), as shown in Equation 5. \[l(v,P_{i})=\frac{1}{n}\cdot\sum_{j=1}^{n}f_{\theta}(v,p_{ij}) \tag{5}\] This approach is more resistant to outliers, as each vector is processed individually and contributes to the overall likelihood independently of others. However, it does not eliminate the threat of outlier noise completely, as distant vectors still have a large impact on the value of \(l\). 
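To ground the pieces introduced so far, here is a minimal illustrative sketch of a discriminator head together with the two likelihood measures above and the gallery-probe accuracy. It assumes a learnable weighted \(l_{1}\) distance followed by a hidden layer of size 16 (the configuration that performs best in our later hyper-parameter search); all names are our own:

```python
import torch
import torch.nn as nn

class DiscriminatorHead(nn.Module):
    """Weighted l1 distance (Eq. 1 with learnable per-coordinate weights)
    passed through a small MLP and a sigmoid."""
    def __init__(self, dim: int = 256, hidden: int = 16):
        super().__init__()
        self.w = nn.Parameter(torch.ones(dim))
        self.mlp = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        d = (self.w * (u - v).abs()).sum(dim=-1, keepdim=True)  # weighted l1 distance
        return torch.sigmoid(self.mlp(d)).squeeze(-1)           # P(same patient)

def vec_avg(head, v, cluster):   # Eq. 4: compare v to the cluster mean
    return head(v, cluster.mean(dim=0))

def disc_avg(head, v, cluster):  # Eq. 5: average the pairwise discriminator outputs
    return torch.stack([head(v, p) for p in cluster]).mean()

def gallery_probe_accuracy(head, gallery, probe):
    """Fraction of probes whose most similar gallery element is their own
    patient's recording; gallery/probe are (N, dim) in matching order."""
    correct = 0
    for i in range(probe.shape[0]):
        scores = head(probe[i].expand_as(gallery), gallery)  # N similarity scores
        correct += int(scores.argmax().item() == i)
    return correct / probe.shape[0]
```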
WeightedDiscAvg: Average of discriminator outputs with quality weighting. To amend this issue, we can instead estimate the _quality_ of each \(p_{ij}\) as a representative of \(P_{i}\) and weigh the discriminator outputs by this quality value; see Equation 6. \[l(v,P_{i})=\frac{1}{\sum\limits_{j=1}^{n}q_{j}}\cdot\sum\limits_{j=1}^{n}f_{\theta}(v,p_{ij})\cdot q_{j},\qquad q_{j}=q(p_{ij},P_{i})=\sum\limits_{\begin{subarray}{c}k=1\\ k\neq j\end{subarray}}^{n}f_{\theta}(p_{ij},p_{ik}) \tag{6}\] This method guarantees that ECG vectors that the discriminator considers poor representatives of the patient group \(P_{i}\) will have a smaller impact on the general likelihood measure \(l(v,P_{i})\). Notice that \(q_{j}\) does not have to be normalized itself, because \(l(v,P_{i})\) is normalized by the sum of \(q_{j}\) regardless. Note that the difference between the previous and the current approach does not become apparent until the cluster size is at least 3. A possible downside of this approach is the fact that it fails to utilize the knowledge that some clusters are coherent (according to the discriminator) and some are not. Although the weighting itself allocates more impact to representative vectors \(p_{ij}\), it does not capture the fact that such a weighting was even necessary. WeightedConsistency: WeightedDiscAvg with cluster consistency weighting. The main idea behind WeightedConsistency is that we calculate an estimate of a consistency measure \(c_{i}\) of \(P_{i}\) and think of this consistency as an indicator of how much we can rely on the information provided by the intra-cluster and cluster-to-\(v\) discriminator outputs. The exact formulation is captured by Equation 7, where \(l^{\prime}\) is Equation 6. \[l(v,P_{i})=c_{i}\cdot l^{\prime}(v,P_{i}),\qquad c_{i}=\frac{1}{n(n-1)}\sum\limits_{\begin{subarray}{c}p,q\in P_{i}\\ p\neq q\end{subarray}}f_{\theta}(p,q) \tag{7}\] Imagine a situation where we are deciding between two clusters, \(P_{1}\) and \(P_{2}\), all elements in both \(P_{1}\) and \(P_{2}\) are very close to \(v\) according to \(f_{\theta}\), and \(P_{2}\) contains an outlier. If we simply weigh down the influence of the outlier, the likelihoods for \(P_{1}\) and \(P_{2}\) would be roughly equivalent, but the mere fact that \(P_{2}\) even contains an outlier means that the patient's data is inconsistent or mislabeled. In any case, we lose confidence that assigning \(v\) to \(P_{2}\) is the right decision. Therefore, in order to decide to add \(v\) to \(P_{2}\), the values of \(f_{\theta}\) between \(v\) and the other cluster elements should be large enough to outweigh this. Note that normalizing \(c_{i}\) itself inside one cluster is redundant, as any positive (not sign-changing) linear transformation of \(l(v,P)\) is monotonic and therefore does not influence the value of its argument maxima. \(c_{i}\) simply has to be normalized by any scalar multiple of \(\frac{1}{n(n-1)}\) so that it is normalized with respect to the size of \(P_{i}\), as different patients can have different numbers of ECG vectors. ## 4 Experiments For training of our model in both phases, we use the largest dataset available to us - CODE-15%. Given the fact that the PTB and PTB-XL datasets were collected by the same institute, it is reasonable to assume a higher dependence between their samples. Hence, we have opted out of including any part of PTB in the validation set. 
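Returning briefly to the likelihood measures, the two weighted variants (Equations 6 and 7) can be sketched as follows, continuing the illustrative code from before; `head` is the hypothetical discriminator, and clusters are assumed to contain at least two vectors:

```python
import torch

def weighted_disc_avg(head, v, cluster):
    """Eq. 6: discriminator outputs weighted by each vector's quality q_j,
    i.e., how strongly it matches the rest of its own cluster (n >= 2)."""
    n = cluster.shape[0]
    q = torch.stack([
        sum(head(cluster[j], cluster[k]) for k in range(n) if k != j)
        for j in range(n)
    ])
    scores = torch.stack([head(v, p) for p in cluster])
    return (scores * q).sum() / q.sum()

def weighted_consistency(head, v, cluster):
    """Eq. 7: Eq. 6 scaled by the cluster consistency c_i, the mean
    pairwise discriminator output inside the cluster."""
    n = cluster.shape[0]
    c = sum(head(cluster[j], cluster[k])
            for j in range(n) for k in range(n) if k != j) / (n * (n - 1))
    return c * weighted_disc_avg(head, v, cluster)
```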
Since we aim to produce a model that generalizes well, we never train it on data outside of CODE-15% and use that data for verification of its ability to generalize. If we had trained the model using a combination of CODE-15% and other large datasets, we might have obtained better results. However, doing so could have undermined our confidence in the validity of the results, as it would have been difficult to ascertain whether the model was not simply adapting to the unique features and systematic noise of those specific three datasets. The number of possible A/P/N _triplets_ and _pairs_ is (due to the ratio between patients and signals) approximately quadratic in the number of patients, making it impractically large for enumeration. Hence, during training and validation, we _sample_ the sets of triplets and pairs instead of enumerating them sequentially. Sampling of positive (same-patient) and negative (different-patients) pairs is strictly balanced. To ensure deterministic validation, we seed the triplet and pair generators with a fixed value and early-stop the training procedure on the validation loss of CODE-15%. The specific part of the dataset that we sample from is defined by the train/dev/test split. ### Optimal configuration Since our goal is to find out which configuration generalizes best, we tune various model hyperparameters and look for the combination that maximizes the _minimal_ AUC measure across all three large datasets. The results of this hyper-parameter search are captured by Tables 3 and 4. Due to the vast range of possible combinations, we have first identified an optimal configuration for the embedding model using a full discriminator without a hidden layer, and then optimized the configuration for the discriminator separately. Regarding the embedding model, we found that the filtering methods we have experimented with have not been beneficial to the model's performance, especially the baseline wander removal filter, which actually hurts the performance (\(-0.03\) AUC). The CDIL-CNN architecture is already resistant to noise, and thus the filters are largely redundant and may even corrupt the original signal on special occasions (for example, when iterative filters, such as our wavelet-based BWR, fail to converge). It is also impossible to say whether freezing the embedding model after the first stage helps or not; it would certainly be beneficial to freeze it if overall training times were a major factor. We have trained our models on various GPU cards in the Metacentrum cloud computing grid service and the overall training time never surpassed 18 hours. The same goes for the model's embedding size, which has turned out to be an insignificant parameter in our setting. We have settled on 256, but reducing the size to 128 should have a negligible impact on the system's performance while halving the required storage space for patient vectors. Note the large value of \(\Delta_{AUC}\) for the discriminator \(l_{1}\) distance parameter. It shows that not letting the model adjust the distance member weights - that is, literally computing the distance between the input ECG vectors - significantly hurts its performance. We can also see that it is likely not optimal to incorporate further distance metrics into the discriminator head, as that causes the model to over-fit slightly and worsens its generalization capabilities. 
However, this claim is not backed by statistical testing, as we fail to reject the hypothesis that the best setting should exclude the cosine distance completely under a confidence level of 95%. But even under the assumption that the discriminator should indeed use only a single distance measure, in Table 4, we can see that \(l_{1}\) is not clearly superior to \(l_{2}\) in this sense. In an attempt to further regularize the model and thus cause it to generalize better, we have briefly experimented with shuffling the electrocardiogram leads and adding a small amount of Gaussian noise to them. However, these techniques have led to a significant decrease in performance (-0.02 to -0.06 AUC) and have thus been abandoned. \begin{table} \begin{tabular}{l l l l l} \hline \hline **Hyper-parameter** & **Options** & **Best** & \(\Delta_{AUC}\) & **p-value** \\ \hline Model & 1D-RN, CDIL & CDIL & 0.013 & 1e-8 \\ Embedding loss (EL) & triplet, circle & triplet & 0.016 & 3e-6 \\ Embedding size (ES) & 128, 256, 384, 512 & 256 & 0.002 & 0.151 \\ Embedding fine-tuning (EF) & freeze, end-to-end & end-to-end & 0.004 & 0.037 \\ Normalization (NORM) & apply, exclude & apply & 0.031 & 3e-8 \\ Baseline wander removal (BWR) & apply, exclude & exclude & 0.026 & 4e-10 \\ High-frequency noise removal (HFNR) & apply, exclude & apply & 0.001 & 0.322 \\ \hline Discriminator hidden size (DHS) & 0 (exclude), 16 & 16 & 0.007 & 1e-3 \\ Discriminator \(l_{1}\) distance (DL1) & exclude, merge, full & full & 0.412 & 2e-7 \\ Discriminator \(l_{2}\) distance (DL2) & exclude, merge, full & exclude & 0.002 & 0.044 \\ Discriminator \(cos\) distance (DCOS) & exclude, merge, full & exclude & 0.001 & 0.086 \\ \hline \hline \end{tabular} \end{table} Table 3: All examined hyper-parameters and their options. Embedding fine-tuning represents a setting where, in the Siamese model, the embedding model is either frozen or allowed to be trained end-to-end during the second phase of training. The last column, \(\Delta_{AUC}\), is the difference in performance between the best configuration and the best alternate configuration across all other options for each specific hyper-parameter. For example, the best configuration's performance average is better by 0.031 (AUC) than the same configuration but without normalization. The p-value is that of a one-tailed t-test between the corresponding hyper-parameter setting experiments, where \(H_{A}\) is that the best configuration is better than the alternative one. All models were trained using the Adam optimizer with learning rate 0.001, without any LR scheduler or weight regularization. \begin{table} \begin{tabular}{l l l l l l} \hline \hline **DCOS** & **DHS** & **DL1** & **DL2** & **min AUC** & **SEM** \\ \hline merge & 16 & full & merge & 0.958763 & 0.001611 \\ exclude & 16 & exclude & full & 0.959258 & 0.001055 \\ merge & 16 & full & exclude & 0.959435 & 0.001657 \\ full & 16 & full & exclude & 0.959533 & 0.000492 \\ exclude & 16 & full & exclude & 0.961165 & 0.001002 \\ \hline \hline \end{tabular} \end{table} Table 4: The best 5 configurations of both models. SEM stands for Standard Error of the Mean = \(\frac{\sigma}{\sqrt{n}}\), which can be used for statistical testing. All discriminator model settings use the best embedder model hyper-parameters. The full table can be found at our GitHub repository. ### Evaluation Finally, we measure the model's performance on the two evaluation tasks: gallery-probe matching and overseer simulation. The results are shown in Table 6. 
For gallery-probe matching, we have selected a random sample of patients with at least two recordings from the test set of each respective dataset. The exception is PTB-XL, where the gallery and probe sets were built from the whole dataset, as in the study by Oh et al. [40], for the purpose of a meaningful and fair comparison. It should be noted that although the optimal model configuration was selected based on the minimum dev set performance across all three large datasets, the minimum performance was never that of PTB-XL, which allows us to perform gallery-probe evaluation on the entirety of PTB-XL without compromising the validity of our results. The sample size is included in the aforementioned table. For overseer simulation, we select 10000 patients to initialize the database with and 1000 further electrocardiograms to classify (probe) under a staff mistake rate of 2%, meaning that there are 20 mistakes to detect among 980 correct classifications. The mistake rate of 2% is our best estimate of the real mistake rate occurring in practice, as it is the difference in pairwise accuracy of our model on PTB-XL and IKEM. Due to the size requirements, PTB was excluded from these experiments, as it contains only 290 patients, and the number of initial patients for PTB-XL was only \(\approx 9400\). Since we found that weighting the discriminator outputs by representational quality (WeightedDiscAvg) performs the best (Table 5), we consequently report results that were obtained under this approach. \begin{table} \begin{tabular}{l l l l l} \hline \hline **Approach** & **VecAvg** & **DiscAvg** & **WeightedDiscAvg** & **WeightedConsistency** \\ \hline P@R95 (median) & 3.66\% & 18.35\% & **22.35\%** & 7.14\% \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of likelihood calculation approaches in the overseer simulation task aggregated across all datasets (CODE-15%, IKEM, PTB-XL) and mistake rates (50%, 5%, 2%, 1%). We hypothesize that the drop in performance of WeightedConsistency compared to WeightedDiscAvg is caused by this approach being biased towards clusters of size 1 (where consistency is maximized), regardless of whether they are a good fit for the classified ECG vectors or not. Notably, we can see that the evaluation tasks are much harder than the training objective due to the vast number of ECG vector pairs the model must correctly recognize as true negatives. During pairwise training and evaluation of the discriminator, positive (same patient) and negative (different patients) pairs are sampled in a balanced manner. However, in the gallery-probe matching task, the discriminator must assign a probability value to a single positive pair that is higher than those of thousands of other negative pairs, which are created between the probe element and the entire gallery. The situation is similar in the overseer simulation task, except that there are a few more positive pairs. Our efforts to minimize the difference in accuracy in the pairwise task across all datasets have resulted in a model with the highest degree of generalization across all three datasets. However, we can notice that even a small decrease in pairwise accuracy has a significant impact on performance in the evaluation tasks, which are much harder. It follows that, for optimal performance in production, institutions should instead invest in fine-tuning our model on their specific datasets, using our published version as a robust starting point. In such cases, according to Table 6, if we wanted to detect 
19 out of 20 mistakes made in 1000 classifications, our model would cause only 1 false alarm in roughly 23 classifications, which the hospital staff would have to dismiss. Our model's capacity for generalization is also demonstrated by its performance in gallery-probe matching and overseer simulation (F1) on PTB-XL. Despite never encountering a single example from PTB-XL, our model achieved performance comparable to that on CODE-15%, which encompasses the entire training dataset. The significant performance drop observed on IKEM could be caused by its different patient-to-recording ratio (roughly twice as many recordings per patient on average, see Table 1) or noisier labels. Furthermore, our model's performance in the gallery-probe task on PTB-XL matches the state of the art set by Oh et al. [40] (12 leads, best variant), but using a significantly smaller number of parameters. It is difficult to specify precisely how much smaller our model is without knowing the exact number of parameters of the state-of-the-art model, but seeing that it contains BERT-base (110M parameters) without the embedding layer (24M parameters) and includes additional convolutional layers, we can determine that our model is at least 760 times smaller, if not more. Our model even surpasses the state-of-the-art results by 0.6%, but this small improvement could be attributed to the fact that the current version of PTB-XL (1.0.3) contains 16 fewer patients with at least two recordings than the older version used by Oh et al. ## 5 Discussion In summary, our work has contributed to the field of automatic electrocardiogram processing and patient identification in multiple ways. First, we publish a large, brand-new electrocardiogram patient identification dataset with many recordings per patient on average, which is suitable for training and evaluation of models based on electrocardiogram representations. The dataset is in the stage of negotiating legal terms for publication. Once the legal arrangements are completed, it will be located in the datasets folder of our public GitHub project. \begin{table} \begin{tabular}{l l l l l l} \hline \hline **Task** & **Metric** & **Dataset** & & & \\ & & CODE-15\% & IKEM & PTB-XL & PTB \\ \hline \hline pairwise (training objective) & AUROC & 0.990 & 0.971 & 0.984 & 0.982 \\ & Accuracy & 95.8\% & 92.1\% & 94.3\% & 93.1\% \\ \hline gallery-probe & Accuracy & 60.3\% & 46.0\% & 58.3\% & 77.0\% \\ & Sample size & 2127 & 2127 & 2111 & 113 \\ \hline overseer simulation & P@R95 & \(0.28\pm 0.09\) & \(0.08\pm 0.02\) & \(0.17\pm 0.08\) & – \\ & F1 & \(0.59\pm 0.04\) & \(0.39\pm 0.06\) & \(0.56\pm 0.06\) & – \\ & CR & \(45\%\pm 6\%\) & \(7\%\pm 3\%\) & \(19\%\pm 5\%\) & – \\ \hline \hline \end{tabular} \end{table} Table 6: Overview of the results obtained on test sets across all datasets and evaluation tasks, including the original training objective. In the context of pairwise matching, the decision threshold is optimized for maximizing accuracy on the dev set. For overseer simulation, we report 95% confidence intervals of the mistake detection rates, where P@R95 stands for precision at recall 95, F1 is the F-measure between precision and recall under the threshold obtained by achieving recall 95 on the dev set, and CR is the fraction of corrected mistakes (the patient with the highest likelihood was the real owner) out of all detected mistakes. 
Second, we publish a tiny (700 kB) neural-network model utilizing state-of-the-art techniques for sequence processing, capable of deciding whether two ECG recordings originate from the same individual. We further show how our model can be used for misclassification detection by employing it in a streamed clustering process, and evaluate it in a simulation mimicking the real application. Both the simulation and various clustering approach implementations are published as a stand-alone part of our GitHub project. This was the primary goal of our study, which was primarily motivated by the Institute for Clinical and Experimental Medicine's sponsorship, as they are actively preparing to implement our model in their production environment. Our model demonstrates applicability not only in the intended scenario but also in a variety of other cases. For instance, our model can be employed to fix existing databases, rectifying inconsistencies and improving data quality. Moreover, it provides a solution for situations where a patient's record is missing within a specific time frame, allowing us to identify the inconsistency by cross-referencing the date of addition and determining which 100 patients were present on that day. Another valuable application is the ability to detect potential patient swaps that may occur on a particular day, which is a task similar to gallery-probe matching, but much easier. This feature can be especially useful in quickly identifying and resolving any mix-ups, thanks to the capabilities of our system. Lastly, our model can address challenges encountered in remote areas where doctors may not have access to their own electrocardiogram machines. By verifying the integrity of electrocardiogram measurements obtained through external requests, it helps mitigate the risk of local mixing of recordings. ## Acknowledgement Computational resources were supplied by the project "e-Infrastruktura CZ" (e-INFRA LM2018140) provided within the program Projects of Large Research, Development and Innovations Infrastructures. This work has been supported by Grant No. SGS-2022-016 Advanced methods of data processing and analysis. ## Conflicts of interest The authors declare the following potential conflicts of interest: Michal Seják and David Žahour received funding from the Institute for Clinical and Experimental Medicine (IKEM) to conduct this research project on the detection of patient misclassifications using electrocardiogram recordings. IKEM provided financial support for the research and covered data collection. The funding from IKEM did not involve any restrictions on study design, data analysis, or result interpretation. It is important to note that despite the funding received from IKEM, the research was conducted independently and the authors maintained full control over the study design, data analysis, and decision to publish. The authors affirm that the study was conducted with scientific rigor, objectivity, and integrity, adhering to established research protocols and methodologies. The evaluation of the model was performed using publicly available datasets, ensuring transparency and minimizing potential biases. We would like to acknowledge the support provided by IKEM and express our gratitude for their financial assistance in conducting this research. However, the funders had no role in the study design, data analysis or pre-processing, manuscript preparation, or decision to submit for publication. 
Please note that all authors have reviewed and approved the contents of this disclosure.
2302.03519
Efficient Parametric Approximations of Neural Network Function Space Distance
It is often useful to compactly summarize important properties of model parameters and training data so that they can be used later without storing and/or iterating over the entire dataset. As a specific case, we consider estimating the Function Space Distance (FSD) over a training set, i.e. the average discrepancy between the outputs of two neural networks. We propose a Linearized Activation Function TRick (LAFTR) and derive an efficient approximation to FSD for ReLU neural networks. The key idea is to approximate the architecture as a linear network with stochastic gating. Despite requiring only one parameter per unit of the network, our approach outcompetes other parametric approximations with larger memory requirements. Applied to continual learning, our parametric approximation is competitive with state-of-the-art nonparametric approximations, which require storing many training examples. Furthermore, we show its efficacy in estimating influence functions accurately and detecting mislabeled examples without expensive iterations over the entire dataset.
Nikita Dhawan, Sicong Huang, Juhan Bae, Roger Grosse
2023-02-07T15:09:23Z
http://arxiv.org/abs/2302.03519v2
# Efficient Parametric Approximations of Neural Network Function Space Distance ###### Abstract It is often useful to compactly summarize important properties of model parameters and training data so that they can be used later without storing and/or iterating over the entire dataset. As a specific case, we consider estimating the Function Space Distance (FSD) over a training set, i.e. the average discrepancy between the outputs of two neural networks. We propose a Linearized Activation Function TRick (LAFTR) and derive an efficient approximation to FSD for ReLU neural networks. The key idea is to approximate the architecture as a linear network with stochastic gating. Despite requiring only one parameter per unit of the network, our approach outcompetes other parametric approximations with larger memory requirements. Applied to continual learning, our parametric approximation is competitive with state-of-the-art nonparametric approximations, which require storing many training examples. Furthermore, we show its efficacy in estimating influence functions accurately and detecting mislabeled examples without expensive iterations over the entire dataset. ## 1 Introduction As machine learning models are trained on increasingly large quantities of data or experience, it can be useful to compactly summarize information contained in a training set. In continual learning, an agent continues interacting with its environment over a long time period -- longer than it is able to store explicitly. A natural goal is to avoid overwriting previously learned knowledge as it learns new tasks (Goodfellow et al., 2013) while controlling storage costs. Even in cases where it is possible to store the entire training set, a compact representation circumvents the need for expensive iterative procedures over the full data. We focus on the problem of estimating _Function Space Distance (FSD)_ for neural networks: the amount by which the outputs of two networks differ, in expectation over the training distribution. Benjamin et al. (2018) observed that regularizing FSD over the previous task data is an effective way to prevent catastrophic forgetting. Other tasks such as influence estimation (Bae et al., 2022), model editing (Mitchell et al., 2021), unlearning (Bourtoule et al., 2021) and second-order optimization (Amari, 1998; Bae et al., 2022) have also been formulated in terms of FSD regularization or similar locality constraints. Methods for summarizing the training data can be categorized as parametric or nonparametric. In the context of preventing catastrophic forgetting, parametric approaches typically store the parameters of a previously trained network, along with additional information about the importance of different directions in parameter space for preserving past knowledge. The canonical example is Elastic Weight Consolidation (Kirkpatrick et al., 2017, EWC), which uses a diagonal approximation to the Fisher information matrix. Nonparametric approaches explicitly store, in addition to network parameters, a collection (coreset) of training examples, often optimized directly to be the most important or memorable ones (Rudner et al., 2022; Pan et al., 2020; Titsias et al., 2019). Currently, the most effective approaches to prevent catastrophic forgetting are nonparametric due to the lack of sufficiently accurate parametric models. However, this advantage comes at the expense of high storage requirements. In this paper, we formally formulate neural network FSD estimation and propose novel parametric approximations.
To motivate our approach, notice that several parametric approximations, like EWC, can be interpreted as a second-order Taylor approximation to the FSD. This leads to a quadratic form involving the Fisher information matrix \(\mathbf{F_{\theta}}\) or some other metric matrix \(\mathbf{G_{\theta}}\), where \(\mathbf{\theta}\) denotes the network parameters. Second-order approximations are practically useful because one can estimate \(\mathbf{F_{\theta}}\) or \(\mathbf{G_{\theta}}\) by sampling vectors from a distribution with these matrices as the covariance (Martens and Grosse, 2015). Then, tractable probabilistic models can be fit to these samples to approximate the corresponding distribution. Unfortunately, these tend to be inaccurate for continual learning compared to nonparametric approaches. We believe the culprit is the second-order Taylor approximation: we show in several examples that even the exact second-order Taylor approximation can perform poorly in terms of average classification accuracy and backward transfer in sequentially learned tasks. Since such an approximation can be interpreted as network linearization (Grosse, 2021), this finding is consistent with a recent line of results that find linearized approximations of neural networks to be an inaccurate model of their behavior (e.g., Seleznova and Kutyniok, 2022). In Section 2, we present this network linearization perspective of some existing approaches for regularization in function space. Our method, based on a _Linearized Activation Function TRick (LAFTR)_, does _not_ make a second-order Taylor approximation in the parameter space, and hence is able to capture nonlinear interactions between parameters of the network. Specifically, it linearizes each step of the network's forward pass with respect to its inputs. In the case of ReLU networks, our approximation yields a linear network with stochastic gating, which we refer to as the _Bernoulli Gated Linear Network (BGLN)_. We derive a stochastic and a deterministic estimate of FSD, both of which rely only on the first two moments of the data. This allows the application of our methods in different scenarios where stochasticity is or isn't desirable. We evaluate our BGLN approximation in the contexts of continual learning and influence function estimation. Our method significantly outperforms previous parametric approximations despite being much more memory-efficient. For continual learning tasks, our method is competitive with nonparametric approaches. For influence function estimation tasks, it closely matches an oracle estimate of a network's loss after a data point is removed, but without having to iterate over the whole dataset. Figure 1: Comparison of FSD regularization on a 1-D regression task. **(Left)** Training sequentially on two tasks (blue yields \(f_{1}\), then yellow yields \(f_{2}\)) results in catastrophic forgetting. The LAFTR approximation more closely matches the true function \(f_{2}\) than its NTK approximation does. **(Right)** BGLN retains performance on task 1 after training on task 2 more accurately than EWC and NTK. The key contributions and findings of this work are: * We introduce LAFTR, an idealized FSD approximation, which improves over parameter space linearization by capturing nonlinear interactions between weights in different layers. * We propose the Bernoulli Gated Linear Network (BGLN), an efficient parametric FSD approximation for
ReLU networks based on LAFTR, which stores only aggregate statistics of the data and the activations. * In continual learning, BGLN outcompetes state-of-the-art methods on sequential MNIST and CIFAR100 tasks, with significantly lower memory requirements than nonparametric methods. * For influence function estimation, BGLN accurately approximates the effect of removing a single data point without iterating over or storing the full dataset. ## 2 Background Let \(\mathbf{z}=f(\mathbf{x},\mathbf{\theta})\) denote the function computed by a neural network, which takes in inputs \(\mathbf{x}\) and parameters \(\mathbf{\theta}\). Consistent with prior works, we use FSD to refer to the expected output space distance\({}^{1}\) \(\rho\) between the outputs of two neural networks (Benjamin et al., 2018; Grosse, 2021; Bae et al., 2022) with respect to the training distribution, as defined in equation 1. When the population distribution is inaccessible, the empirical distribution is often used as a proxy: Footnote 1: Note that we use the term _distance_ throughout since we focus on Euclidean distance in our derivation. However, other metrics like KL divergence can also be used, as shown in Section 5. \[D(\mathbf{\theta}_{0},\mathbf{\theta}_{1},p_{\text{data}}) =\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}[\rho(f(\mathbf{x}, \mathbf{\theta}_{0}),f(\mathbf{x},\mathbf{\theta}_{1}))] \tag{1}\] \[\approx\frac{1}{N}\sum_{i=1}^{N}\rho(f(\mathbf{x}^{(i)},\mathbf{ \theta}_{0}),f(\mathbf{x}^{(i)},\mathbf{\theta}_{1})), \tag{2}\] where \(p_{\text{data}}\) is the data-generating distribution. Constraining the FSD term has been successful in preventing catastrophic forgetting (Benjamin et al., 2018), computing influence functions (Bae et al., 2022), training teacher-student models (Hinton et al., 2015), and fine-tuning pre-trained networks (Jiang et al., 2019; Mitchell et al., 2021). Natural choices for \(\rho\) are Euclidean distance for networks trained using mean-squared error (e.g. regression) and KL divergence for those trained with cross-entropy loss (e.g. classification). Consider the continual learning setting as a motivating example. Common benchmarks (Normandin et al., 2021) involve sequentially learning tasks \(t\in\{1,\dots,T\}\), using loss function \(\mathcal{L}\) and a penalty on the FSD between the parameters \(\mathbf{\theta}\) and the parameters \(\{\mathbf{\theta}_{i}\}\) fit to previous tasks. The penalty is computed over the previously seen data distribution \(p_{i}\) and then scaled by a hyperparameter \(\lambda_{\text{FSD}}\): \[\mathbf{\theta}_{t}=\arg\min_{\mathbf{\theta}}\mathcal{L}(\mathbf{\theta})+\lambda_{ \text{FSD}}\sum_{i=1}^{t-1}D(\mathbf{\theta},\mathbf{\theta}_{i},p_{i}). \tag{3}\] Continuing with the notation in equation 2, one way to regularize the FSD is to store the training set and explicitly evaluate the network outputs using both \(\mathbf{\theta}_{0}\) and \(\mathbf{\theta}_{1}\) (perhaps on random mini-batches). However, this has the drawbacks of having to store and access the entire training set throughout training (precisely the thing continual learning research tries to avoid) and necessarily estimating FSD stochastically. Instead, we would like to compactly summarize information about the training set or distribution.
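As a concrete reference point for the approximations that follow, here is a minimal sketch of the empirical FSD of equation 2 with squared Euclidean distance as \(\rho\); the layer widths, random parameters, and data below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """ReLU MLP forward pass (equation 5); the last layer is linear."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, W @ a + b)        # a^(l) = phi(s^(l))
    return weights[-1] @ a + biases[-1]       # z = s^(L)

def empirical_fsd(X, params0, params1):
    """Average 0.5 * ||z1 - z0||^2 over the rows of X (equation 2)."""
    total = 0.0
    for x in X:
        dz = mlp_forward(x, *params1) - mlp_forward(x, *params0)
        total += 0.5 * (dz @ dz)
    return total / len(X)

rng = np.random.default_rng(0)
dims = [4, 8, 8, 2]                           # assumed layer widths
def init_params():
    Ws = [rng.normal(0.0, 0.5, (dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
    bs = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]
    return Ws, bs

X = rng.normal(size=(256, 4))                 # stand-in for the training set
print(empirical_fsd(X, init_params(), init_params()))
```

Note that both drawbacks named above are visible here: the whole training set `X` must be kept around, and every evaluation iterates over it.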
Many (but not all) practical FSD approximations are based on a second-order Taylor approximation: \[D(\mathbf{\theta}_{0},\mathbf{\theta}_{1},p_{\text{data}})\approx\frac{1}{2}(\mathbf{ \theta}_{1}-\mathbf{\theta}_{0})^{\top}\mathbf{G}_{\mathbf{\theta}}(\mathbf{\theta}_{1}-\mathbf{ \theta}_{0}), \tag{4}\] where \(\mathbf{G}_{\mathbf{\theta}}=\nabla_{\mathbf{\theta}}^{2}D(\mathbf{\theta}_{0},\mathbf{\theta},p_{ \text{data}})\) is the corresponding Hessian. In the case where the network outputs parametrize a probability distribution and \(\rho\) corresponds to KL divergence, \(\mathbf{G}_{\mathbf{\theta}}\) reduces to the more familiar Fisher information matrix \(F_{\mathbf{\theta}}=\mathbb{E}_{\mathbf{x}\sim p_{\text{data}},\mathbf{y}\sim P_{ \mathbf{y}|\mathbf{x}}(\mathbf{\theta})}[\nabla_{\mathbf{\theta}}\log p(\mathbf{y}|\mathbf{\theta},\mathbf{x})\nabla_{\mathbf{\theta}}\log p(\mathbf{y}|\mathbf{\theta},\mathbf{ x})^{\top}]\), where \(P_{\mathbf{y}|\mathbf{x}}(\mathbf{\theta})\) represents the model's predictive distribution over \(\mathbf{y}\). It is possible to sample random vectors in parameter space whose covariance is \(\mathbf{G}_{\mathbf{\theta}}\) (Martens et al., 2012; Grosse and Martens, 2016; Grosse, 2021) and some parametric FSD approximations work by fitting simple statistical models to the resulting distribution. For instance, assuming all coordinates are independent gives a diagonal approximation (Kirkpatrick et al., 2017), and more fine-grained independence assumptions between network layers yield a Kronecker-factored approximation (Martens and Grosse, 2015; Ritter et al., 2018). In practice, instead of sampling vectors whose covariance is \(\mathbf{G_{\theta}}\), many works use the empirical gradients during training, whose covariance is the empirical Fisher matrix. We caution the reader that the empirical Fisher matrix is less well motivated theoretically and can result in different behavior (Kunstner et al., 2019). ## 3 A Parametric Estimate with LAFTR We introduce and apply **LAFTR** (Linearized Activation Function TRick) to ReLU networks and propose **BGLN** (Bernoulli Gated Linear Network) which approximates a given model architecture as a linear network with stochastic gating. While it is applicable to different architectures, we first explicitly derive our approximation for multilayer perceptrons (MLPs) with \(L\) fully-connected layers and ReLU activation function \(\phi\). We also discuss its generalization to convolutional networks and empirically evaluate it in Section 5. For MLPs with inputs \(\mathbf{x}\) drawn from \(p_{\text{data}}\), layer \(l\) weights and biases \((\mathbf{W}^{(l)},\mathbf{b}^{(l)})\), and outputs \(\mathbf{z}\), the computation of preactivations and activations at each layer is recursively defined as follows: \[\mathbf{s}^{(l)}=\mathbf{W}^{(l)}\mathbf{a}^{(l-1)}+\mathbf{b}^{(l)},\ \mathbf{a}^{(l)}=\phi(\mathbf{s}^{(l)}) \tag{5}\] with \(\mathbf{a}^{(0)}=\mathbf{x}\), and \(\mathbf{s}^{(L)}=\mathbf{z}\). We denote \(\mathbf{z}_{0}\) and \(\mathbf{z}_{1}\) to be samples of the output distribution obtained with parameters \(\mathbf{\theta}_{0}\) and \(\mathbf{\theta}_{1}\), respectively. ### Linearized Activation Function TRick Given parameters \(\mathbf{\theta}_{0}\) and \(\mathbf{\theta}_{1}\) of two networks, we linearize _each step of the forward pass_ around its value under \(\mathbf{\theta}_{0}\).
For an MLP that alternates between linear layers and non-linear activation functions, the linear transformations are unmodified while the activation functions are replaced with a first-order Taylor approximation around their inputs. Hence, the network's computation becomes linear in \(\mathbf{x}\) (but, importantly, remains nonlinear in \(\mathbf{\theta}\)). Let \((\mathbf{W}_{i}^{(l)},\mathbf{b}_{i}^{(l)})\) denote the weights and biases of layer \(l\) in network \(i\). \[\mathbf{s}_{0}^{(l)} =\mathbf{W}_{0}^{(l)}\mathbf{a}_{0}^{(l-1)}+\mathbf{b}_{0}^{(l)} \tag{6}\] \[\mathbf{a}_{0}^{(l)} =\phi(\mathbf{s}_{0}^{(l)})\] (7) \[\mathbf{s}_{1}^{(l)} =\mathbf{W}_{1}^{(l)}\mathbf{a}_{1}^{(l-1)}+\mathbf{b}_{1}^{(l)}\] (8) \[\mathbf{a}_{1}^{(l)} =\phi(\mathbf{s}_{0}^{(l)})+\phi^{\prime}(\mathbf{s}_{0}^{(l)})\odot(\bm {s}_{1}^{(l)}-\mathbf{s}_{0}^{(l)}), \tag{9}\] where \(\phi^{\prime}\) is the derivative of the activation function. We define some additional notation for differences between preactivations and activations. For \(\Delta\mathbf{s}^{(l)}=\mathbf{s}_{1}^{(l)}-\mathbf{s}_{0}^{(l)}\), \(\Delta\mathbf{a}^{(l)}=\mathbf{a}_{1}^{(l)}-\mathbf{a}_{0}^{(l)}\), and \(\Delta\mathbf{W}^{(l)}=\mathbf{W}_{1}^{(l)}-\mathbf{W}_{0}^{(l)}\), we have the following formulae: \[\Delta\mathbf{s}^{(l)} =\Delta\mathbf{W}^{(l)}\mathbf{a}_{0}^{(l-1)}+\mathbf{W}_{1}^{(l)}\Delta\mathbf{ a}^{(l-1)}+\Delta\mathbf{b}^{(l)} \tag{10}\] \[\Delta\mathbf{a}^{(l)} =\phi^{\prime}(\mathbf{s}_{0}^{(l)})\odot\Delta\mathbf{s}^{(l)}. \tag{11}\] Here, base cases are \(\mathbf{a}_{0}^{(0)}=\mathbf{x}\) and \(\Delta\mathbf{a}^{(0)}=0\). Written this way, the \(\Delta\) terms at each step of computation rely linearly on corresponding terms from the immediately preceding step, making LAFTR conducive to implementation via propagation of the base cases through the network. Observe that with \(\mathbf{W}_{0}\) held fixed, the model parametrized using \(\mathbf{W}_{1}\) is a linear network, i.e. the network's exact outputs are a linear function of its inputs. There are two significant differences between LAFTR and parameter space linearization (referred to as NTK (Jacot et al., 2018) henceforth), which confer an advantage to the former. First, our linearization is with respect to inputs instead of parameters (Lee et al., 2019), hence capturing nonlinear interactions between the parameters in different layers. Second, the only computations that introduce linearization errors into our approximation are those involving activation functions, in contrast with other methods which suffer linearization errors for each layer, including linear layers, where nonlinear parameter dependencies exist. In fact, our method is exact for linear networks, whereas NTK is only approximate. We note that linear networks are commonly used to model nonlinear training dynamics of neural networks (Saxe et al., 2013). Hence, we regard our weaker form of linearity as a significant advantage over Taylor approximations in parameter space. LAFTR applies to any linear (including fully-connected and convolutional) network with nonlinear activations. Next, we use the intuition above to motivate two probabilistic approximations which enable a memory-efficient implementation of this algorithm. 
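A minimal sketch of this propagation for a ReLU MLP, following equations 6-11; the network shapes and random parameters are illustrative assumptions.

```python
import numpy as np

def laftr_output_diff(x, params0, params1):
    """Propagate a0 and the difference terms; return Delta z = z1 - z0."""
    (Ws0, bs0), (Ws1, bs1) = params0, params1
    a0, da = x, np.zeros_like(x)                   # base cases
    n_layers = len(Ws0)
    for l in range(n_layers):
        W0, b0, W1, b1 = Ws0[l], bs0[l], Ws1[l], bs1[l]
        s0 = W0 @ a0 + b0                          # Eq. 6
        ds = (W1 - W0) @ a0 + W1 @ da + (b1 - b0)  # Eq. 10
        if l == n_layers - 1:                      # output layer is linear
            return ds                              # Delta z
        m = (s0 > 0).astype(float)                 # phi'(s0) for ReLU
        a0, da = m * s0, m * ds                    # Eqs. 7 and 11

rng = np.random.default_rng(1)
dims = [4, 8, 2]
make = lambda: ([rng.normal(0.0, 0.5, (dims[i + 1], dims[i])) for i in range(2)],
                [np.zeros(dims[i + 1]) for i in range(2)])
dz = laftr_output_diff(rng.normal(size=4), make(), make())
print(0.5 * (dz @ dz))                             # per-example LAFTR distance
```

Replacing `m` with a Bernoulli sample of the stored activation rates, and `x` with a Gaussian sample matched to the data moments, gives the stochastic BGLN variant described next.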
### Bernoulli Gating In the specific case of ReLU networks, our approximation depends on the training data through the signs of preactivations, which we denote as a mask \(\mathbf{m}=\mathds{1}\{\boldsymbol{s}>0\}\), since \(\phi(\boldsymbol{s})=\mathbf{m}\odot\boldsymbol{s}\) and \(\phi^{\prime}(\boldsymbol{s})\odot\Delta\boldsymbol{s}=\mathbf{m}\odot\Delta \boldsymbol{s}\). Here, \(\phi^{\prime}\) is the piecewise derivative of the ReLU function given by \(\phi^{\prime}(\boldsymbol{s})=\mathds{1}\{\boldsymbol{s}>0\}\). Given this structure, we model \(\mathbf{m}\) as a vector of independent Bernoulli random variables and fit its mean vector \(\boldsymbol{\mu}\) using maximum likelihood estimation (i.e., computing the fraction of times the unit is activated). We can accordingly rewrite equations 7 and 11 as \(\boldsymbol{a}_{0}^{(l)}=\mathbf{m}^{(l)}\odot\boldsymbol{s}_{0}^{(l)}\) and \(\Delta\boldsymbol{a}^{(l)}=\mathbf{m}^{(l)}\odot\Delta\boldsymbol{s}^{(l)}\), respectively, where \(\odot\) denotes element-wise multiplication and \(\mathbf{m}^{(l)}\sim Ber(\boldsymbol{\mu}^{(l)})\). For efficiency, we compute this average over the last epoch of training. We call this approximation the Bernoulli Gated Linear Network (BGLN). Note that this gating technique is not specific only to MLPs. It can be implemented in ReLU convolutional networks by replacing activations with Bernoulli random variables. ### Propagating Moments of the Activations A key insight enabling efficient computation is that when the output distance \(\rho\) is chosen to be (squared) Euclidean distance, the FSD depends only on the first and second moments of the output difference \(\Delta\boldsymbol{z}:=\boldsymbol{z}_{1}-\boldsymbol{z}_{0}\): \[\mathbb{E}\left[\tfrac{1}{2}||\Delta\boldsymbol{z}||^{2}\right]=\tfrac{1}{2}|| \mathbb{E}[\Delta\mathbf{z}]||^{2}+\tfrac{1}{2}\mathit{tr}\;(\mathrm{Cov}( \Delta\mathbf{z})). \tag{12}\] We compute these terms recursively by propagating the first two moments of \(\boldsymbol{a}_{0}^{(l)}\) and \(\Delta\boldsymbol{a}^{(l)}\) through the network. Using equations 10 and 11, we obtain the following equations: \[\mathbb{E}[\boldsymbol{s}_{0}^{(l)}] =\boldsymbol{W}_{0}^{(l)}\mathbb{E}[\boldsymbol{a}_{0}^{(l-1)}]+ \boldsymbol{b}_{0}^{(l)}\] \[\mathbb{E}[\boldsymbol{a}_{0}^{(l)}] =\boldsymbol{\mu}^{(l)}\odot\mathbb{E}[\boldsymbol{s}_{0}^{(l)}]\] \[\mathbb{E}[\Delta\boldsymbol{s}^{(l)}] =\Delta\boldsymbol{W}^{(l)}\mathbb{E}[\boldsymbol{a}_{0}^{(l-1)}] +\boldsymbol{W}_{1}^{(l)}\mathbb{E}[\Delta\boldsymbol{a}^{(l-1)}]+\Delta \boldsymbol{b}^{(l)}\] \[\mathbb{E}[\Delta\boldsymbol{a}^{(l)}] =\boldsymbol{\mu}^{(l)}\odot\mathbb{E}[\Delta\boldsymbol{s}^{(l)}]\] \[\mathrm{Cov}(\boldsymbol{s}_{0}^{(l)}) =\boldsymbol{W}_{0}^{(l)}\mathrm{Cov}(\boldsymbol{a}_{0}^{(l-1)}) \boldsymbol{W}_{0}^{(l)T}\] \[\mathrm{Cov}(\Delta\boldsymbol{s}^{(l)}) \approx\Delta\boldsymbol{W}^{(l)}\mathrm{Cov}(\boldsymbol{a}_{0}^ {(l-1)})\Delta\boldsymbol{W}^{(l)T}+\boldsymbol{W}_{1}^{(l)}\mathrm{Cov}( \Delta\boldsymbol{a}^{(l-1)})\boldsymbol{W}_{1}^{(l)T}\] \[\mathrm{Cov}(\boldsymbol{a}_{0}^{(l)}) =(\boldsymbol{\mu}^{(l)}\boldsymbol{\mu}^{(l)T})\odot\mathrm{ Cov}(\boldsymbol{s}_{0}^{(l)})\] \[\mathrm{Cov}(\Delta\boldsymbol{a}^{(l)}) =(\boldsymbol{\mu}^{(l)}\boldsymbol{\mu}^{(l)T})\odot\mathrm{ Cov}(\Delta\boldsymbol{s}^{(l)})\] We assume that \(\text{Cov}(\mathbf{a}_{0},\Delta\mathbf{a})\) is close to \(0\). 
This assumption is tested empirically in our experiments and we find that it does not severely move the FSD estimate away from the true empirical FSD (see Figure 2). Hence, the only information that needs to be stored about the data itself is its first and second moments. While the choice to store only the moments is justified only for squared Euclidean distance, we find that it also works well empirically for other output space metrics such as KL divergence. ### BGLN-D and BGLN-S The most straightforward way to use the BGLN approximation is to draw Monte Carlo samples of the random variables. When only the first and second moments matter, we are free to assume Gaussianity of the inputs. We denote this method BGLN-S (S for "stochastic"). This is sufficient in situations where FSD is used as a regularization term in stochastic gradient-based optimization (as we do in our continual learning experiments). In other situations, it is advantageous to have a deterministic computation; for instance, optimization with nonlinear conjugate gradient requires a deterministic objective. In the case of Euclidean distance as the output space metric, we can exactly compute the BGLN approximation by propagating the first and second moments of all random variables through the forward pass. We call the deterministic estimator BGLN-D. **BGLN-S** is outlined in Algorithm 1 and **BGLN-D** in Algorithm 2. We describe the analogous BGLN-S computations for convolutional networks in Appendix C. We present the above algorithms using the mean and covariance of the data. Note that storing moments of the data usually has a lower memory cost than storing sufficient subsets of the data itself. We can further reduce the memory requirement by approximating the covariance matrix as a diagonal matrix, i.e., using only the variance of each dimension of the inputs. This is equivalent in cost to storing two data points per task. We empirically investigate the effect of this approximation on continual learning benchmarks. Furthermore, capturing the expected FSD over the training distribution in a single term, via a single forward pass, is far less computationally expensive than iterative alternatives. ### Class-conditional Estimates In the classification setting, it is also possible to extend our method to a more fine-grained, class-conditional approximation. In particular, we can fit a mixture model for our probabilistic approximations, with one component per class. In this case, each class has its own associated input moments and Bernoulli mean parameters. The lower memory cost of our method allows for this when the number of classes is not too large. We refer to this variant as BGLN-CW. As expected, it boosts performance in our continual learning experiments at the expense of a slightly higher memory requirement, as shown in Tables 1, 2 and 3. ## 4 Related Works Several works (Benjamin et al., 2018; Bernstein et al., 2020; Bae et al., 2022) have highlighted the importance of measuring meaningful distances between neural networks. Benjamin et al. (2018) contrast training dynamics in parameter space and function space and observe that function space distances are often more useful than, and not always correlated with, parameter space distances. Bae et al. (2022) propose an Amortized Proximal Optimization (APO) scheme that regularizes an FSD estimate to the previous iterate for second-order optimization.
Natural gradient descent (Amari et al., 1995; Amari, 1998) can also be interpreted as a steepest descent method, using a second-order Taylor approximation to the FSD (Pascanu and Bengio, 2014). Parisi et al. (2019); De Lange et al. (2021); Ramasesh et al. (2020); Normandin et al. (2021) have reviewed and surveyed the challenge of catastrophic forgetting in continual learning, along with benchmarks and metrics to evaluate different methods. Parametric methods focus on different approximations to the weight space metric matrix, like diagonal (Kirkpatrick et al., 2017, EWC) or Kronecker-factored (Ritter et al., 2018, OSLA). As described in Section 2, we interpret these as second-order Taylor approximations to the FSD with further structured approximations to the Hessian. Several methods are motivated as approximations to a posterior Gaussian distribution in a Bayesian setting (Ebrahimi et al., 2019), for instance through a variational lower bound (Nguyen et al., 2017) or via Gaussian process inducing points (Kapoor et al., 2021). Non-parametric methods (Kapoor et al., 2021; Titsias et al., 2019; Pan et al., 2020; Rudner et al., 2022; Kirichenko et al., 2021) usually employ some form of experience replay of stored or optimized data points. Some of these methods (Pan et al., 2020) can also be related to the Neural Tangent Kernel (Jacot et al., 2018, NTK), or in other words, network linearization. Doan et al. (2021) directly study forgetting in continual learning in the infinite width NTK regime. Mirzadeh et al. (2022) further study the impact of network widths on forgetting. In this paper, we also examine influence functions (Cook, 1979; Hampel, 1974), which are another application that involves the FSD between networks. Influence functions are a classical robust statistics technique that has since been used in machine learning (Koh and Liang, 2017). Bae et al. (2022) formally study influence functions in neural networks and show that they approximate an objective called the proximal Bregman response function (PBRF). This approximation depends on an FSD term that is typically computed by iterating through the full training dataset. ## 5 Experiments We empirically assess the effectiveness of LAFTR (our idealized method) and the BGLN (our practical algorithm) in approximating FSD as well as their usefulness for downstream tasks: continual learning and influence function estimation. The experiments investigate the following questions: * Can LAFTR outperform NTK in approximating FSD? * Does LAFTR improve performance and memory cost on continual learning benchmarks relative to existing methods? * How do the choice of output space metric \(\rho\) and the use of the Gaussian input and Bernoulli activation approximations impact empirical performance? * Can the BGLN perform competitively with iteration-based influence function estimators without requiring iteration over the dataset? ### Comparing FSD Estimators To conduct further empirical analysis of our methods' estimation and minimization of the true (empirical) FSD, we use tasks and models from standard continual learning settings which are prone to forgetting. These include Split MNIST, Permuted MNIST and Split CIFAR100 (Pan et al., 2020; Rudner et al., 2022). In addition to directly evaluating the continual learning performance, we also use a collection of networks trained in the course of this experiment (with varying hyperparameter settings) to directly evaluate the accuracy of the FSD estimates.
Specifically, we vary the learning rate and the number of training iterations, and consider the set of trained networks that result; on each pair of these networks, we compare the FSD estimates against the true empirical FSD computed using the full training set. Figure 2 (Left) shows that BGLN-S and BGLN-D consistently estimate the true FSD more accurately than NTK. Figure 2: Compared to NTK, LAFTR-based approximations consistently give closer FSD values to the true empirical FSD when evaluated on Permuted MNIST **(Left)** and Split CIFAR100 **(Right)**. Quantified in terms of Spearman rank-order and Kendall’s Tau correlation coefficients, LAFTR (96.36 and 79.19, respectively) outperforms NTK (86.42 and 71.71, respectively). Figure 3: **(Left)** While training on task 2, FSD from the optimal task 1 parameters increases with task 2 accuracy. Optimizing BGLN-D and class-conditioned BGLN-D-CW effectively minimizes the true FSD. **(Right)** LAFTR has a higher correlation with true FSD than NTK, with a more significant advantage as network depth (and hence the number of nonlinear interactions) increases. Analogous analysis for LAFTR using CIFAR100 in Figure 2 (Right) shows a similar trend when the estimators are given access to the same coreset of inputs. We can also measure how the true FSD changes when different FSD estimates are optimized during training, as in Figure 3 (Left), where BGLN methods more effectively minimize true FSD as new task accuracy increases. Finally, we train networks of varying depths on CIFAR100 tasks and measure correlation (Spearman rank-order (Spearman, 1961) and Kendall's Tau (Kendall, 1938)) with true FSD. Figure 3 (Right) shows that LAFTR has a higher correlation using both metrics, and its advantage over NTK increases with network depth, and hence with the number of nonlinear interactions between parameters. This corroborates our intuition that LAFTR captures nonlinearities that NTK is unable to account for. ### Continual Learning Recall the formulation of continual learning in terms of FSD, as described in equation 3. We visualize our method's comparative performance on 1-D regression with two sequential tasks shown in Figure 1. More realistically, we test our methods on standard benchmarks used in prior works (Pan et al., 2020; Rudner et al., 2022), with standard architectures for a fair comparison. See Appendix D.2 for details on the datasets, architectures, and hyperparameters. We evaluate average final accuracy across tasks, backward transfer (Lopez-Paz and Ranzato, 2017) and memory cost. **Toy Regression.** Figure 1 shows the functions learned by different methods when sequentially trained on two one-dimensional regression tasks. LAFTR gives a better approximation of the learned function than NTK. When used to regularize the network, BGLN retains good predictions on both tasks, while EWC and exact parameter space linearization (NTK) suffer catastrophic forgetting. We hypothesize that this difference in performance is due to important nonlinearities between network parameters that EWC and NTK approximations are unable to capture. **Split and Permuted MNIST.** As shown in Tables 1 and 2, our LAFTR-based methods outperform other parametric methods (EWC, OSLA, and VCL) on Split and Permuted MNIST tasks and are competitive with the state-of-the-art (SOTA) nonparametric methods in terms of average accuracy.
Class-conditional approximations further boost performance and the diagonal approximation to input covariance (BGLN-S-Var, BGLN-D-Var) does not harm it significantly. With respect to backward transfer, a more direct measure of forgetting, BGLN methods significantly outperform the SOTA. Finally, they are also amenable to successful adaptation to the nonparametric setting when a coreset is available for use in place of Gaussian samples. \begin{table} \begin{tabular}{l|c c} \hline \hline **Method** & **Split MNIST** & **Permuted MNIST** \\ \hline Nonparametric & & \\ \hline **VCL (coreset)** & 98.40 & 95.50 \\ **VAR-GP (coreset)** & \(90.57\pm 1.06\) & \(\mathbf{97.20\pm 0.08}\) \\ **FROMP (coreset)** & \(99.00\pm 0.04\) & \(94.90\pm 0.04\) \\ **S-FSVI (coreset)** & \(99.54\pm 0.04\) & \(95.76\pm 0.02\) \\ **NTK (coreset)** & \(99.50\pm 0.09\) & \(96.46\pm 0.11\) \\ **BGLN-S (coreset)** & \(99.50\pm 0.03\) & \(96.36\pm 0.13\) \\ \hline Parametric & & \\ \hline **EWC** & 63.10 & 84.00 \\ **OSLA** & 80.56 & 95.73 \\ **VCL** & 97.00 & \(87.50\pm 0.61\) \\ **BGLN-D** & \(99.72\pm 0.03\) & \(96.03\pm 0.20\) \\ **BGLN-D-CW** & \(99.78\pm 0.02\) & \(96.85\pm 0.02\) \\ **BGLN-S** & \(99.64\pm 0.04\) & \(96.36\pm 0.12\) \\ **BGLN-S-CW** & \(99.77\pm 0.05\) & \(96.99\pm 0.07\) \\ **BGLN-D-Var** & \(99.64\pm 0.04\) & \(94.98\pm 0.18\) \\ **BGLN-S-Var** & \(99.50\pm 0.03\) & \(96.36\pm 0.13\) \\ \hline \hline \end{tabular} \end{table} Table 1: Average accuracies of nonparametric and parametric approaches on Split and Permuted MNIST datasets. \begin{table} \begin{tabular}{l|c|c} \hline \hline **Method** & **Split MNIST** & **Permuted MNIST** \\ \hline **FROMP** & \(-0.50\pm 0.20\) & \(-1.00\pm 0.10\) \\ **S-FSVI** & \(-0.21\pm 0.06\) & \(-0.65\pm 0.21\) \\ \hline **BGLN-S** & \(\mathbf{-0.04\pm 0.03}\) & \(\mathbf{-0.41\pm 0.08}\) \\ **BGLN-D** & \(\mathbf{-0.09\pm 0.04}\) & \(-0.56\pm 0.04\) \\ **BGLN-S-CW** & \(-0.18\pm 0.06\) & \(\mathbf{-0.37\pm 0.04}\) \\ **BGLN-D-CW** & \(\mathbf{-0.07\pm 0.07}\) & \(-1.17\pm 0.07\) \\ \hline \hline \end{tabular} \end{table} Table 2: Backward transfer on Split and Permuted MNIST. Lower is better. **Split CIFAR100.** We consider the much more challenging Split CIFAR100 task to compare our method to existing approaches and tease apart the effects of our algorithmic choices and approximations. Table 3 summarizes these results and analytically compares the memory costs associated with each method (see Appendix D.1 for details). **Coreset** refers to a coreset of real inputs vs. Gaussian samples, **Bernoulli** refers to Bernoulli activations vs. simply passing preactivations through the ReLU function (termed LAFTR here), and **CW** refers to the class-conditional estimate. Our empirical analysis shows that given a random coreset of the same size as comparable methods, LAFTR significantly outperforms the SOTA as well as the NTK baseline on average accuracy and backward transfer. Other LAFTR and BGLN variants remain competitive with prior methods while incurring lower memory costs. We also observe that performance is hurt to some extent by the Gaussian and Bernoulli modeling assumptions, while it is improved by class-conditioning. We present a more fine-grained task-wise comparison of accuracies in Figure 5 of Appendix D.3. ### Influence Function Estimation To further assess BGLN's applicability to other settings involving FSD estimation and regularization, we consider influence function estimation (Cook, 1979; Hampel, 1974; Koh and Liang, 2017).
Given parameters \(\mathbf{\theta}_{0}\) trained on dataset \(\mathcal{D}_{\text{train}}\) of size \(N\), influence functions approximate the parameters \(\mathbf{\theta}_{-}\) that would be obtained by training without a particular point \((\mathbf{x},\mathbf{y})\in\mathcal{D}_{\text{train}}\). The difference in loss between \(\mathbf{\theta}_{0}\) and \(\mathbf{\theta}_{-}\) is an indicator of the influence of \((\mathbf{x},\mathbf{y})\) on the trained network. Bae et al. (2022a) show that influence functions in neural networks can be formulated as solving for the proximal Bregman response function (PBRF): \[\mathbf{\theta}_{-}=\operatorname*{arg\,min}_{\mathbf{\theta}\in\mathbb{R}^{d}}- \frac{1}{N}\mathcal{L}(f(\mathbf{x},\mathbf{\theta}),\mathbf{y})+D_{B}(\mathbf{\theta},\mathbf{ \theta}_{0},p_{\text{train}})+\frac{\lambda}{2}\|\mathbf{\theta}-\mathbf{\theta}_{0} \|^{2}. \tag{13}\] Here, the first term maximizes the loss of the data point we are interested in removing. The second term is the Bregman divergence defined on network outputs and measures the FSD between \(\mathbf{\theta}\) and \(\mathbf{\theta}_{0}\) over training distribution \(p_{\text{train}}\) similar to the FSD term as defined in equation 1. For standard loss functions like squared error and cross-entropy, the Bregman divergence term is equivalent to the soft training error where the original targets are replaced with soft targets produced by \(\mathbf{\theta}_{0}\). Finally, the last term is a proximity term with strength \(\lambda>0\), which prevents large changes in weight space. Intuitively, the PBRF maximizes the loss of data we would like to remove while constraining the network in both function and weight space so that the predictions and losses of other training examples remain unaffected. 
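A hedged sketch of how the objective in equation 13 might be assembled, with the Bregman/FSD term supplied by a parametric estimator such as BGLN-D; `loss_fn`, `fsd_fn`, and the flat parameter-vector layout are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def pbrf_objective(theta, theta0, x_rm, y_rm, N, lam, loss_fn, fsd_fn):
    """Proximal Bregman response function objective (equation 13)."""
    up_weight = -loss_fn(theta, x_rm, y_rm) / N            # raise loss on removed point
    fsd_term = fsd_fn(theta, theta0)                       # function-space proximity
    prox_term = 0.5 * lam * np.sum((theta - theta0) ** 2)  # weight-space proximity
    return up_weight + fsd_term + prox_term

# Minimizing this over theta (deterministically, e.g. with conjugate gradient,
# since BGLN-D is non-stochastic) yields theta_minus; the self-influence score
# is then the difference in loss on (x_rm, y_rm) between theta_minus and theta0.
```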
Existing approaches for this optimization face two key challenges: (1) the entire training dataset must be stored and iterated over, requiring as many forward passes as there are mini-batches, and (2) techniques like nonlinear Conjugate Gradient (CG) (Hager and Zhang, 2006) do not work well with the stochastic gradients produced by sampling batches of data. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Method** & \(\mathbf{\rho}\) & **Coreset** & **Bernoulli** & **CW** & **Average Accuracy \(\uparrow\)** & **Backward Transfer \(\downarrow\)** & **Memory Cost** \\ \hline Nonparametric & & & & & & & \\ **VCL (coreset)** & & & – & & \(67.40\pm 0.60\) & – & \(2P+Nd\) \\ **VAR-GP (coreset)** & & & – & & \(-\) & \(2P+Nd+C^{2}N^{2}\) \\ **FROMP (coreset)** & & & – & & \(76.20\pm 0.20\) & \(-2.60\pm 0.90\) & \(2P+Nd+C^{2}N^{2}\) \\ **S-FSVI (coreset)** & & & – & & \(77.60\pm 0.20\) & \(-2.50\pm 0.20\) & \(2P+Nd+C^{2}N^{2}\) \\ **NTK (coreset)** & KL & – & & & \(77.61\pm 0.20\) & \(-2.03\pm 0.04\) & \(2P+Nd\) \\ **LAFTR (coreset)** & KL & ✓ & ✗ & ✗ & \(78.33\pm 0.01\) & \(-0.78\pm 0.10\) & \(P+Nd\) \\ **BGLN-S (coreset)** & KL & ✓ & ✓ & ✗ & \(73.27\pm 0.01\) & \(-4.99\pm 0.28\) & \(P+Nd\) \\ **LAFTR (coreset)** & Euclidean & ✓ & ✗ & ✗ & \(76.22\pm 0.01\) & \(-2.64\pm 0.40\) & \(P+Nd\) \\ \hline Parametric & & & – & & \(71.60\pm 0.40\) & – & \(2P\) \\ **EWC** & & & – & & \(72.61\) & – & \(P+\frac{7L_{+}}{2}p_{1}^{2}\) \\ **OSLA** & & & – & & \(-\) & \(2P\) \\ **VCL** & & & – & & \(-\) & \(2P\) \\ **LAFTR** & KL & ✗ & ✗ & \(75.61\pm 0.01\) & \(-19.03\pm 0.59\) & \(P+d+d^{2}\) \\ **LAFTR-CW** & KL & ✗ & ✓ & \(76.22\pm 0.01\) & \(-1.45\pm 0.63\) & \(P+C(d+d^{2})\) \\ **BGLN-S** & KL & ✗ & ✓ & ✗ & \(72.37\pm 0.01\) & \(-8.20\pm 0.04\) & \(P+d+d^{2}\) \\ **BGLN-S-CW** & KL & ✗ & ✓ & ✓ & \(74.02\pm 0.01\) & \(-2.44\pm 0.15\) & \(P+C(A+d+d^{2})\) \\ **LAFTR** & Euclidean & ✗ & ✗ & \(75.51\pm 0.01\) & \(-3.12\pm 0.44\) & \(P+d+d^{2}\) \\ **BGLN-S** & Euclidean & ✗ & ✓ & ✗ & \(74.29\pm 0.01\) & \(-5.49\pm 0.05\) & \(P+A+d+d^{2}\) \\ **BGLN-S-CW** & Euclidean & ✗ & ✓ & ✓ & \(77.78\pm 0.01\) & \(-1.75\pm 0.50\) & \(P+C(A+d+d^{2})\) \\ \hline \hline \end{tabular} \end{table} Table 3: Split CIFAR100: Average Accuracy and Backward Transfer. Notation for memory cost: \(p_{l}=\#\) parameters in layer \(l\), \(P=\#\) parameters \(=\sum_{l=1}^{L}p_{l}\), \(A=\#\) activations \(<P\), \(d=\) data dimension, \(N=\) coreset size, \(C=\#\) classes. LAFTR enables estimating the PBRF (or FSD) by storing only the first two data moments, requires just a single forward pass to compute it, and provides a deterministic function to optimize, implemented as BGLN-D. **Regression.** We first train an MLP with two hidden layers and ReLU activations for 200 epochs on regression datasets from the UCI benchmark (Dua and Graff, 2017). Then, we randomly select 50 independent data points to be removed. For each removed point, we sample batches and use a Stochastic Gradient Descent (SGD) optimizer to minimize the PBRF objective and compute the difference in loss after removing that data point, commonly referred to as the self-influence score (Koh and Liang, 2017; Schioppa et al., 2022). Next, we follow the same procedure as above but approximate the FSD term in the PBRF objective with EWC, CG (Koh and Liang, 2017) and BGLN-D.
Since the direct minimization of PBRF can be considered as the ground truth for influence estimation, we compare the alignment of these methods' estimates with that of PBRF via Pearson correlation (Sedgwick, 2012) and Spearman rank-order correlation (Spearman, 1961). The results are shown in Table 4. Without having to iterate over or store the entire dataset, BGLN-D correlates with PBRF more strongly than EWC and CG (Koh and Liang, 2017), which can be seen as minimizing a linearized version of the PBRF objective. **Mislabeled Example Detection.** Influence function estimators are commonly evaluated in terms of their ability to identify mislabeled examples. Intuitively, if some fraction of the training labels is corrupted, the corrupted examples behave as outliers and have a more significant influence on the training loss (self-influence score). One approach to efficiently detect and correct these examples is to prioritize and examine training inputs with higher self-influence scores. Following the evaluation setup from Bae et al. (2022), we use 10% of the MNIST dataset and corrupt 10% of it by assigning random labels. We train a two-layer MLP with 1024 hidden units and ReLU activations using SGD with a batch size of 128. Then, we use EWC, CG and BGLN-D to approximate the FSD term in equation 13 and compute individual self-influence scores. We also compare these methods against a baseline of randomly sampling data points to check for corruption. The results are summarized in Figure 4. BGLN-D significantly outperforms the random baseline and EWC and closely matches the oracle PBRF and CG, while being much faster, cheaper and more memory-efficient. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Dataset** & \multicolumn{2}{c}{**EWC**} & \multicolumn{2}{c}{**CG**} & \multicolumn{2}{c}{**BGLN-D**} \\ \cline{2-7} & P & S & P & S & P & S \\ \hline Concrete & 0.78 & 0.57 & 0.92 & 0.94 & **0.96** & **0.97** \\ Energy & 0.68 & 0.39 & 0.97 & 0.98 & **0.99** & **0.98** \\ Housing & 0.86 & 0.33 & 0.92 & **0.89** & **0.95** & 0.83 \\ Kinetics & 0.36 & 0.30 & 0.88 & 0.86 & **0.99** & **0.99** \\ Wine & 0.97 & 0.70 & 0.99 & **0.94** & **0.99** & 0.90 \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation of training loss differences computed by EWC, CG and BGLN-D. We compare Pearson (P) and Spearman rank-order (S) correlations with the PBRF estimates. Figure 4: Effectiveness of BGLN in detecting mislabeled examples. BGLN can approximate the FSD term in the PBRF objective accurately and be used in applications involving influence functions without explicitly storing or iterating over the dataset. ## 6 Conclusions In this work, we addressed the problem of compactly summarizing a model's predictions on a given dataset, and formulated it as approximating neural network FSD. We developed the Linearized Activation Function TRick as an improvement over network linearization in parameter space and proposed novel parametric methods, BGLN, to estimate FSD. Our methods capture nonlinearities between network parameters, are much more memory-efficient than prior works and are amenable to adaptation to the nonparametric setting when a coreset of data is available. We empirically show that LAFTR-based estimates are highly correlated with the true FSD across several settings. In continual learning, our methods outcompete existing methods without storing any data samples.
Further, in influence function estimation, they estimate influence scores with high correlation to direct PBRF minimization and can efficiently detect mislabeled examples without expensive iteration over the whole dataset. Extending the formulation of FSD approximation to other applications like model editing or unlearning is an exciting research avenue. We hope that our work inspires methods to further enhance memory and computational efficiency in settings where estimating or constraining FSD is relevant. ## Acknowledgements We would like to thank Florian Shkurti for useful discussions, Cem Anil for feedback on the draft, and Gerald Shen for assistance with the compute environment. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (www.vectorinstitute.ai/partners).
2302.08458
Solid State Neuroscience: Spiking Neural Networks as Time Matter
We aim at building a bridge between two {\it a priori} disconnected fields: Neuroscience and Material Science. We construct an analogy based on identifying spike events in time with the positions of particles of matter. We show that one may think of the dynamical states of spiking neurons and spiking neural networks as {\it time-matter}. Namely, a structure of spike-events in time having analogue properties to that of ordinary matter. We can define for neural systems notions equivalent to the equations of state, phase diagrams and their phase transitions. For instance, the familiar Ideal Gas Law relation (P$v$ = constant) emerges as the analogue of the Ideal Integrate and Fire neuron model relation ($I_{in}$ISI = constant). We define the neural analogue of the spatial structure correlation function, which can characterize spiking states with temporal long-range order, such as regular tonic spiking. We also define the ``neuro-compressibility'' response function in analogy to the lattice compressibility. We show that, similarly to the case of ordinary matter, the anomalous behavior of the neuro-compressibility is a precursor effect that signals the onset of changes in spiking states. We propose that the notion of neuro-compressibility may open the way to develop novel medical tools for the early diagnosis of diseases. It may allow the prediction of impending anomalous neural states, such as Parkinson's tremors, epileptic seizures, electric cardiopathies, and perhaps may even serve as a predictor of the likelihood of regaining consciousness.
Marcelo J. Rozenberg
2023-02-16T18:04:10Z
http://arxiv.org/abs/2302.08458v1
# Solid State Neuroscience: Spiking Neural Networks as Time Matter ###### Abstract We aim at building a bridge between two _a priori_ disconnected fields: Neuroscience and Material Science. We construct an analogy based on identifying spike events in time with the positions of particles of matter. We show that one may think of the dynamical states of spiking neurons and spiking neural networks as _time-matter_. Namely, a structure of spike-events in time having analogue properties to that of ordinary matter. We can define for neural systems notions equivalent to the equations of state, phase diagrams and their phase transitions. For instance, the familiar Ideal Gas Law relation (\(\mathrm{P}v=\mathrm{constant}\)) emerges as the analogue of the Ideal Integrate and Fire neuron model relation (\(I_{in}\mathrm{ISI}=\mathrm{constant}\)). We define the neural analogue of the spatial structure correlation function, which can characterize spiking states with temporal long-range order, such as regular tonic spiking. We also define the "neuro-compressibility" response function in analogy to the lattice compressibility. We show that, similarly to the case of ordinary matter, the anomalous behavior of the neuro-compressibility is a precursor effect that signals the onset of changes in spiking states. We propose that the notion of neuro-compressibility may open the way to develop novel medical tools for the early diagnosis of diseases. It may allow the prediction of impending anomalous neural states, such as Parkinson's tremors, epileptic seizures, electric cardiopathies, and perhaps may even serve as a predictor of the likelihood of regaining consciousness. ## I Introduction The understanding of the mind is, arguably, the most mysterious scientific frontier. The ability of the mind to understand itself is puzzling. Nevertheless, it seems increasingly possible and within our reach. Neuroscience and Artificial Intelligence are making great progress in that regard, however, following very different paths and driven by very different motivations. In the first, the focus is to address fundamental questions of biology, while in the second, it is to develop brain-inspired computational systems for practical applications for modern life. Evidently, there is also large overlap between the two. The basic units that constitute the physical support of the mind, namely the brain and the associated neuronal systems, are neurons. These are cells with electrical activity that interact via electric spikes, called action potentials [1]. In animals, neurons form networks of a wide range of complexity, ranging from a few hundred units in jellyfish to a hundred billion in humans. A fundamental question to answer is _how and why_ nature has adopted this electric signaling system. Its main functions are multifold: to sense and monitor the environment, then to produce behavior and decision making, and finally to drive the required motor actions to ensure the survival of living beings. Neuroscience has already provided a good understanding of the electric behavior of individual neurons [2; 3]. A major milestone was the explanation by Hodgkin and Huxley of the physiological mechanism for the generation of the action potential [4]. At the other end, that of neural networks with a large number of neurons, important contributions come from Artificial Intelligence. For instance, significant progress was made in the 80s following the pioneering work of Hopfield [5].
More recently, this area received a renewed boost of activity, enabled by the combination of new learning algorithms for Deep Convolutional Neural Networks [6] with the numerical power of modern computers [7]. However, the networks adopted in Artificial Intelligence overwhelmingly describe the neurons' activity by their firing rate and not by individual spikes. These are conceptually different: a spike is a discrete event, while the spiking rate is a continuous variable. Hence, modeling neurons in terms of the latter does not directly address the question posed above, namely, the why and how of Nature using discrete spikes. Here we propose to look at this problem under a different light, which to my knowledge has not been discussed before. Hopefully, this may bring new insights and perhaps help to develop our intuition for the challenging problem of understanding the mechanism of spiking neural networks. We shall postulate an analogy between matter states, as in organized spatial patterns of particles (or atoms or molecules), and that of neuronal states, as organized patterns of spikes in time. Since we are attempting to build a bridge between disconnected disciplines, we shall keep our presentation pedagogical. Ultimately, as with any definition, our analogy will be of value if it turns out to be useful beyond its intrinsic academic interest. With this in mind, we shall later discuss an exciting perspective that the present approach may perhaps open. Namely, to develop novel screening tools that could allow the detection of an enhanced risk of developing neural diseases that involve anomalous spiking states, such as Parkinson's, epilepsy, cardiopathies, and unconsciousness. ## II The analogy As mentioned above, here we postulate an analogy between spatial matter states, such as solid, liquid and gas, and the dynamical states of spiking neurons and neural networks. More specifically, an analogy between the organization of matter in space and that of spiking events in time, which we call _temporal-matter_ states. To motivate this, in Fig.1 we show the familiar phase diagram of water as a function of pressure and temperature (\(p\),\(T\)). Next to it we show another phase diagram, that of an electronic bursting spiking neuron, which we introduced recently [8]. In the diagram, we observe various phases that correspond to qualitatively different states: tonic spiking (TS), fast spiking (FS), and two types of bursting (IB1, IB2). The phase diagram is obtained as a function of two parameters, the excitatory current \(I\) and a circuit time constant \(\tau_{s}\) (the circuit is shown in Fig.4). To characterize the spiking states in the phase diagram, we need to consider the nature of the discrete spiking events in the time domain. For instance, tonic spiking is characterized by a sequence of spikes that occur at equally spaced time-intervals. In Neuroscience, the time between two consecutive spike events is called the inter-spike interval (ISI). At the transition from the TS to FS phase, one observes a sudden decrease of the ISI (i.e. a jump in the spiking frequency) as a function of the parameter \(\tau_{s}\). The ISI(t) characterizes the organization of the spikes in the time domain and indicates the "time-distance" between spikes. If a sequence of spike events at times \(t_{i}\) is indicated by the function \[s(t)=\sum_{i}\delta(t-t_{i}) \tag{1}\] then it may be tempting to establish the analogy by simply identifying the time-position of spike events with the positions of particles in a matter state.
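To make the spike-train side of this identification concrete, here is a minimal sketch computing the ISI sequence from the spike times \(t_{i}\) of Eq.1; the spike times are assumed for illustration.

```python
import numpy as np

# Assumed spike times t_i (in ms): regular at first, then slower
spike_times = np.array([1.0, 2.0, 3.0, 4.0, 6.5, 9.0])
isi = np.diff(spike_times)     # "time-distances" between consecutive spikes
print(isi)                     # [1.  1.  1.  2.5 2.5]
```

A constant ISI sequence corresponds to tonic spiking; the sudden decrease of the ISI at the TS to FS transition shows up as a jump in this sequence.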
The matter state is characterized by its particle density function, \[n(x)=\sum_{i}\delta(x-x_{i}) \tag{2}\] where \(x_{i}\) denotes the position of the \(i^{th}\) particle. For simplicity, we assume point particles and one dimensional space. Similarly, in Eq.1 we have assumed ideal spikes represented by \(\delta\)-functions, while in reality the action potentials have a typical duration of \(\sim 1ms\). From the analogy, the simple tonic spiking case with equispaced spikes (i.e. constant ISI) would correspond to a perfect crystal of equally spaced particles (atoms). Thus, the familiar ISI of neuroscience would be the analogue of the familiar lattice constant \(a\) (or specific volume \(v\)) for material science. If one applies a pressure \(P\) to matter, in general one observes the decrease of \(v\), such as in the case of the Ideal Gas law \(Pv=\) constant. On the other hand, in neuroscience, it is well known that the ISI can be reduced by increasing the excitatory input current. Hence, one may be tempted to extend our analogy to associate \(P\) with the input \(I\) in neural systems. We can make this more precise by introducing the simplest theoretical model of an ideal spiking neuron, the Integrate and Fire (IF) model [2]. In a very schematic view, as shown in Fig.2, a biological neuron is composed of three main parts: the dendrites, the soma and the axon. The neuron is excited through the input of electric signals arriving at the dendrites, which are called the synaptic currents. This input is integrated in the cell's body, which leads to an increase in its electric potential with respect to its resting state. Under sufficiently intense excitation, the neuron eventually reaches a threshold potential value. At that point a dramatic event takes place: an electric spike is initiated, propagates down the axon and is eventually communicated to the dendrites of a downstream neuron. This phenomenon is called the emission of an action potential, which was first described by Hodgkin and Huxley [4]. This qualitative description can be represented by the simple (leaky) IF model [2; 3] \[\frac{du}{dt}=-\frac{1}{\tau}u(t)+I;\ \ if\ u\geq u_{th}\ then\ spike\ and\ u=u_{rest} \tag{3}\] where \(u(t)\) represents the potential of the soma, \(u_{th}\) is the threshold potential, \(u_{rest}\) is the resting potential value and \(I\) is the input (synaptic) current. The time constant \(\tau\) is a characteristic relaxation time of the neuron that represents the leakage of charge out of the soma. If the leakage is negligible, \(\tau\rightarrow\infty\), and the integration is perfect, so one has an ideal IF model. The electric circuit representation of the model is straightforward. The soma is represented by a capacitor \(C\) that accumulates the charge of the input current, and the leakage by a resistor \(R\) in parallel. The threshold voltage can be represented by a switch that closes yielding the emission of the spike, which is the fast discharge of the charge accumulated in \(C\), as a delta function of current. Figure 1: Left: Phase diagram of states of ordinary matter (water) showing the solid, liquid and gas phases. \(P\) and \(T\) determine the specific volume \(v\) of the state, which is the inverse of the density \(n\). Right: Measured phase diagram of _temporal-matter_ states of an electronic neuron model. The system is a two-compartment bursting neuron, showing tonic spiking (TS), fast spiking (FS) and two different intrinsic bursting states (IB1 and IB2). The input current \(I\) and the circuit time-constant \(\tau_{s}\) determine the inter-spike interval of the state, which corresponds to the inverse of the frequency.
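A minimal Euler-integration sketch of the LIF model of Eq.3 (with \(u_{rest}=0\)); the parameter values are illustrative assumptions, not those of the physical circuit.

```python
import numpy as np

def lif_spike_times(I, tau=10.0, u_th=1.0, dt=0.01, T=200.0):
    """Integrate du/dt = -u/tau + I; fire and reset when u >= u_th."""
    u, t, spikes = 0.0, 0.0, []
    while t < T:
        u += dt * (-u / tau + I)
        if u >= u_th:              # threshold crossing: emit a spike
            spikes.append(t)
            u = 0.0                # reset to u_rest = 0
        t += dt
    return np.array(spikes)

print(np.diff(lif_spike_times(I=0.3)))   # constant ISI: tonic spiking
```

Increasing \(I\) shortens the ISI, the neural counterpart of compressing matter with pressure \(P\), as noted above.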
The limit of zero leakage, i.e. the ideal Integrate and Fire model, is trivially solved. The potential \(u\) due to the integrated charge in \(C\) during an interval \(t\) is \(u(t)=Q/C=(I/C)t\). Thus, the spike fire time \(t_{f}\) is given by the condition \(u(t_{f})=u_{th}\), which leads to \(It_{f}=u_{th}C\), or \(I\,\mathrm{ISI}=\mathrm{constant}\). Hence, we may extend our analogy by noting that the equation of state of the Ideal Gas has the same form as that of an ideal IF neuron, namely, \[Pv=\mathrm{constant}\longleftrightarrow I\,\mathrm{ISI}=\mathrm{constant} \tag{4}\] where \[P\longleftrightarrow I;\quad v\longleftrightarrow\mathrm{ISI} \tag{5}\] Just as the equation of state of a real gas departs from the ideal case, the equations of state of biological neurons and neuron models will depart from the ideal IF "neuronal equation of state" above. Interestingly, the notion of a neuronal equation of state should not appear so strange. Indeed, it is nothing other than the familiar concept in neuroscience of the neuron's _activation function_, namely, the firing rate as a function of the excitatory input current, \(f=f(I)\). For instance, some popular models are: the rectified linear units, or ReLU, where \(f(I)=max[0,I]\); the sigmoid activation \(f(I)=1/(1+e^{-I})\); etc. These are examples of _mathematical_ neuron models; however, we may also include here _physical_ neuron models, namely, models that are defined by an electronic circuit. In physical neuron models the equation of state \(f(I)\) can be measured, as in real gases. We should mention that while the relation \(Pv=\mathrm{constant}\) is familiar from the Ideal Gas law, it is in general valid for any liquid or solid in the linear regime. Recently, we introduced a _minimal_ model of a physical spiking neuron, which is achieved by exploiting the memristive properties of an "old" conventional electronic component, the thyristor. The model is minimal because it provides a physical realization of the basic _leaky-integrate-and-fire_ (LIF) neuron model by associating exactly one component to each of the three functions: a capacitor to integrate, a resistor to leak and the thyristor to fire. Qualitatively, the thyristor acts as the switch in the circuit of Fig.2. In Fig.3 we show the circuit that defines the physical neuron model just described, where we identify the role of each of the three components in the functions of the LIF. We call this artificial neuron model the Memristive Spiking Neuron (MSN), which is implicitly defined by its electronic circuit. In the right panel of the figure we show the experimental neuron equation of state, which is nothing other than the activation function, as noted above. We can observe that near the excitation threshold the equation of state is well represented by the functional form of the activation function of the LIF mathematical model (red fitting line in Fig.3). At intermediate input currents, the behavior approaches the Ideal IF neuron, whose equation of state is \(f\propto I\) (see Eq. 4), since \(f=1/\mathrm{ISI}\). The same methodology allows us to consider more complex neuron models, which are also defined by their respective circuit implementation. For instance, we may consider the case of bursting neurons.
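Before turning to bursting, the two "equations of state" just discussed can be written down explicitly; a sketch under the stated idealizations (\(u_{rest}=0\), \(C=1\); parameter values illustrative):

```python
import numpy as np

u_th, tau = 1.0, 10.0                 # threshold and leak time constant
I_c = u_th / tau                      # minimal (rheobase) current for the LIF

I = np.linspace(1.05 * I_c, 10 * I_c, 5)
f_ideal = I / u_th                    # ideal IF: I * ISI = u_th * C, so f ~ I
# Leaky IF: integrating Eq.3 from u=0 gives t_f = -tau*ln(1 - u_th/(I*tau)),
# i.e. f has the form -1/log(1 - I_c/I) quoted for the red line of Fig.3.
f_leaky = 1.0 / (-tau * np.log(1.0 - u_th / (I * tau)))
for Ii, fi, fl in zip(I, f_ideal, f_leaky):
    print(f"I={Ii:.3f}  f_ideal={fi:.3f}  f_leaky={fl:.3f}")
```

Near \(I_{c}\) the leaky curve rises steeply from zero, while at larger currents it approaches the ideal \(f\propto I\) behavior, mirroring the measured MSN curve.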
The same methodology allows us to consider more complex neuron models, which are also defined by their respective circuit implementations. For instance, we may consider the case of bursting neurons. From theoretical neuroscience we know that a requirement to obtain bursting behavior is the addition of a second dynamical variable, besides the potential of the cell body \(u(t)\) in Eq. 3, which is represented by the capacitor in the circuits of Fig.2 and 3. Thus, to do this we add a second \(RC\) pair to our basic MSN circuit. A simple option is to add a capacitor \(C_{L}\) in parallel to the (small) output resistor \(R_{L}\), introducing a new time constant \(\tau_{L}=R_{L}C_{L}\). The resulting circuit is shown in Fig.4a. In panel (b) we show the dynamical behavior that it now produces. We observe four qualitatively different spiking types: simple tonic spiking, fast spiking, and two bursting modes.

Figure 2: (a) Schematic biological neuron. (b) Electric circuit of the IF model (for the case \(u_{rest}=0\)). The membrane of the cell's body (soma) is represented by the capacitor \(C_{m}\), which accumulates charge of the input current. If the resistor \(R\rightarrow\infty\) then the current integration is perfect; however, for a finite value there is a "leaky" integration and the model is called LIF, for leaky-integrate-and-fire.

Figure 3: (a) Electric circuit that defines the physical neuron model based on the memristive properties of the thyristor. The function of the thyristor is that of a voltage controlled switch, as shown in Fig.2 above. The small "load" resistor \(R_{L}\) is used to transform the output spike current into a voltage action potential signal for measurement convenience. (b) The "neuron equation of state" or activation function \(f(I)\). Near the threshold, the equation of state closely follows the functional form of a LIF model (red line), \(-1/[\log(1-I_{c}/I)]\), where \(I_{c}\) is the minimal activation current. The blue line corresponds to the Ideal IF behavior \(I\propto 1/\mathrm{ISI}=f\).

These four spiking types are realized in the respective regions of the phase diagram of Fig.1 presented before. As done before for the basic MSN, we may also obtain the equation of state of the Memristor Bursting Neuron (MBN) model. In the present case, we can consider that the time constant \(\tau_{L}\) plays the role of a third parameter, similarly to the temperature in the case of matter systems. Hence, in Fig.5 we show the curve \(f_{\tau_{L}}(I)\) measured at a fixed \(\tau_{L}\), indicated by the vertical purple line that crosses three phases. We observe jumps in the frequency as the current drives the system from one phase to the other. This is reminiscent of changes in density when the pressure drives phase transitions at a fixed \(T\) in the phase diagram of water in Fig.1. It is interesting to mention that a biologically realistic theoretical model of bursting neurons introduced by Pinsky and Rinzel (PR) shows a qualitatively similar behavior [9]. The PR is an example of a two-compartment model, where both the soma and the dendrites are described. We may notice that in the MBN model the \(R_{L}C_{L}\) block (see Fig.4) can be considered as a second compartment, which is connected to the output of the first. In the right panel of Fig.5, we reproduce the activation function (i.e. the neuronal equation of state) of the PR model. It is interesting to observe that in the simpler limits of only one compartment (soma alone and dendrite alone) the behavior of the PR is qualitatively the same as that of our basic MSN, which is also a single compartment model (see bottom curve of Fig.5a). More importantly, for the relevant case of two compartments (i.e.
finite \(g_{c}\)) we observe sudden changes in the firing rate as a function of the excitatory input current, also in qualitative agreement with the MBN. In fact, both PR and MBN traverse the same sequence as the excitatory current is increased: initially quiescent below a critical current, then bursting, and finally a jump in firing rate to the fast spiking mode. Hence, the phase transitions are abrupt, through a steep or a discontinuous change in the activation function. As we shall discuss in the next section this feature may have interesting consequences. Moreover, we shall show that the \(f(I)\) anomalies may be considered as the counterparts of certain phenomena occurring in phase transitions in matter systems.

## III Correlation and response functions

Correlation and response functions are useful concepts in material science, where they serve to characterize different states of matter. For instance, a regularity in the arrangement of positions of atoms is revealed by Bragg peaks in the x-ray spectra, which are maxima of the structure factor. In real space, the regularity is revealed by the pair correlation function \[g(x)=\frac{\int n(x+x^{\prime})n(x^{\prime})dx^{\prime}}{\int n^{2}(x^{\prime})dx^{\prime}} \tag{6}\] where \(n(x)\) indicates the particle density (such as electrons, atoms, molecules, etc.) at position \(x\), and where we consider one dimension for simplicity. In the case of crystalline order, \(g(x)\) shows structure with peaks. For a simple arrangement of particles along one dimension with a lattice constant \(a\), the peaks will be at \(a\), \(2a\), \(3a\),... In contrast, for a disordered state, such as a gas or liquid, the \(g(x)\) is mostly featureless. The study of \(g(x)\) is routinely done in condensed matter physics for the study of phase transitions (see, for example, [10]).

Figure 5: (a) Activation function \(f(I)\) in semilog scale. The top curve corresponds to the MBN model along the right purple line at \(\tau_{L}\)=0.3ms indicated in the phase diagram in the inset. Following the definition of Pinsky and Rinzel, the frequency is defined as the inverse of the period between spike trains (bursts). The bottom \(f(I)\) curve is that of the MSN model discussed before, which corresponds to the limit \(\tau_{L}\to 0\) of the bursting neuron model (indicated with the left purple line in the inset). (b) Activation function of the Pinsky-Rinzel model reproduced from [9].

Figure 4: (a) Electric circuit that defines the physical model of a bursting neuron based on the MSN model and adding a second time constant \(\tau_{L}=R_{L}C_{L}\). (b) The various dynamical behaviors produced by the circuit. From top to bottom: tonic spiking (TS), fast spiking (FS) and two bursting types (IB1, IB2), which correspond to the four phases of the phase diagram shown in Fig.1.

In our analogy, spiking systems are thought of as temporal-matter states, so it is natural to explore the behavior of the correlation-function analogue of \(g(x)\). Since the positions of particles correspond to the positions of spikes, by analogy we can define the neural correlation function \(g_{n}(t)\) as \[g_{n}(t)=\frac{\int s(t+t^{\prime})s(t^{\prime})dt^{\prime}}{\int s^{2}(t^{\prime})dt^{\prime}} \tag{7}\] where \(s(t)\) indicates a given spike trace. The function \(g_{n}(t)\) can characterize different spiking states of a neuron. In Fig.6 we provide a concrete example, which is realized in the basic MSN model described before (see Fig.3).
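Eq. 7 is an autocorrelation of the spike trace, and can be estimated directly from a binned spike train. Below is a minimal numerical sketch; the synthetic spike trains and all parameter values are illustrative assumptions.

```python
import numpy as np

def neural_correlation(s):
    """Normalized autocorrelation of a binned spike trace s(t) (Eq. 7)."""
    c = np.correlate(s, s, mode="full")[len(s) - 1:]   # integral of s(t+t')s(t')
    return c / c[0]                                     # normalize by integral of s^2

dt, t_max, isi = 0.1, 200.0, 10.0
n = int(t_max / dt)

regular = np.zeros(n)
regular[::int(isi / dt)] = 1.0                          # "solid": equispaced spikes
rng = np.random.default_rng(0)
irregular = (rng.random(n) < dt / isi).astype(float)    # "melted": random spikes

# The regular train shows peaks of g_n near multiples of the ISI, while the
# irregular train is essentially featureless, as in Fig. 6.
for name, s in (("solid ", regular), ("melted", irregular)):
    g = neural_correlation(s)
    print(name, np.round(g[[int(isi / dt), int(2 * isi / dt)]], 3))
```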
We can observe the two qualitatively different tonic spiking behaviors in the two left panels of the figure. They correspond to two constant current inputs (\(36.4\mu\)A and \(45.3\mu\)A). In the first case, at higher input current (top panel), the spiking is perfectly regular. In contrast, for a smaller current close to the threshold, the trace changes dramatically. The intervals between spikes become very irregular. By the analogy between spikes and particles, we can think of the first case as that of a solid and the second as the melting of the solid state. This qualitative description can be made more precise by the correlation function \(g_{n}(t)\), shown in the right side panels of Fig.6. The top panel shows a succession of delta functions at equally spaced times, multiples of the constant inter-spike interval, \(t_{k}=k\,\mathrm{ISI}\). This indicates the long range order in time. It tells us that, given the presence of a spike at time \(t=0\), we have a high probability (\(\approx 1\)) of finding another spike event at times \(t_{k}\) (\(k=1,2,3,\dots\)). In contrast, the \(g_{n}(t)\) shown in the bottom panel is featureless, showing a small, approximately constant value. This indicates a total lack of order, as the presence of a spike at time \(t=0\) does not allow one to predict the presence of later spike events. The emission of spikes is as random as the positions of particles in a liquid or a gas. The peaks of the \(g_{n}(t)\) are very narrow, delta-function-like, because the spikes are very narrow with respect to the duration of the ISI. In a solid, the atoms have a size that is smaller but of the same order as the lattice spacing, so instead of narrow deltas one observes broad peaks in the \(g(x)\) [10]. One may understand quite intuitively the physical origin of these dramatic changes in the time structure. The key point is to realize that the "melted" state occurs in a regime where the activation function \(f(I)\) is very steep, at the onset of neuron excitability, i.e. near the threshold. Therefore, small variations of the input current will reflect on significant variations in the ISI. This observation motivates the following important insight. This enhanced sensitivity to current fluctuations is due to the large slope \(df/dI\) of the activation function. Then, what is this feature related to, if one follows the analogy back to the matter systems? We recall that the ISI plays the role of the lattice spacing, or specific volume (see Eqs.4 and 5); then \(f=1/\mathrm{ISI}\) corresponds to the particle density \(n=1/v\). On the other hand, since the input current \(I\) is like the pressure, it follows that the slope \(df/dI\) corresponds to \(dn/dP\). This last quantity is closely related to the compressibility of matter systems \[\beta=\frac{1}{n}\left(\frac{dn}{dP}\right) \tag{8}\] which is the inverse of the bulk modulus. We can therefore follow the analogy and introduce the concept of "neuro-compressibility", \[\beta_{n}=\frac{1}{f}\left(\frac{df}{dI}\right) \tag{9}\] It is important to mention that this quantity may be measured using experimental methods such as dynamic clamp, where a controlled synaptic current can be injected into a neuron while its activity is monitored [11]. Moreover, this definition may turn out to have important consequences, as we discuss next.
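Eq. 9 can be estimated directly from a measured or simulated activation function by finite differences. A minimal sketch, reusing the closed-form LIF \(f(I)\) from above (parameter values are again illustrative assumptions):

```python
import numpy as np

tau, u_th = 20.0, 1.0
I_c = u_th / tau

def f_lif(I):
    return -1.0 / (tau * np.log(1.0 - I_c / I))

def neuro_compressibility(f, I, dI=1e-6):
    """beta_n = (1/f) df/dI, estimated by a central finite difference (Eq. 9)."""
    df = (f(I + dI) - f(I - dI)) / (2.0 * dI)
    return df / f(I)

# beta_n is strongly enhanced near the threshold I_c (the "melted" regime)
# and small deep in the regular-spiking ("solid") regime.
for I in (1.05 * I_c, 2.0 * I_c, 10.0 * I_c):
    print(f"I/I_c = {I / I_c:5.2f}: beta_n = {neuro_compressibility(f_lif, I):8.3f}")
```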
Anomalies in the compressibility of materials are precursor signatures of structural phase transitions. A sudden increase in the compressibility of a solid indicates the "softening" of a vibrational mode (a phonon mode), which leads to a change in the structure, or possibly a phase change. Then, the question is, what would the analogue phenomenon be for a neuronal system? For a single neuron, the enhancement of \(\beta_{n}\) would indicate the proximity to a qualitative change in the spiking mode of the neuron, i.e. a "bifurcation" in its dynamics. This can in fact be seen in the panels of Fig.5. There, we observe that all the changes in the spiking modes, for both the MSN and the MBN models, occur at current values where there are enhancements or jumps in \(df/dI\). Most notably, this is not only a feature of our artificial neuron circuits, but can also be clearly seen in the biologically realistic Pinsky-Rinzel model activation functions that we reproduced in Fig.5 [9]. The enhancements seen in the PR data occur at the onset of the change from quiescent to spiking and also in the change from burst to tonic spiking (black circles and black triangles), in very good qualitative agreement with our electronic neuron model.

Figure 6: (a) Measured spike traces \(s(t)\) of the MSN model at two values of the input current: a "solid" state measured at \(I=45.3\mu\)A (top) and a "melted" state at \(I=36.4\mu\)A. The states are indicated with the green and blue dots in the activation function of the neuron reproduced in the inset. (b) The neural correlation function \(g_{n}(t)\) computed for the respective traces shown in the left side panels (only a small portion of the measured traces is shown).

We may then speculate on an important implication of our observations. It would be interesting to explore if neuro-compressibility anomalies are also found across the boundaries of qualitatively different states in _neuron networks_. If that is the case, an intriguing and exciting possibility would be to investigate if anomalies in \(\beta_{n}\) are also detected (by small current stimulation) in animal models of epilepsy and Parkinson's disease. If this were the case, then one may envision a pathway to a novel diagnostic tool for early detection or a risk predictor of mental diseases associated to abnormal spike patterns in humans. In even further speculation, one may also search for anomalies at the onset of regaining, or losing, consciousness, which is another challenging frontier of research [12].

## IV Bursting spikes as an analogue of the formation of cluster defects

Here we describe another interesting connection between common phenomena in spiking neurons and in material science. We shall show that missing spikes in the trace of a fast spiking state can be thought of as the analogue of missing atoms, i.e., defects, in a crystal structure. From Fig.7 we observe that the proliferation of missing spikes in a fast spiking state is a route to generate bursting behavior. This is illustrated in the sequence of traces shown in the figure, which were obtained for a step-wise decreasing input current to the MBN. The thick purple arrow indicates the vertical path followed in the phase diagram (from blue to green to grey). In the top trace we indicate with small purple arrows the missing spikes, showing that one may understand the onset of bursting as the result of skipping spike events, which are initially few (i.e. dilute). As the current intensity is further reduced, the missing spikes become more numerous (i.e. dense) and occur in _clusters_ of inactivity, which give rise to the stuttering mode bursts [13].
In our analogy, we think of spikes as atoms in a lattice; therefore, the initial continuous fast spiking state is like a perfect crystal. The missing spikes then play the analogous role of vacancy defects, i.e. missing atoms. Moreover, the missing spikes are the result of decreasing current, which in the analogy represents pressure. It is then interesting to observe that in thin-film deposition, which is a topic in material science, the partial pressure of oxygen \(P(\mathrm{O}_{2})\) is a relevant parameter for the quality of the growth of crystalline oxides. Moreover, it is well known that reducing the \(P(\mathrm{O}_{2})\) induces the creation of oxygen vacancy defects in the crystal structure [14; 15], which often cluster together forming dislocations [16]. This is in full qualitative analogy to the spiking traces in the stuttering bursting mode shown in Fig.7. We would like to emphasize that the path of phase transformations, i.e. the evolution from fast spiking, to bursting, to quiescent, does not seem to be just a peculiarity of our MBN circuit model. In the lower panel of Fig.7 we illustrate the striking resemblance of the traces of the MBN with those measured in bursting neurons of rats [17]. Quite remarkably, the experimental traces were obtained by solely changing the intensity of the excitatory DC current.

## V Neural networks

We now consider one important final aspect of our analogy that may eventually bring new light to the issue of how to think about inter-neuron coupling. So far we have considered essentially individual neurons, but we may ask what it would mean to extend the analogy to multi-neuron systems, i.e. to neural networks. As a first glimpse into this question, we shall consider the simplest network case, namely, just two neurons that are mutually excitatory or inhibitory. We focus first on a system of two identical neurons, each excited with equal input currents. The currents are above threshold, so the neurons are active and their spikes are transformed via conductances into mutually injected synaptic currents that are positive for the excitatory case and negative for the inhibitory one. This is schematically shown in Fig.8. We shall see that our analogy takes an interesting twist, as the dynamical states of the two-neuron system can be considered as an analogue of a complex crystal, i.e. a crystal with two atoms in the unit cell. Moreover, the coupling between neurons is mediated by synaptic currents, which can be excitatory or inhibitory. Since in our analogy current plays the role of pressure, the synaptic currents should also play such a role. More precisely, an excitatory synaptic current should correspond to a repulsive inter-particle interaction (positive pressure), and an inhibitory synaptic current to an attractive interaction (negative pressure).

Figure 7: Top left: Measured spike traces of the MBN as a function of decreasing current in discrete steps (purple). The thin purple arrows indicate the missing spikes. Bottom left: Experimental trace of pre-Bötzinger bursting neurons from rats. The data is obtained by changing the excitatory input current in discrete steps. Adapted from [17]. The circuit parameters may be easily adjusted to fit the experimental data [8]. Right: Phase diagram of the MBN where the purple arrow indicates the evolution from the fast spiking phase (blue) to the bursting type 1 phase (green) by decreasing current at constant \(\tau_{L}\).

We consider the two cases, which we study using realistic electronic circuit simulations (LTspice).
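The two cases analyzed next can also be reproduced at the level of the LIF abstraction. The following is a minimal sketch, assuming instantaneous delta-current synapses and illustrative parameter values; it is a toy stand-in for the circuit simulations, not the LTspice model itself.

```python
import numpy as np

def coupled_lif(w, tau=20.0, u_th=1.0, I=0.08, dt=0.01, t_max=300.0):
    """Two identical LIF neurons with mutual synaptic weight w.
    w > 0 is excitatory (a spike pushes the partner up), w < 0 inhibitory."""
    u = np.array([0.0, 0.3])              # slightly different initial conditions
    spikes = ([], [])
    for step in range(int(t_max / dt)):
        fired = u >= u_th
        u[fired] = 0.0                    # reset the neurons that just fired
        u += dt * (-u / tau + I)          # leaky integration of the common drive
        u += w * fired[::-1]              # instantaneous kick from the partner
        for i in range(2):
            if fired[i]:
                spikes[i].append(round(step * dt, 2))
    return spikes

# In this simplified setting, inhibition (w < 0) settles into alternating
# spikes (an A-B "molecular crystal"), while excitation (w > 0) pulls the
# pair toward firing in unison (a "dimerized" unit cell).
for w in (-0.3, +0.3):
    sA, sB = coupled_lif(w)
    print(f"w = {w:+.1f}: last spikes A = {sA[-2:]}, B = {sB[-2:]}")
```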
We first consider two spiking neurons with mutual inhibition. When one neuron spikes it inhibits the other, and vice versa, so they avoid spiking in unison, since the synaptic currents are instantaneous. After a transient period, they find, as expected, a stable dynamical state where they alternate in emitting spikes, as shown in the top panel of Fig.8. Using our analogy, this spiking pattern corresponds to a perfect molecular crystal where the unit cell has an A-B atom pair (or a basis). In the bottom panel of the figure we consider the second case, that of mutually excitatory neurons. Again, after a transient time, the system adopts a periodic spiking pattern. However, in contrast to the previous case, now both neurons fire in unison. This is also intuitive: the excitatory synaptic current emitted by a neuron that spikes promotes the spiking of the other one, and vice versa. So, naturally, they both spike at the same time, which is nothing other than the familiar fact that excitatory synapses promote synchrony in neural networks [18]. Following our analogy, spiking in unison corresponds to a "dimerization" of the lattice, namely, the distance between the A-B pair of atoms is reduced due to an attractive interaction between the A and B atomic species. These two cases are consistent with our analogy, where current is interpreted as pressure. Indeed, the volume of the unit cell in the inhibitory case is large, as expected for a positive effective pressure between A and B, while in the excitatory case the volume is fully collapsed to zero, as expected for a negative pressure within the unit cell. It is an interesting perspective for future work to consider increasingly complex networks of several neurons (motifs). The periodic states that emerge constitute spiking _sequences_, which are of great relevance for automatic motor behavior. By virtue of the analogy that we introduced in the present work, those periodic sequences should correspond to a variety of molecular crystals. It would be exciting to explore if new intuition for Neuroscience could be brought from those traditional areas of Condensed Matter Physics and Chemistry [19].

## VI Conclusion

In this work we introduced the idea that the dynamical states of neural networks may be thought of as realizations of "time matter" states. We started from the notion that a trace of spiking events of a neuron can be analogous to a snapshot of particles or atoms arranged in space. We then went on to explore and show that the analogy may be pushed far beyond that literary statement, and may provide new intuition in the challenging problem of understanding and designing spiking neural networks. We identified analogue roles of basic quantities of Physics with those of Neuroscience, such as pressure and volume with input currents and inter-spike intervals. We then logically built on this assumption to show connections between correlation and response functions in both fields. Perhaps most significant was the finding that a _neuro-compressibility_ can be defined, with possible far-reaching consequences, including medical ones, that may be experimentally tested. An exciting new road of discovery may open ahead.

## VII Acknowledgments

We acknowledge support from the French ANR "MoMA" project ANR-19-CE30-0020.
2308.15308
On-Device Learning with Binary Neural Networks
Existing Continual Learning (CL) solutions only partially address the constraints on power, memory and computation of the deep learning models when deployed on low-power embedded CPUs. In this paper, we propose a CL solution that embraces the recent advancements in the CL field and the efficiency of Binary Neural Networks (BNN), which use 1 bit for weights and activations to efficiently execute deep learning models. We propose a hybrid quantization of CWR* (an effective CL approach) that treats the forward and backward passes differently, in order to retain more precision during the gradient update step while minimizing the latency overhead. The choice of a binary network as backbone is essential to meet the constraints of low power devices and, to the best of the authors' knowledge, this is the first attempt to prove on-device learning with BNN. The experimental validation carried out confirms the validity and the suitability of the proposed method.
Lorenzo Vorabbi, Davide Maltoni, Stefano Santi
2023-08-29T13:48:35Z
http://arxiv.org/abs/2308.15308v1
# On-Device Learning with Binary Neural Networks

###### Abstract

Existing Continual Learning (CL) solutions only partially address the constraints on power, memory and computation of the deep learning models when deployed on low-power embedded CPUs. In this paper, we propose a CL solution that embraces the recent advancements in the CL field and the efficiency of Binary Neural Networks (BNN), which use 1 bit for weights and activations to efficiently execute deep learning models. We propose a hybrid quantization of CWR* (an effective CL approach) that treats the forward and backward passes differently, in order to retain more precision during the gradient update step while minimizing the latency overhead. The choice of a binary network as backbone is essential to meet the constraints of low power devices and, to the best of the authors' knowledge, this is the first attempt to prove on-device learning with BNN. The experimental validation carried out confirms the validity and the suitability of the proposed method.

Keywords: Binary Neural Networks, On-device Learning, Continual Learning.

## 1 Introduction

Integrating a deep learning model into an embedded system can be a challenging task for two main reasons: the model may not fit into the embedded system memory, and the time efficiency may not satisfy the application requirements. A number of light architectures have been proposed to mitigate these problems (MobileNets [1], EfficientNets [2], NASNets [3]) but they heavily rely on floating point computation, which is not always available (or efficient) on tiny devices. Binary Neural Networks (BNN), where a single bit is used to encode weights and activations, emerged as an interesting approach to speed up model inference relying on packed bitwise operations [4]. However, almost no literature work addresses the problem of training (or tuning) such models on-device, a task which is still more complex than inference because:

* quantization is known to affect back-propagation and weight updates
* popular inference engines (e.g. TensorFlow Lite, PyTorch Mobile, etc.) do not support model training

This work proposes on-device learning of BNN to enable continual learning of a pre-trained model. We start from CWR* [5], a simple but effective continual learning approach that limits weight updates to the output head, and design an ad-hoc quantization approach that preserves most of the accuracy with respect to a floating point implementation. We prove that several state of the art BNN models can be used in conjunction with our approach to achieve good performance on classical continual learning datasets/benchmarks such as CORe50 [6], CIFAR10 [7] and CIFAR100 [7].

## 2 Related Literature

### Continual Learning

The classical deep learning approach is to train a model on a large batch of data and then freeze it before deployment on edge devices; this does not allow adapting the model to a changing environment where new classes (NC scenario) or new items/variations of known classes (NI scenario) can appear over time. Collecting new data and periodically retraining a model from scratch is not efficient and sometimes not possible because of privacy, so the CL approach is to adapt an existing model by using only new data. Unfortunately, this is prone to forgetting old knowledge, and specific techniques are necessary to balance the model stability and plasticity. For a survey of existing CL methods see [8].
In this work we focus on the Single Object Recognition task, addressing the two CL scenarios of NI and NC; in both cases, the learning phase of the model is usually split into _experiences_, each one containing different training samples belonging or not to known classes (this depends on the CL scenario). CWR* maintains two sets of weights for the output classification layer: \(\boldsymbol{cW}\) are the consolidated weights used during inference while \(\boldsymbol{tW}\) are the temporary weights that are iteratively updated during back-propagation. \(\boldsymbol{cW}\) are initialized to \(\boldsymbol{0}\) before the first batch and then updated according to Algorithm 1 (for more details see [5]), while \(\boldsymbol{tW}\) are reset to \(\boldsymbol{0}\) before each training mini-batch. CWR*, for each already encountered class (of the current training batch), reloads the consolidated weights \(\boldsymbol{cW}\) at the beginning of each training batch and, during the consolidation step, adopts a weighted sum based on the number of training samples encountered in past batches and those of the current batch. The consolidation step has a negligible overhead and can be quantized adopting the same quantization scheme used for the CWR* weights. In CWR*, during the first training experience (supposed to be executed offline) all the layers of the model are trained, but from the second experience onward only the weights of the output classification layer are adjusted during the back-prop stage, to simulate a real case scenario (lines 9 - 12 of Algorithm 1).

### Binary Neural Networks

Quantization is a technique that yields compact models compared to their floating-point counterparts, by representing the network weights and activations with very low precision. The most extreme quantization is binarization, where data can only have two possible values, namely **-1(0)** or **+1(1)**. By representing weights and activations using only 1 bit, the resulting memory footprint of the model is dramatically reduced and the heavy matrix multiplication operations can be replaced with lightweight bitwise XNOR and bitcount operations. According to [9], which compared the speedups of binary layers w.r.t. 8-bit quantized and floating point layers, a binary implementation can reduce inference time by a factor of **9** to **12x** on a low-power ARM CPU. Therefore, Binary Neural Networks combine many hardware-friendly properties including memory saving, power efficiency and significant acceleration; for some network topologies, BNNs can be executed on device without the usage of floating-point operations [10], simplifying the deployment on ASIC or FPGA hardware. For a survey on binary neural networks see [4].

## 3 On-Device CWR Optimization

### Gradients Computation

In this section we make explicit the weight update in the classification layer; without loss of generality, a neural network \(M(\cdot)\) is composed of a sequence of \(k\) layers represented as: \[M(\cdot)=f_{W_{k}}\left(f_{W_{k-1}}\left(\cdots f_{W_{2}}\left(f_{W_{1}}(\cdot)\right)\right)\right) \tag{1}\] where \(W_{l}\) represents the weights of the \(l^{th}\) layer.
Denoting with \(\alpha_{l}\) and \(\alpha_{l+1}\)3 the input and output activations of the \(l^{th}\) layer respectively, and with \(\mathcal{L}\) the loss function, the backpropagation process consists in the computation of two different sets of gradients: \(\frac{\partial\mathcal{L}}{\partial\alpha_{i}}\) and \(\frac{\partial\mathcal{L}}{\partial W_{i}}\). Footnote 3: Note that the output \(\alpha_{i+1}\) of level \(i\) corresponds to the input of level \(i+1\) In CWR* the on-device backpropagation algorithm is limited to the last layer, which can be considered a linear layer (with a non-linear activation function) with the following forward formula: \[\alpha_{k+1}=f_{k}\left(o_{k+1}\right),\ o_{k+1}=\alpha_{k}W_{k}+b_{k} \tag{2}\] where \(\alpha_{k+1}\) represents the output of the neural network. Considering a classification task (with \(M\) classes) with a unitary batch size, the _Cross-Entropy_ loss function is formulated as: \[\mathcal{H}(y,\ \alpha_{k+1})=-\sum_{i=0}^{M-1}y^{i}\log\left(\alpha_{k+1}^{i}\right) \tag{3}\] where \(y^{i}\) represents the \(i^{th}\) element of a one-hot encoded ground truth vector and \(\alpha_{k+1}^{i}\) is the \(i^{th}\) output activation. Using the softmax as activation for the last layer, reported below: \[\alpha_{k+1}^{i}=\frac{e^{o_{k+1}^{i}}}{\sum_{j=1}^{M}e^{o_{k+1}^{j}}} \tag{4}\] the gradient formulas for the last classification layer can be expressed using the chain rule: \[\frac{\partial\mathcal{H}}{\partial W_{k}}=\frac{\partial\mathcal{H}}{\partial\alpha_{k+1}}\frac{\partial\alpha_{k+1}}{\partial o_{k+1}}\frac{\partial o_{k+1}}{\partial W_{k}} \tag{5}\] \[\frac{\partial\mathcal{H}}{\partial b_{k}}=\frac{\partial\mathcal{H}}{\partial\alpha_{k+1}}\frac{\partial\alpha_{k+1}}{\partial o_{k+1}}\frac{\partial o_{k+1}}{\partial b_{k}} \tag{6}\] The final expression for Eq. 5, using Eq. 3 as loss function and Eq. 4 as the non-linearity \(f_{k}\left(\cdot\right)\), is a well-known result that can be easily derived: \[\frac{\partial\mathcal{H}}{\partial W_{k}}=\left(\alpha_{k+1}-y\right)\alpha_{k} \tag{7}\] \[\frac{\partial\mathcal{H}}{\partial b_{k}}=\left(\alpha_{k+1}-y\right) \tag{8}\] Using a stochastic gradient descent optimizer with learning rate \(\eta\), the weight update equation is: \[W_{k}^{i+1}=W_{k}^{i}-\eta\left(\alpha_{k+1}-y\right)\alpha_{k} \tag{9}\] \[b_{k}^{i+1}=b_{k}^{i}-\eta\left(\alpha_{k+1}-y\right) \tag{10}\] Therefore, in CWR* the temporary weights \(tW_{k}\) (lines 10 and 12 of Alg. 1) are updated according to Equations 9 and 10, whose quantization is discussed in the next section.

### Quantization Strategy

Our approach considers two different quantizations: the former uses 1 bit (also called binarization) to represent the weights and activations of the pre-trained backbone; the latter is used in the last classification layer, to quantize both forward and backward operations. This solution both reduces the latency and simplifies the adaptation of the model to the new items/classes encountered. In particular, for the last layer quantization we followed the scheme proposed in [11] and implemented in the GEMMLOWP library [12]. The quantized output of a 32-bit floating point linear layer, reported in Eq. 2, can be represented as: \[\overline{o_{k+1}^{\,int\_q}}=\mathrm{cast\_to\_int\_q}\left[s_{k}^{\,int\_32}\left(\overline{W_{k}^{\,int\_q}}\,\alpha_{k}^{\,int\_q}+\overline{b_{k}^{\,int\_q}}\right)\right] \tag{11}\] The quantization of Eq. 11 depends on the number of quantization bits \(q\) used (8, 16, 32); \(\overline{\,\cdot\,}\) represents the quantized version of a tensor and \(s_{k}^{\,int\_32}\) is the fixed-point scaling factor having 32-bit precision, as shown in Fig. 2.

Figure 1: Double quantization scheme that uses a different quantization level for the weights/activations used in the forward and backward passes.
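A toy fixed-point rendering of Eq. 11 may help make the notation concrete. This is a schematic sketch: the symmetric per-tensor rounding and the bit choices are illustrative stand-ins for the actual GEMMLOWP scheme.

```python
import numpy as np

def to_int_q(x, q):
    """Quantize a float tensor to signed q-bit integers with a per-tensor scale."""
    scale = np.abs(x).max() / (2 ** (q - 1) - 1)
    return np.round(x / scale).astype(np.int64), scale

def quantized_linear(W, a, b, q=8):
    """Integer evaluation of o = a W + b in the spirit of Eq. 11: a wide integer
    accumulator is rescaled (the role of s^int_32) and cast back to int-q."""
    W_q, sw = to_int_q(W, q)
    a_q, sa = to_int_q(a, q)
    b_q = np.round(b / (sw * sa)).astype(np.int64)   # bias in accumulator scale
    acc = a_q @ W_q + b_q                            # 64-bit accumulator
    o_q, so = to_int_q(acc * (sw * sa), q)           # rescale + cast_to_int_q
    return o_q * so                                  # dequantized, for comparison

rng = np.random.default_rng(0)
W, a, b = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=3)
print(np.round(quantized_linear(W, a, b), 2), np.round(a @ W + b, 2))
```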
Similarly to previous works [13, 14], we used the straight-through estimator (STE) approach to approximate differentiation through discrete variables; STE represents a simple and hardware-friendly method to deal with the computation of the derivative of discrete variables that are zero almost everywhere. Based on the results reported in [15, 16, 17], the quantization of the gradients in Equations 7 and 8 represents the main cause of accuracy degradation during training, and therefore we propose to use two separate versions of the layer weights \(W_{k}\), one with low precision (\(lp\_q\)) and another with higher precision (\(hp\_q\)). As shown in Fig. 1, the idea is to use the \(lp\_q\) version of the weights for the computations that have strict timing deadlines (forward pass), while the \(hp\_q\) version is adopted during the weight update step (Equations 9 and 10), which typically has more relaxed timing constraints (it can also be executed as a background process). Every time a new high-precision copy of the weights is computed, a lower-precision version is derived from it and stored. Gradient quantization inevitably introduces an approximation error that can affect the accuracy of the model; to check the amount of approximation for different quantization levels, for each mini-batch we compute the Mean Absolute Error (MAE, in percentage) between the floating point gradient and the quantized one for the weight tensor of the CWR* layer (for the dataset CORe50 [6]). The MAE is then accumulated over all training mini-batches of each experience, as shown in Fig. 3. In order to evaluate only the quantization error introduced, both floating-point and quantized gradients are computed starting from the same weights \(W_{k}^{i}\) (Eq. 9). The plot curves of Figures 3a and 3b refer respectively to the _quicknet_ [9] and _realtobinary_ [18] models; it is evident that the quantization error introduced using \(lp\_q\) with 8 bits is much larger compared to higher quantization schemes (16/32 bits or floating point), whose gap w.r.t. the floating point implementation is quite low, as pointed out in Section 5.

## 4 Experiments

We evaluate the proposed approach on three classification datasets: CORe50, CIFAR10 and CIFAR100, with different BNN architectures. The BNN models employed for CORe50 have been pre-trained on ImageNet [19] and taken from the Larq repository1; instead, the models used for CIFAR10 and CIFAR100 have been pre-trained on Tiny Imagenet5.

Figure 2: Quantization scheme adopted using \(q\) bits for weights and activations.

For each dataset, we conducted several tests using a different number of quantization bits with the same training procedure. Our work is targeting a model that could continuously learn and therefore we limited the number of epochs to **10** for the first experience and to **5** for the remaining. The results of Eqs. 7 and 9 require the adoption of Cross-Entropy as loss function and Stochastic Gradient Descent (SGD) as optimizer; the choice of SGD is encouraged as it requires a simple computation with a limited overhead compared to the Adam [20] optimizer.
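The floating-point version of this last-layer update (Eqs. 7–10) fits in a few lines; the following NumPy sketch uses illustrative shapes and names, before any quantization is applied.

```python
import numpy as np

def head_sgd_step(W_k, b_k, alpha_k, y, lr=0.01):
    """One SGD step on the output head with softmax + cross-entropy.
    alpha_k: input activations (D,); y: one-hot target (M,); W_k: (D, M)."""
    o = alpha_k @ W_k + b_k                    # Eq. 2, pre-activation
    alpha_k1 = np.exp(o - o.max())
    alpha_k1 /= alpha_k1.sum()                 # Eq. 4, softmax output
    err = alpha_k1 - y                         # common factor of Eqs. 7 and 8
    W_k = W_k - lr * np.outer(alpha_k, err)    # Eq. 9
    b_k = b_k - lr * err                       # Eq. 10
    return W_k, b_k

rng = np.random.default_rng(0)
D, M = 8, 4
W, b = 0.1 * rng.normal(size=(D, M)), np.zeros(M)
x, y = rng.normal(size=D), np.eye(M)[2]
W, b = head_sgd_step(W, b, x, y)
```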
The binarization of weights and activations always happens at training time, using an approximation of the gradient of the _sign_ function (the STE introduced in Section 3.2, or derived solutions that are model dependent). Footnote 5: [http://cs231n.stanford.edu/tiny-imagenet-200.zip](http://cs231n.stanford.edu/tiny-imagenet-200.zip) Hereafter we provide some details on the datasets and related CL protocols:

**CORe50 [6]**: It is a dataset specifically designed for Continuous Object Recognition containing a collection of **50** domestic objects belonging to **10** categories. The dataset has been collected in **11** distinct sessions (**8** indoor and **3** outdoor) characterized by different backgrounds and lighting. For the continuous learning scenarios (NI, NC) we use the same test set composed of sessions **#3**, **#7** and **#10**. The remaining **8** sessions are split into batches and provided sequentially during training, obtaining **9** experiences for the NC scenario and **8** for NI. No augmentation procedure has been implemented, since the dataset already contains enough variability in terms of rotations, flips and brightness variation. The input RGB image is standardized and rescaled to the size of \(\mathbf{128\times 128\times 3}\).

Figure 3: Accumulation of gradient quantization errors (Mean Absolute Error in percentage) between quantized and floating-point versions for each experience. During the first experience the gradient computation is always executed in floating-point.

**CIFAR10 and CIFAR100 [7]**: Due to the lower number of classes, the NC scenario for CIFAR10 contains 5 experiences (adding 2 classes for each experience) while 10 are used for CIFAR100. For both datasets the NI scenario is composed of 10 experiences. Similar to CORe50, the test set does not change over the experiences. The RGB images are scaled to the interval [-1.0;+1.0] and the following data augmentation was used: zero padding of 4 pixels for each side, a random 32x32 crop and a random horizontal flip. No augmentation is used at test time. On the CORe50 dataset, we evaluated the three binary models reported below:

**Realtobinary [18]**: This network proposes a real-to-binary attention matching mechanism that aims to match spatial attention maps computed at the output of the binary and real-valued convolutions. In addition, the authors proposed to use the real-valued activations of the binary network, before the binarization of the next layer, to compute scaling factors used to rescale the activations produced by the binary convolution.

**Quicknet and QuicknetLarge [9]**: This network follows the previous works [21; 22; 18], proposing a sequence of blocks, each one with a different number of binary \(\mathbf{3\times 3}\) convolutions and residual connections over each layer. Transition blocks between each residual section halve the spatial resolution and increase the filter count. QuicknetLarge employs more blocks and feature maps to increase accuracy.
For the CIFAR10 and CIFAR100 datasets, whose input resolution is 32x32, we evaluated the following networks (pre-trained on Tiny Imagenet):

**BiRealNet [21]**: It is a modified version of the classical ResNet that proposes to preserve the real activations before the sign function, to increase the representational capability of the 1-bit CNN, through a simple shortcut. Bi-RealNet adopts a tight approximation to the derivative of the non-differentiable sign function with respect to activations, and a magnitude-aware gradient to update weight parameters. We used the instance of the network that uses _18 layers_6. Footnote 6: Refer to the following [https://github.com/liuzechun/Bi-Real-net](https://github.com/liuzechun/Bi-Real-net) repository for all the details.

**ReactNet [23]**: To further compress compact networks, this model constructs a baseline based on MobileNetV1 [1] and adds shortcuts to bypass every 1-bit convolutional layer that has the same number of input and output channels. The \(\mathbf{3\times 3}\) depth-wise and the \(\mathbf{1\times 1}\) point-wise convolutional blocks of MobileNet are replaced by \(\mathbf{3\times 3}\) and \(\mathbf{1\times 1}\) vanilla convolutions in parallel with shortcuts in ReactNet7. As for Bi-RealNet, we tested the version of ReactNet that uses _18 layers_.

Our tests were performed on both the NI and NC scenarios (discussed in Section 2.1). Figures 4, 5 and 6 summarize the experimental results. On the CORe50 dataset (Fig. 4), NC scenario, the quantization scheme \(lp\_8\) shows a consistent accuracy drop over the experiences, revealing a limited learning capability; instead, the quantizations with \(lp\_16\) and \(lp\_32\) reach the same accuracy level as the floating point model. A similar situation can be observed in the NI scenario, with the exception of the QuicknetLarge model, where the lower quantization schemes are not able to increase the accuracy of the first experience. For the CIFAR10 and CIFAR100 datasets (Figs. 5 and 6) we find similar results for the NI scenario, where the 8-bit quantization scheme limits the learning capability of the model during the experiences. Instead, in the NC scenario, both Bi-RealNet and ReactNet models with \(lp\_8\) quantization are able to reach an accuracy close to the floating-point model.

Figure 4: CORe50 accuracy results using different quantization methods.

From our analysis it appears that the 8-bit quantization of the gradients noticeably limits the learning ability of a binary model when employed in a continual learning scenario with the CWR* method. In order to reach accuracy comparable to a floating point implementation we advise the adoption of at least 16 bits for both _lp_ and _hp_; it is worth noting that the computational effort of 16 bits is anyway limited in CWR*, because the quantization is confined to the last classification layer.

## 5 Conclusion

On-device training (or adaptation) can play an essential role in the IoT, enabling the large-scale adoption of deep learning solutions. In this work we focused on the implementation of CWR* on edge devices, relying on binary neural networks as backbone and proposing an ad-hoc quantization scheme. We discovered that 8-bit quantization degrades too much the learning capability of the model, while 16 bits is a good compromise.
To the best of the authors' knowledge, this work is the first to explore on-device continual learning with binary networks; in future work we intend to explore the application of binary neural networks in combination with CL methods relying on latent replay [24], which is particularly intriguing given the low memory footprint of 1-bit activations.

Figure 5: CIFAR10 accuracy results using different quantization methods.
2308.08359
Membrane Potential Batch Normalization for Spiking Neural Networks
As one of the energy-efficient alternatives of conventional neural networks (CNNs), spiking neural networks (SNNs) have gained more and more interest recently. To train the deep models, some effective batch normalization (BN) techniques have been proposed for SNNs. All these BNs are suggested to be used after the convolution layer, as usually done in CNNs. However, the spiking neuron is much more complex with the spatio-temporal dynamics. The regulated data flow after the BN layer will be disturbed again by the membrane potential updating operation before the firing function, i.e., the nonlinear activation. Therefore, we advocate adding another BN layer before the firing function to normalize the membrane potential again, called MPBN. To eliminate the induced time cost of MPBN, we also propose a training-inference-decoupled re-parameterization technique to fold the trained MPBN into the firing threshold. With the re-parameterization technique, the MPBN will not introduce any extra time burden in the inference. Furthermore, the MPBN can also adopt the element-wised form, while these BNs after the convolution layer can only use the channel-wised form. Experimental results show that the proposed MPBN performs well on both popular non-spiking static and neuromorphic datasets. Our code is open-sourced at \href{https://github.com/yfguo91/MPBN}{MPBN}.
Yufei Guo, Yuhan Zhang, Yuanpei Chen, Weihang Peng, Xiaode Liu, Liwen Zhang, Xuhui Huang, Zhe Ma
2023-08-16T13:32:03Z
http://arxiv.org/abs/2308.08359v1
# Membrane Potential Batch Normalization for Spiking Neural Networks

###### Abstract

As one of the energy-efficient alternatives of conventional neural networks (CNNs), spiking neural networks (SNNs) have gained more and more interest recently. To train the deep models, some effective batch normalization (BN) techniques have been proposed for SNNs. All these BNs are suggested to be used after the convolution layer, as usually done in CNNs. However, the spiking neuron is much more complex with the spatio-temporal dynamics. The regulated data flow after the BN layer will be disturbed again by the membrane potential updating operation before the firing function, i.e., the nonlinear activation. Therefore, we advocate adding another BN layer before the firing function to normalize the membrane potential again, called MPBN. To eliminate the induced time cost of MPBN, we also propose a training-inference-decoupled re-parameterization technique to fold the trained MPBN into the firing threshold. With the re-parameterization technique, the MPBN will not introduce any extra time burden in the inference. Furthermore, the MPBN can also adopt the element-wised form, while these BNs after the convolution layer can only use the channel-wised form. Experimental results show that the proposed MPBN performs well on both popular non-spiking static and neuromorphic datasets. Our code is open-sourced at MPBN.

## 1 Introduction

Emerged as a biology-inspired method, spiking neural networks (SNNs) have received much attention in artificial intelligence and neuroscience recently [17, 13, 57, 56, 47, 58, 59]. SNNs deal with binary event-driven spikes as their activations, and therefore the multiplications of activations and weights can be substituted with additions, or simply skipped when the neuron keeps silent. Benefiting from such a computation paradigm, SNNs enjoy extreme energy efficiency and run efficiently when implemented on neuromorphic hardware [1, 42, 5]. Although the SNN has achieved great success in diverse fields including pattern recognition [12, 21, 19, 14], object detection [30], language processing [55], robotics [9], and so on, its development is deeply inspired by the experience of convolutional neural networks (CNNs) in many aspects. However, the spiking neuron model along with its rich spatio-temporal dynamics makes SNNs much different from CNNs, and directly transferring some experience of CNNs to SNNs without any modifications may not be a good idea. As one of the famous techniques in CNNs, the batch normalization (BN) technique shows great advantages. It can reduce the gradient exploding/vanishing problem, flatten the loss landscape, and reduce the internal covariate shift, thus being widely used in CNNs. There are also some works trying to apply normalization approaches in the SNN field to help model convergence. For example, inspired by BN in CNNs, NeuNorm [51] was proposed to normalize the data along the channel dimension. Considering that the temporal dimension is also important in SNNs, threshold-dependent batch normalization (tdBN) [62] then extended the scope of BN to the additional temporal dimension. Subsequently, to better depict the differences of data flow distributions in different time dimensions, the temporal batch normalization through time (BNTT) [31], postsynaptic potential normalization (PSP-BN) [28], and temporal effective batch normalization (TEBN) [10], which regulate the data flows with multiple BNs on different time steps, were proposed.
However, all these BNs proposed in SNNs are advised to be used after convolution layers, as usually done in CNNs. This ignores the fact that the nonlinear transformation in the SNN spiking neuron is much more complex than that of the ReLU neuron. In the spiking neuron, the data flow after the convolution layer will first be injected into the residual membrane potential (MP) coming from the previous time step to generate a new MP at the current time step. Then the neuron will fire a spike or remain silent based on whether or not the new MP reaches the firing threshold. Obviously, though the data flow has been normalized by the BN after the convolution layer, it will be disturbed again by the residual MP in the membrane potential updating process. Therefore, we advocate also adding a BN layer after MP updating to regulate the data flow once again, called MPBN. Furthermore, we also propose a training-inference-decoupled re-parameterization technique in SNNs to fold the trained MPBN into the firing threshold. Hence, the MPBN will not induce any extra burden in the inference, but only a trivial burden in the training. The MPBN can be further extended to channel-wised MPBN and element-wised MPBN, which is very different from the case of CNNs, where only channel-wised normalization can be folded into the weights. The difference between our SNN with MPBN and the vanilla SNN is illustrated in Fig. 1. Our main contributions are as follows:

* We propose to add another BN layer after the membrane potential updating operation, named MPBN, to handle the data flow disturbance in the spiking neuron. The experiments show that MPBN can flatten the loss landscape further, thus benefiting model convergence and task accuracy.
* We also propose a re-parameterization method to decouple the training-time SNN and the inference-time SNN. Specifically, we propose a method to fold the trained MPBN parameters into the firing threshold. Therefore, MPBN can be seen as a training-only auxiliary, free from burdens in the inference. This re-parameterization method is suitable for both channel-wised MPBN and element-wised MPBN.
* Extensive experimental results show that the SNN trained with the MPBN is highly effective compared with other state-of-the-art SNN models on both static and dynamic datasets, e.g., 96.47% top-1 accuracy and 79.51% top-1 accuracy are achieved on CIFAR-10 and CIFAR-100 with only 2 time steps.

## 2 Related Work

### Learning of Spiking Neural Networks

There are three kinds of learning algorithms for SNNs: unsupervised learning [43, 24], converting ANN to SNN (ANN2SNN) [46, 25, 26], and supervised learning [19, 38, 20]. Unsupervised learning adopts a biological mechanism to update the SNN model, e.g., the spike-timing-dependent plasticity (STDP) approach [39], thus being considered a biologically plausible method. However, STDP cannot help train large-scale networks yet; thus it is usually limited to small datasets and non-ideal performance. The ANN-SNN conversion approach [22, 37] obtains an SNN by reusing well-trained homogeneous ANN parameters and replacing the ReLU neuron with a spiking neuron. Since the ANN model is easier to train and reaches high performance, the ANN-SNN conversion method provides an interesting way to generate an SNN in a short time with competitive performance. However, the converted SNN will lose the rich temporal dynamic behaviors and thus cannot handle neuromorphic datasets well.
Supervised learning [11, 50, 18] adopts the surrogate gradient (SG) approach to train SNNs with error backpropagation. It can handle temporal data and provide decent performance with few time steps on large-scale datasets, thus having received much attention recently. For a more detailed introduction, please refer to the recent SNN survey [17]. Our work falls under supervised learning.

### Normalization in Spiking Neural Networks

The batch normalization technique was originally introduced as a kind of training auxiliary method by [29] in CNNs. It uses the weight-summed input over a mini-batch of training cases to compute a mean and variance and then uses them to regulate the summed input. This simple operation brings many benefits. i) It reduces the internal covariate shift (ICS), thus accelerating the training of a deep neural network. ii) It makes the network insensitive to the scale of the gradients, thus a higher learning rate can be chosen to accelerate the training. iii) It makes the network suitable for more nonlinearities by preventing the network from getting stuck in saturated modes. With these advantages, more kinds of BNs were proposed, including layer normalization [2], group normalization [52], instance normalization [48], and switchable normalization [40].

Figure 1: The difference between our SNN with MPBN and the vanilla SNN. We add another BN layer after the membrane potential updating (MPU) operation in the training. The MPBN can be folded into the firing threshold, and then the homogeneous firing threshold will be transformed into different ones.

There are also some works that modify and apply normalization approaches in the SNN field. For example, NeuNorm [51] also normalizes the feature map along the channel dimension, like BN in CNNs. Recently, some methods were proposed to normalize the feature map along both the channel dimension and the temporal dimension to take care of the spatio-temporal characteristics of the SNN, such as the threshold-dependent batch normalization (tdBN) [62]. It extends the scope of BN to the additional temporal dimension by adopting a 3DBN-like normalization method from CNNs. Note that the tdBN can be folded into the weights, thus inducing no burden in the inference time. Nevertheless, NeuNorm and tdBN still use shared parameters along the temporal dimension. Some works argued that the distributions of data at different time steps vary wildly and that using shared parameters is not a good choice. Subsequently, the temporal batch normalization through time (BNTT) [31], postsynaptic potential normalization (PSP-BN) [28], and temporal effective batch normalization (TEBN) [10] were proposed. These BNs regulate the data flow utilizing different parameters through the time steps. Though these BNs with different parameters on different time steps can train better-performing SNN models, their parameters cannot be folded into the weights, and thus will increase the computations and running time in the inference. Nevertheless, all these BNs in the SNN field are advised to be used after convolution layers. However, the data flow after the convolution layer will not be presented to the firing function directly, but to the membrane potential updating function first. Hence, the data flow will be disturbed again before reaching the firing function. To this end, in this paper we add another BN after the membrane potential updating function, called MPBN, to retain a normalized data flow before the firing function.
## 3 Preliminary

### Leaky Integrate-and-Fire Model

Different from CNNs, SNNs use binary spikes to transmit information. In the paper, we use the widely used Leaky-Integrate-and-Fire (LIF) neuron model [41] to introduce the unique spatio-temporal dynamics of the spiking model. First, we introduce the notation rules used here. Vectors or tensors are denoted by bold italic letters, i.e., \(\mathbf{x}\) and \(\mathbf{o}\) represent the input and output variables respectively. Matrices are denoted by bold capital letters. For instance, \(\mathbf{W}\) is the weight matrix. Constants are denoted by small letters. In LIF, the membrane potential is updated by \[\mathbf{u}^{(t+1),\text{pre}}=\tau\mathbf{u}^{(t)}+\mathbf{c}^{(t+1)},\text{ where }\mathbf{c}^{(t+1)}=\mathbf{W}\mathbf{x}^{(t+1)}, \tag{1}\] where \(\mathbf{u}\) represents the membrane potential and \(\mathbf{u}^{(t+1),\text{pre}}\) is the updated membrane potential at time step \(t+1\), \(\mathbf{c}^{(t+1)}\) is the pre-synaptic input at time step \(t+1\), which is charged by the weight-summed input spikes \(\mathbf{x}^{(t+1)}\), and \(\tau\) is a constant within \((0,1)\), which controls the leakage of the membrane potential. Then, when the updated membrane potential \(\mathbf{u}^{(t+1),\text{pre}}\) reaches the firing threshold \(V_{\text{th}}\), the LIF spiking neuron will fire a spike as below, \[\mathbf{o}^{(t+1)}=\begin{cases}1&\text{if }\mathbf{u}^{(t+1),\text{pre}}>V_{\text{th}}\\ 0&\text{otherwise}\end{cases}, \tag{2}\] \[\mathbf{u}^{(t+1)}=\mathbf{u}^{(t+1),\text{pre}}\cdot(1-\mathbf{o}^{(t+1)}).\] After firing, the spike output \(\mathbf{o}^{(t+1)}\) at time step \(t+1\) will be transmitted to the next layer and become its input. At the same time, the updated membrane potential will be reset to zero at the firing positions and becomes \(\mathbf{u}^{(t+1)}\), which joins the neuron processing at the next time step.

**The Classifier in the SNN.** In a classification model, the final output is used to compute the \(\operatorname{Softmax}\) and predict the desired class object. In an SNN model, if we also use LIF neurons at the output layer to fire spikes and use the number of spikes to compute the probability, too much information will be lost. Therefore, we only integrate the outputs and do not fire them across time, as done in recent work [20, 21, 12]. \[\mathbf{o}_{\text{out}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{c}_{\text{out}}^{(t)}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{W}\mathbf{x}^{(t)}. \tag{3}\] Then, the cross-entropy loss is computed based on the true label and \(\operatorname{Softmax}(\mathbf{o}_{\text{out}})\).

### Batch Normalization in SNNs

Batch normalization can effectively reduce the internal covariate shift and alleviate the gradient vanishing or explosion problem when training networks, thus having been widely used in CNNs. Fortunately, BN can also be used in SNNs. Considering a spiking neuron with input \(\mathbf{c}=\{\mathbf{c}^{(1)},\mathbf{c}^{(2)},\dots,\mathbf{c}^{(t)},\dots\}\), where \(t\) is the time step, BN regulates the input at each time step as follows, \[\tilde{\mathbf{c}}_{i}^{(t)}=\frac{\mathbf{c}_{i}^{(t)}-\mathbf{\mu}_{i}}{\sqrt{\mathbf{\sigma}_{i}^{2}+\epsilon}}, \tag{4}\] where \(\mathbf{c}_{i}^{(t)}\) is the input in the \(i\)-th channel at the \(t\)-th time step, \(\mathbf{\mu}_{i}\) and \(\mathbf{\sigma}_{i}\) are the mean and variance of the input along the channel dimension, and \(\epsilon\) is a small constant to avoid the denominator being zero. To ensure BN can represent the identity transformation, the normalized vector \(\tilde{\mathbf{c}}_{i}^{(t)}\) is scaled and shifted in a learnable manner as follows, \[\operatorname{BN}(\mathbf{c}_{i}^{(t)})=\mathbf{\lambda}_{i}\tilde{\mathbf{c}}_{i}^{(t)}+\mathbf{\beta}_{i}, \tag{5}\] where \(\mathbf{\lambda}_{i}\) and \(\mathbf{\beta}_{i}\) are channel-wised learnable parameters.
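A minimal sketch of the LIF dynamics of Eqs. 1–2 and the integrate-only classifier of Eq. 3 in PyTorch-style Python; tensor shapes and parameter values are illustrative assumptions.

```python
import torch

def lif_forward(c_seq, tau=0.25, v_th=1.0):
    """Run the LIF dynamics of Eqs. 1-2 over pre-synaptic inputs c_seq,
    a tensor of shape (T, N) holding c^(t) = W x^(t) for each time step."""
    u = torch.zeros_like(c_seq[0])
    out = []
    for c in c_seq:
        u_pre = tau * u + c              # Eq. 1: leak + charge
        o = (u_pre > v_th).float()       # Eq. 2: fire where u_pre > V_th
        u = u_pre * (1.0 - o)            # Eq. 2: hard reset where fired
        out.append(o)
    return torch.stack(out)

def snn_logits(c_out_seq):
    """Classifier head of Eq. 3: average the unfired outputs over T steps."""
    return c_out_seq.mean(dim=0)

spikes = lif_forward(torch.randn(4, 10))   # T = 4 time steps, N = 10 neurons
```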
### Batch Normalization in SNNs

Batch normalization can effectively reduce the internal covariate shift and alleviate the gradient vanishing or explosion problem when training networks, and has thus been widely used in CNNs. Fortunately, BN can also be used in SNNs. Considering a spiking neuron with input \(\mathbf{c}=\{\mathbf{c}^{(1)},\mathbf{c}^{(2)},\dots,\mathbf{c}^{(t)},\dots\}\), where \(t\) is the time step, BN regulates the input at each time step as follows, \[\tilde{\mathbf{c}}_{i}^{(t)}=\frac{\mathbf{c}_{i}^{(t)}-\mathbf{\mu}_{i}}{\sqrt{\mathbf{\sigma}_{i}^{2}+\epsilon}}, \tag{4}\] where \(\mathbf{c}_{i}^{(t)}\) is the input in the \(i\)-th channel at the \(t\)-th time step, \(\mathbf{\mu}_{i}\) and \(\mathbf{\sigma}_{i}\) are the mean and variance of the input in the channel dimension, and \(\epsilon\) is a small constant that keeps the denominator from being zero. To ensure BN can represent the identity transformation, the normalized vector \(\tilde{\mathbf{c}}_{i}^{(t)}\) is scaled and shifted in a learnable manner as follows, \[\operatorname{BN}(\mathbf{c}_{i}^{(t)})=\mathbf{\lambda}_{i}\tilde{\mathbf{c}}_{i}^{(t)}+\mathbf{\beta}_{i}, \tag{5}\] where \(\mathbf{\lambda}_{i}\) and \(\mathbf{\beta}_{i}\) are channel-wised learnable parameters.

## 4 Methodology

This section first introduces the specific form of membrane potential batch normalization. Then the re-parameterization technique for folding the MPBN into \(V_{\mathrm{th}}\) is introduced in detail. Next, some key details for training the SNN and the pseudocode for the training and inference of our SNN are given. Finally, we provide plenty of ablation studies and a comparison of the loss landscapes of models with and without MPBN to show the effectiveness of the proposed method.

### Membrane Potential Batch Normalization

As mentioned above, we argue that though the data flow has been normalized by the BN after the convolution layer, it is disturbed again by the membrane potential updating operation. To better depict this, we first give the vanilla form of a LIF neuron with BN as follows, \[\mathbf{u}^{(t+1),\text{pre}}=\tau\mathbf{u}^{(t)}+\mathrm{BN}(\mathbf{W}\mathbf{x}^{(t+1)}), \tag{6}\] where \(\tau\) is 0.25 in the paper following [21, 38, 4]. To regulate the disturbed data flow once again, we further embed another BN after the membrane potential updating operation, called MPBN. The LIF neuron with MPBN is updated as \[\tilde{\mathbf{u}}^{(t+1),\text{pre}}=\mathrm{MPBN}(\mathbf{u}^{(t+1),\text{pre}}). \tag{7}\] Obviously, \(\mathbf{u}^{(t+1),\text{pre}}\) will be scaled and shifted: some membrane potentials below \(V_{\mathrm{th}}\) may exceed it after MPBN, and vice versa. This deviates from the biological picture, and MPBN causes some extra computation burden in the inference compared with the vanilla neuron. To solve this problem, we also propose a training-inference-decoupled re-parameterization technique here.

### Re-parameterization

With MPBN, the firing function is updated as \[\mathbf{o}^{(t+1)}=\begin{cases}1&\text{if }\mathrm{MPBN}(\mathbf{u}^{(t+1),\text{pre}})>V_{\mathrm{th}}\\ 0&\text{otherwise}\end{cases}. \tag{8}\] If we unfold the MPBN, the above equation can be re-organized as \[\mathbf{o}_{i}^{(t+1)}=\begin{cases}1&\text{if }\mathbf{\lambda}_{i}\frac{\mathbf{u}_{i}^{(t+1),\text{pre}}-\mathbf{\mu}_{i}}{\sqrt{\mathbf{\sigma}_{i}^{2}}}+\mathbf{\beta}_{i}>V_{\mathrm{th}}\\ 0&\text{otherwise}\end{cases}. \tag{9}\] By folding the MPBN into \(V_{\mathrm{th}}\), the firing function is further updated as \[\mathbf{o}_{i}^{(t+1)}=\begin{cases}1&\text{if }\mathbf{u}^{(t+1),\text{pre}}>(\mathbf{\tilde{V}}_{\mathrm{th}})_{i}\\ 0&\text{otherwise}\end{cases}, \tag{10}\] \[\text{where }(\mathbf{\tilde{V}}_{\mathrm{th}})_{i}=\frac{(V_{\mathrm{th}}-\mathbf{\beta}_{i})\sqrt{\mathbf{\sigma}_{i}^{2}}}{\mathbf{\lambda}_{i}}+\mathbf{\mu}_{i}.\] It can be seen that by absorbing the parameters of MPBN, \(V_{\mathrm{th}}\) is transformed into a channel-wised \((\mathbf{\tilde{V}}_{\mathrm{th}})_{i}\). In this way, the extra computation burden caused by MPBN is eliminated at inference time. Furthermore, the diversity of the spiking neuron is improved by the resulting abundance of firing thresholds, similar to the learnable firing thresholds in other work [3, 49].
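The folding in Eq. (10) is a one-time transformation of the trained MPBN statistics. Below is a small sketch, assuming the MPBN is a standard `torch.nn.BatchNorm2d` over the membrane potential and that the learned scales \(\mathbf{\lambda}_{i}\) are positive (a negative scale would flip the inequality); unlike Eq. (10), the BN epsilon is kept inside the square root for numerical safety.

```python
import torch

def fold_mpbn_into_threshold(bn, v_th):
    """Fold trained MPBN statistics into channel-wise firing thresholds, Eq. (10).

    bn:   a trained torch.nn.BatchNorm2d applied to the membrane potential
    v_th: the scalar firing threshold used during training
    """
    lam, beta = bn.weight.data, bn.bias.data          # lambda_i and beta_i
    mu, var = bn.running_mean, bn.running_var         # mu_i and sigma_i^2
    # Eq. (10); bn.eps is kept inside the square root for numerical safety
    return (v_th - beta) * torch.sqrt(var + bn.eps) / lam + mu

# sanity check that the firing decisions agree before and after folding
torch.manual_seed(0)
bn = torch.nn.BatchNorm2d(4)
with torch.no_grad():
    bn.weight.uniform_(0.5, 1.5)       # positive scales; a negative lambda_i
    bn.bias.uniform_(-0.2, 0.2)        # would flip the inequality in Eq. (10)
    bn.running_mean.uniform_(-0.1, 0.1)
    bn.running_var.uniform_(0.8, 1.2)
bn.eval()

u_pre = torch.randn(8, 4, 16, 16)                     # updated membrane potentials
v_th = 0.5
fire_train = bn(u_pre) > v_th                         # training-time form, Eq. (8)
v_th_new = fold_mpbn_into_threshold(bn, v_th).view(1, -1, 1, 1)
fire_infer = u_pre > v_th_new                         # inference-time form, Eq. (10)
print(bool((fire_train == fire_infer).all()))         # True
```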
### Training Framework

In the paper, the spatial-temporal backpropagation (STBP) algorithm [51] is adopted to train the SNN models. STBP treats the SNN model as a self-recurrent neural network, thus enabling an error backpropagation mechanism following the same principles as in CNNs. However, there is still a problem impeding the direct training of SNNs. To demonstrate this problem, we formulate the gradient at layer \(l\) by the chain rule, given by \[\frac{\partial L}{\partial\mathbf{W}^{l}}=\sum_{t}(\frac{\partial L}{\partial\mathbf{o}^{(t),l}}\frac{\partial\mathbf{o}^{(t),l}}{\partial\mathbf{u}^{(t),l}}+\frac{\partial L}{\partial\mathbf{u}^{(t+1),l}}\frac{\partial\mathbf{u}^{(t+1),l}}{\partial\mathbf{u}^{(t),l}})\frac{\partial\mathbf{u}^{(t),l}}{\partial\mathbf{W}^{l}}, \tag{11}\] where \(\frac{\partial\mathbf{o}^{(t),l}}{\partial\mathbf{u}^{(t),l}}\) is the gradient of the firing function at the \(t\)-th time step in the \(l\)-th layer. Obviously, the non-differentiable firing activity of the spiking neuron results in gradients that are zero almost everywhere and infinite at \(V_{\mathrm{th}}\). Therefore, the gradient descent \((\mathbf{W}^{l}\leftarrow\mathbf{W}^{l}-\eta\frac{\partial L}{\partial\mathbf{W}^{l}})\) either freezes or updates to infinity in the backpropagation. To handle this problem, we adopt the commonly used STE surrogate gradient, as done in other surrogate gradient (SG) methods [44, 20]. Mathematically, it is defined as: \[\frac{d\mathbf{o}}{d\mathbf{u}}=\left\{\begin{array}{ll}1,&\text{if }0\leq\mathbf{u}\leq 1\\ 0,&\text{otherwise}\end{array}\right.. \tag{12}\] Then, the SNN model can be trained end-to-end. The training and inference of our SNN are detailed in Algo. 1.
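The STE surrogate of Eq. (12) can be written as a custom autograd function; the following is an illustrative sketch, not the authors' implementation.

```python
import torch

class STEFiring(torch.autograd.Function):
    """Heaviside firing in the forward pass, STE surrogate in the backward pass, Eq. (12)."""

    @staticmethod
    def forward(ctx, u, v_th=0.5):
        ctx.save_for_backward(u)
        return (u > v_th).float()

    @staticmethod
    def backward(ctx, grad_output):
        u, = ctx.saved_tensors
        # pass the gradient through only where 0 <= u <= 1, zero elsewhere
        mask = ((u >= 0) & (u <= 1)).float()
        return grad_output * mask, None

u = torch.randn(5, requires_grad=True)
o = STEFiring.apply(u)
o.sum().backward()
print(u.grad)  # 1 inside [0, 1], 0 outside
```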
### Ablation Study

To verify the effectiveness of the MPBN, a set of ablation studies using the spiking ResNet20 architecture with different time steps was conducted on the CIFAR-10 and CIFAR-100 datasets. The top-1 accuracy of these models is shown in Tab. 1. It can be seen that the test accuracy of the SNNs with MPBN is consistently higher than that of their vanilla counterparts. For example, the accuracy of the baseline SNN with 1 time step is 90.40%, while with MPBN it increases to 92.22%, a substantial improvement (nearly 2%) in the SNN field. Moreover, we also show the test accuracy curves of ResNet20 with and without MPBN using 2 time steps on CIFAR-10/100 during training in Fig. 2. It can be clearly observed that the SNNs with MPBN also converge faster. To sum up, the proposed MPBN improves both accuracy and convergence speed, two very important aspects in deep learning.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
Dataset & Method & Time step & Accuracy \\ \hline
\multirow{6}{*}{CIFAR-10} & baseline & 1 & 90.40\% \\
 & w/ MPBN & 1 & 92.22\% \\
 & baseline & 2 & 92.80\% \\
 & w/ MPBN & 2 & 93.54\% \\
 & baseline & 4 & 93.85\% \\
 & w/ MPBN & 4 & 94.28\% \\ \hline
\multirow{6}{*}{CIFAR-100} & baseline & 1 & 67.94\% \\
 & w/ MPBN & 1 & 68.36\% \\
 & baseline & 2 & 70.18\% \\
 & w/ MPBN & 2 & 70.79\% \\
 & baseline & 4 & 71.77\% \\
 & w/ MPBN & 4 & 72.30\% \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Ablation experiments for MPBN.

Figure 2: The accuracy curves of spiking ResNet20 with or without MPBN using 2 time steps on CIFAR-10 (left) and CIFAR-100 (right). The MPBN based SNNs obviously enjoy higher accuracy and easier convergence.

### Loss Landscape

We further inspect the 1D loss landscapes [36] of the SNNs with and without MPBN using the spiking ResNet20 architecture with 2 time steps, shown in Fig. 3, to see why the MPBN can improve accuracy and convergence speed. It can be observed that the loss landscape of the SNN model with MPBN is flatter than that of the SNN without MPBN. This indicates that MPBN makes the landscape of the corresponding optimization problem smoother [36], thus making the gradients more predictive and the network faster to converge. These results provide convincing evidence for the ablation studies in Section 4.4.

Figure 3: The 1D loss landscape of spiking ResNet20 with and without MPBN.

## 5 Experiments

In this section, abundant experiments are conducted to verify the effectiveness of the MPBN using the widely used spiking ResNet20 [44, 46], VGG16 [44], ResNet18 [11], ResNet19 [62], and ResNet34 [11] on static datasets including CIFAR-10 [32], CIFAR-100 [32], and ImageNet [6], and one neuromorphic dataset, CIFAR10-DVS [35]. A specific introduction of these datasets has been detailed in many recent works [62, 44, 21, 38]. Here, we mainly introduce the hyper-parameters and data preprocessing in detail. We used the widely adopted LIF neuron in our SNN models, as in other works on direct training methods [44, 46]. The LIF hyper-parameters, the initial firing threshold \(V_{\mathrm{th}}\) and the membrane potential decaying constant \(\tau_{\mathrm{decay}}\), are \(0.5\) and \(0.25\) respectively. For static image datasets, since encoding the 8-bit RGB images into 1-bit spikes would lose too much information, we use an ANN-like convolutional layer together with a LIF layer to encode the images into spikes for the rest of the layers, as in recent works [62, 44, 21, 38].

### Comparison with SoTA Methods

**CIFAR-10.** On CIFAR-10, we trained our SNN model using the SGD optimizer with 0.9 momentum. The initial learning rate is 0.1 and decays to 0 in cosine form. The total training time is 400 epochs. To fairly compare with recent SoTA methods [38, 20, 16], we also adopt data normalization, random horizontal flipping, cropping, and cutout [8] for data augmentation. We run each experiment three times and report the "mean \(\pm\) std" in Tab. 2.
It can be seen that our models can outperform other methods over all these chosen, widely adopted architectures with fewer time steps. For example, the accuracy of spiking ResNet19 trained with MPBN with only 1 time step reaches 96.06%, while Real Spike [21] needs 6 time steps to reach a comparable result and RecDis-SNN [20] still underperforms by 0.51% even with 6 time steps. This superiority can also be observed in the results regarding spiking ResNet20 and VGG16.

\begin{table}
\begin{tabular}{l l l l c c}
\hline \hline
Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline
\multirow{34}{*}{CIFAR-10} & SpikeNorm [46] & ANN2SNN & VGG16 & 2500 & 91.55\% \\
 & Hybrid-Train [45] & Hybrid training & VGG16 & 200 & 92.02\% \\
 & Spike-basedBP [34] & SNN training & ResNet11 & 100 & 90.95\% \\
 & Joint A-SNN [19] & SNN training & ResNet18 & 4 & 95.45\% \\
 & GLIF [60] & SNN training & ResNet19 & 2 & 94.44\% \\
 & PLIF [12] & SNN training & PLIFNet & 8 & 93.50\% \\ \cline{2-6}
 & \multirow{4}{*}{Diet-SNN [44]} & \multirow{4}{*}{SNN training} & \multirow{2}{*}{VGG16} & 5 & 92.70\% \\
 & & & & 10 & 93.44\% \\
 & & & \multirow{2}{*}{ResNet20} & 5 & 91.78\% \\
 & & & & 10 & 92.54\% \\ \cline{2-6}
 & \multirow{3}{*}{RecDis-SNN [20]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet19} & 2 & 93.64\% \\
 & & & & 4 & 95.53\% \\
 & & & & 6 & 95.55\% \\ \cline{2-6}
 & \multirow{3}{*}{Dspike [38]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet20} & 2 & 93.13\% \\
 & & & & 4 & 93.66\% \\
 & & & & 6 & 94.25\% \\ \cline{2-6}
 & \multirow{3}{*}{STBP-tdBN [62]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet19} & 2 & 92.34\% \\
 & & & & 4 & 92.92\% \\
 & & & & 6 & 93.16\% \\ \cline{2-6}
 & \multirow{3}{*}{TET [7]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet19} & 2 & 94.16\% \\
 & & & & 4 & 94.44\% \\
 & & & & 6 & 94.50\% \\ \cline{2-6}
 & \multirow{3}{*}{Real Spike [21]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet19} & 2 & 95.31\% \\
 & & & & 4 & 95.51\% \\
 & & & & 6 & 96.10\% \\ \cline{2-6}
 & \multirow{2}{*}{InfLoR-SNN [16]} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet20} & 5 & 93.01\% \\
 & & & & 10 & 93.65\% \\ \cline{2-6}
 & \multirow{7}{*}{**MPBN**} & \multirow{7}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 1 & **96.06\%\(\pm 0.10\)** \\
 & & & & 2 & **96.47\%\(\pm 0.08\)** \\
 & & & \multirow{3}{*}{ResNet20} & 1 & **92.22\%\(\pm 0.11\)** \\
 & & & & 2 & **93.54\%\(\pm 0.09\)** \\
 & & & & 4 & **94.28\%\(\pm 0.07\)** \\
 & & & \multirow{2}{*}{VGG16} & 2 & **93.96\%\(\pm 0.09\)** \\
 & & & & 4 & **94.44\%\(\pm 0.08\)** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparison with SoTA methods on CIFAR-10.

**CIFAR-100.** For CIFAR-100, we adopted the same settings as in CIFAR-10. The proposed MPBN also performs well on CIFAR-100. It can be seen that our method gets the best accuracy over all these networks even with fewer time steps.
For instance, the ResNet19 trained with MPBN can achieve 78.71% top-1 accuracy with only 1 time step, which outperforms other SoTA methods such as TET, GLIF, TEBN, and RecDis-SNN by about 1.66%-3.99%, even when they use 4 or 6 time steps.

\begin{table}
\begin{tabular}{l l l l c c}
\hline \hline
Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline
\multirow{27}{*}{CIFAR-100} & SpikeNorm [46] & ANN2SNN & ResNet20 & 2500 & 64.09\% \\
 & RMP [23] & ANN2SNN & ResNet20 & 2048 & 67.82\% \\
 & Hybrid-Train [45] & Hybrid training & VGG11 & 125 & 67.90\% \\
 & IM-Loss [15] & SNN training & VGG16 & 5 & 70.18\% \\ \cline{2-6}
 & \multirow{2}{*}{Joint A-SNN [19]} & \multirow{2}{*}{SNN training} & ResNet18 & 4 & 77.39\% \\
 & & & ResNet34 & 4 & 79.76\% \\ \cline{2-6}
 & \multirow{3}{*}{Dspike [38]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet20} & 2 & 71.68\% \\
 & & & & 4 & 73.35\% \\
 & & & & 6 & 74.24\% \\ \cline{2-6}
 & \multirow{3}{*}{TET [7]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet19} & 2 & 72.87\% \\
 & & & & 4 & 74.47\% \\
 & & & & 6 & 74.72\% \\ \cline{2-6}
 & \multirow{2}{*}{RecDis-SNN [20]} & \multirow{2}{*}{SNN training} & ResNet19 & 4 & 74.10\% \\
 & & & VGG16 & 5 & 69.88\% \\ \cline{2-6}
 & \multirow{2}{*}{InfLoR-SNN [16]} & \multirow{2}{*}{SNN training} & ResNet20 & 5 & 71.19\% \\
 & & & VGG16 & 5 & 71.56\% \\ \cline{2-6}
 & \multirow{2}{*}{Real Spike [21]} & \multirow{2}{*}{SNN training} & ResNet20 & 5 & 66.60\% \\
 & & & VGG16 & 5 & 70.62\% \\ \cline{2-6}
 & \multirow{2}{*}{GLIF [60]} & \multirow{2}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 2 & 75.48\% \\
 & & & & 4 & 77.05\% \\ \cline{2-6}
 & \multirow{3}{*}{TEBN [10]} & \multirow{3}{*}{SNN training} & \multirow{3}{*}{ResNet19} & 2 & 75.86\% \\
 & & & & 4 & 76.13\% \\
 & & & & 6 & 76.41\% \\ \cline{2-6}
 & \multirow{4}{*}{**MPBN**} & \multirow{4}{*}{SNN training} & \multirow{2}{*}{ResNet19} & 1 & **78.71\%\(\pm 0.10\)** \\
 & & & & 2 & **79.51\%\(\pm 0.07\)** \\
 & & & ResNet20 & 2 & **70.79\%\(\pm 0.08\)** \\
 & & & VGG16 & 2 & **74.74\%\(\pm 0.11\)** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Comparison with SoTA methods on CIFAR-100.

**ImageNet.** On ImageNet, we used standard data normalization, random horizontal flipping, and cropping for data augmentation, and trained the networks for 320 epochs as in [11]. The optimizer setting is the same as on the CIFAR datasets. The results for ImageNet are presented in Tab. 4. It can be seen that the accuracy of our method is better than that of these recent SoTA methods, and is only slightly lower than SEW ResNet [11] for spiking ResNet34. However, SEW ResNet is not a typical SNN model. It adopts the activation-before-addition form of ResNet, and its blocks fire positive integer spikes. In this way, the event-driven and multiplication-addition-transform advantages of SNNs are lost.

\begin{table}
\begin{tabular}{l l l l c c}
\hline \hline
Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline
\multirow{9}{*}{ImageNet} & STBP-tdBN [62] & SNN training & ResNet34 & 6 & 63.72\% \\
 & TET [7] & SNN training & ResNet34 & 6 & 64.79\% \\
 & MS-ResNet [27] & SNN training & ResNet18 & 6 & 63.10\% \\
 & OTTT [54] & SNN training & ResNet34 & 6 & 63.10\% \\
 & Real Spike [21] & SNN training & ResNet18 & 4 & 63.68\% \\ \cline{2-6}
 & \multirow{2}{*}{SEW ResNet [11]} & \multirow{2}{*}{SNN training} & ResNet18 & 4 & 63.18\% \\
 & & & ResNet34 & 4 & 67.04\% \\ \cline{2-6}
 & \multirow{2}{*}{**MPBN**} & \multirow{2}{*}{SNN training} & ResNet18 & 4 & **63.14\%\(\pm 0.08\)** \\
 & & & ResNet34 & 4 & **64.71\%\(\pm 0.09\)** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Comparison with SoTA methods on ImageNet.
In contrast, we adopt the original ResNet, which fires standard binary spikes.

**CIFAR10-DVS.** We also adopted the neuromorphic dataset CIFAR10-DVS to verify the effectiveness of the MPBN. We split the dataset into 9K training images and 1K test images, and resize them to \(48\times 48\) for data augmentation as in [51, 21]. The learning rate is 0.01, and other settings are the same as for CIFAR-10. The results in Tab. 5 show that the MPBN also exhibits superiority on this dataset.

### Extension of the MPBN

In CNNs, the most widely used BN is channel-wised. This is because element-wised BNs are very time-consuming and cannot be folded into the weights; otherwise, the channel-wised weight-sharing mechanism would be destroyed. However, the MPBN adopts a firing-threshold-folded manner, and the firing threshold need not be the same along the channels; therefore, MPBN can use the element-wised form freely. In this way, \(V_{\mathrm{th}}\) is transformed into element-wised thresholds as follows, \[(\mathbf{\tilde{V}}_{\mathrm{th}})_{i,j,k}=\frac{(V_{\mathrm{th}}-\mathbf{\beta}_{i,j,k})\sqrt{\mathbf{\sigma}_{i,j,k}^{2}}}{\mathbf{\lambda}_{i,j,k}}+\mathbf{\mu}_{i,j,k}, \tag{13}\] where \((\mathbf{\tilde{V}}_{\mathrm{th}})_{i,j,k}\) is the transformed firing threshold of the neuron from the \(i\)-th channel at spatial position \((j,k)\). To investigate the performance of the element-wised MPBN, we also provide a comparison of the vanilla MPBN and its extension. The top-1 accuracy of the spiking ResNet20 with 4 time steps on the CIFAR datasets is shown in Tab. 6. Though both versions perform well, the element-wised MPBN is slightly better than the channel-wised MPBN. This may be because the element-wised MPBN can learn more firing threshold values, which means a richer representation ability for SNNs.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
Dataset & Method & Time step & Accuracy \\ \hline
\multirow{2}{*}{CIFAR-10} & channel-wised & 4 & 94.28\% \\
 & element-wised & 4 & 94.42\% \\ \hline
\multirow{2}{*}{CIFAR-100} & channel-wised & 4 & 72.30\% \\
 & element-wised & 4 & 72.49\% \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Comparison of the channel-wised and element-wised MPBN.

## 6 Conclusion

In the paper, we advocated adding the MPBN before the firing function to regulate the disturbed data flow again. We also provided a training-inference-decoupled re-parameterization technique to fold the trained MPBN into the firing threshold, eliminating the extra time burden induced by MPBN at inference time. Furthermore, the channel-wised and element-wised MPBN at different granularities were explored. Extensive experiments verified that the proposed MPBN can consistently achieve good performance.

## Acknowledgment

This work is supported by grants from the National Natural Science Foundation of China under contracts No.12202412 and No.12202413.
\begin{table} \begin{tabular}{l l l l c c} \hline \hline Dataset & Method & Type & Architecture & Timestep & Accuracy \\ \hline \multirow{8}{*}{CIFAR10-DVS} & Rollout [33] & Rollout & DenseNet & 10 & 66.80\% \\ & LIAF-Net [53] & Conv3D & LIAF-Net & 10 & 71.70\% \\ & LIAF-Net [53] & LIAF & LIAF-Net & 10 & 70.40\% \\ & STBP-tdBN [62] & SNN training & ResNet19 & 10 & 67.80\% \\ & RecDis-SNN [20] & SNN training & ResNet19 & 10 & 72.42\% \\ \cline{2-6} & Real Spike [21] & SNN training & ResNet19 & 10 & 72.85\% \\ & & ResNet20 & 10 & 78.00\% \\ \cline{2-6} & **MPBN** & SNN training & ResNet19 & 10 & **74.40\%\(\pm 0.20\)** \\ & & ResNet20 & 10 & **78.70\%\(\pm 0.10\)** \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison with SoTA methods on CIFAR10-DVS.
2302.01440
Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications
Deep neural networks have seen enormous success in various real-world applications. Beyond their predictions as point estimates, increasing attention has been focused on quantifying the uncertainty of their predictions. In this review, we show that the uncertainty of deep neural networks is not only important in a sense of interpretability and transparency, but also crucial in further advancing their performance, particularly in learning systems seeking robustness and efficiency. We will generalize the definition of the uncertainty of deep neural networks to any number or vector that is associated with an input or an input-label pair, and catalog existing methods on ``mining'' such uncertainty from a deep model. We will include those methods from the classic field of uncertainty quantification as well as those methods that are specific to deep neural networks. We then show a wide spectrum of applications of such generalized uncertainty in realistic learning tasks including robust learning such as noisy learning, adversarially robust learning; data-efficient learning such as semi-supervised and weakly-supervised learning; and model-efficient learning such as model compression and knowledge distillation.
Chengyu Dong
2023-02-02T22:02:33Z
http://arxiv.org/abs/2302.01440v1
# Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications

###### Abstract

Deep neural networks have seen enormous success in various real-world applications. Beyond their predictions as point estimates, increasing attention has been focused on quantifying the uncertainty of their predictions. In this review, we show that the uncertainty of deep neural networks is not only important in a sense of interpretability and transparency, but also crucial in further advancing their performance, particularly in learning systems seeking robustness and efficiency. We will generalize the definition of the uncertainty of deep neural networks to any number or vector that is associated with an input or an input-label pair, and catalog existing methods on "mining" such uncertainty from a deep model. We will include those methods from the classic field of uncertainty quantification as well as those methods that are specific to deep neural networks. We then show a wide spectrum of applications of such generalized uncertainty in realistic learning tasks including robust learning such as noisy learning, adversarially robust learning; data-efficient learning such as semi-supervised and weakly-supervised learning; and model-efficient learning such as model compression and knowledge distillation.

## 1 Introduction

Despite the vast success of deep neural networks, their decision process is hard to interpret and is known as a black box. In real-world applications, it is necessary that a decision system is not only accurate but also trustworthy, in the sense that it must know when it is likely to make errors (Guo et al., 2017). The interpretability and transparency of the decision process of deep neural networks have thus gained increasing attention, and the pivot is often a reliable uncertainty measure with which users can judge and manage the decisions. A variety of uncertainty measures, and strategies to improve them, have been developed so far. In this review, we show that the uncertainty estimates of deep neural networks are important not only in improving their trustworthiness, but also in further advancing their performance, particularly in terms of robustness and efficiency. We will first review the possible uncertainty estimates we can leverage for deep neural networks. These include the classic definition of uncertainty, for example, the maximum probability or entropy of the predictive distribution, as well as strategies to improve it, such as diverse forms of ensemble techniques. These ensemble techniques are mostly unique to deep neural networks, and either utilize specific designs in the network architecture, such as MC-dropout (Gal and Ghahramani, 2016), or utilize the distinct optimization process of deep neural networks, such as Snapshot ensemble (Huang et al., 2017). We will further investigate uncertainty estimates that are defined beyond the predictive distribution. For this, we generalize the definition of uncertainty from an estimate associated with a model prediction to any number or vector associated with a data example, where the label can either be provided or missing. We see that under such a definition, multiple intriguing properties of deep neural networks can be leveraged to define an uncertainty estimate. These include measures based on the inference dynamics of deep neural networks, such as prediction depth (Baldock et al., 2021), and measures based on the training dynamics of deep neural networks, such as learning order (Arpit et al., 2017; Hacohen et al., 2020).
Finally, we discuss the potential applications of these uncertainty estimates in various learning problems, particularly when robustness and efficiency are of major interest. We show that in realistic datasets where the labels are expensive to obtain or label noise is pervasive, reliable uncertainty estimates can be utilized to improve performance greatly. We also show that uncertainty estimates can be utilized to enhance the performance of deep neural networks under adversarial attacks (Goodfellow et al., 2015). We then move to learning problems where the computation cost is prohibitive. We show that reliable uncertainty estimates can improve both the training and inference efficiency of deep neural networks, when utilized in advanced efficient learning techniques such as knowledge distillation (Hinton et al., 2015) and adaptive inference time (Graves, 2016). This review is structured as follows. In Section 2, we introduce the necessary background, such as the design, training and inference of deep neural networks, as well as the generalized definition of uncertainty. In Section 3, we review various existing uncertainty estimates under such a definition, along with their problems and the strategies to improve them. In Section 4, we demonstrate how we can use the uncertainty estimates of deep neural networks to advance their performance in robust and efficient learning. Finally, Section 5 concludes our review and illuminates the potential opportunities in this direction.

## 2 Preliminaries

### Deep neural networks

We consider a supervised learning setting where a deep neural network \(f_{\theta}\) is defined as a function mapping from input domain \(\mathcal{X}\) to output domain \(\mathcal{Y}\), namely \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\). A deep neural network usually consists of multiple feed-forward layers, where each layer \(l\) is parameterized by a set of weights \(\theta_{l}\). We denote the function mapping of the deep neural network up to layer \(L\) as \(f_{\theta_{1:L}}\). With a slight abuse of notation, we denote the function mapping of all layers, or the entire deep neural network, as \(f_{\theta}\). Note that it is not necessarily the case that \(f_{\theta_{1:L}}(x)\in\mathcal{Y}\), namely the output of an intermediate layer can have higher or lower dimensionality than the output domain. During inference, given any unseen input-label pair \((x,y)\), where \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\), a deep neural network accepts \(x\) and produces a prediction \(f_{\theta}(x)\), where we expect \(f_{\theta}(x)=y\). Training a deep neural network usually requires a training sample, namely a set of input-label pairs \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\). The most commonly used method to train a deep neural network is empirical risk minimization. We first define a distance function in the output domain, \(l:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\), which we typically refer to as the loss function. It is desired that such a distance function satisfies \(l(y_{1},y_{2})\geq 0\) for any \(y_{1},y_{2}\in\mathcal{Y}\) and \(l(y,y)=0\) for any \(y\in\mathcal{Y}\). Empirical risk minimization can be defined as \[\theta^{*}=\arg\min_{\theta}\sum_{(x,y)\in\mathcal{D}}l(f_{\theta}(x),y), \tag{1}\] namely we seek an optimal set of model weights that minimizes the distance between the network outputs and the labels for all training examples.
To solve such a minimization problem, the typically used optimization method is gradient descent with multiple updates. For a total of \(M\) updates, we repeatedly calculate the gradient of each weight with respect to the minimization objective and update the weights in the opposite direction of the gradient. In its simplest form, the update rule can be expressed as \[\theta_{t+1}=\theta_{t}-\alpha\nabla_{\theta}\sum_{(x,y)\in\mathcal{D}}l(f_{\theta_{t}}(x),y), \tag{2}\] where \(\alpha\) is a scalar and is typically referred to as the learning rate. Here we have denoted the network weights after \(t\) updates as \(\theta_{t}\).

### Definition of the generalized uncertainty

We define the uncertainty of a deep neural network as a function that maps a data example to any real number or vector, namely \(c_{\theta}:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}^{K}\), where \(c_{\theta}\) means that such a function mapping is defined by the network \(f_{\theta}\). Note that sometimes such a function mapping can take the input only, with the label missing, in which case we denote it as \(c_{\theta}:\mathcal{X}\rightarrow\mathbb{R}^{K}\), with a slight abuse of notation. We note that such a definition can cover multiple classical definitions of uncertainty. For example, in a multi-class classification setting where each input is associated with a label selected from a set of \(K\) classes, a deep neural network's confidence in its prediction is simply defined as the maximum probability mass in its output vector, namely \(c_{\theta}(x)=\max_{k}f_{\theta}(x)[k]\), where \(f_{\theta}(x)[k]\) denotes the \(k\)-th entry of the prediction vector \(f_{\theta}(x)\). Here we have in fact interpreted the prediction vector in a probabilistic sense. Let \(\mathbf{1}_{k=k^{\prime}}\in\mathcal{Y}\subseteq\mathbb{R}^{K}\) denote the vector where only the \(k^{\prime}\)-th entry is \(1\) and others are \(0\); we then have \(\Omega=\{\mathbf{1}_{k=1},\mathbf{1}_{k=2},\cdots,\mathbf{1}_{k=K}\}\) defined as the sample space, with \(f_{\theta}(x)[k^{\prime}]\) denoting the probability measure of the singleton set \(\{\mathbf{1}_{k=k^{\prime}}\}\). One may recognize that this is the standard definition of a categorical distribution. Note that in order to use the prediction vector as a valid probability measure, it is desired that the prediction vector lies in a \((K-1)\)-dimensional simplex, namely \(f_{\theta}(x)\in\Delta_{K-1}:=\{y\in[0,1]^{K}\mid\|y\|_{1}=1\}\). We will review more typical definitions of the uncertainty of deep neural networks in later sections.

## 3 Uncertainty Estimation of Deep Neural Networks

Standard deep neural networks are usually deterministic inference systems, namely the prediction on a given input will not vary for multiple forward passes. It is possible to inject inherent randomness into a deep model's inference process by specifying a prior distribution over its weights \(\theta\). By Bayes' theorem, the uncertainty of a prediction can then be captured by the posterior predictive distribution, obtained by marginalizing the prediction over the posterior of the model weights. Such a deep neural network variation is called a Bayesian Neural Network (BNN) (Denker et al., 1987; Tishby et al., 1989; Buntine & Weigend, 1991). It is rather obvious that the inference of such neural networks would be computationally expensive, as multiple samplings of the model weights are required for one single inference.
There have been a variety of attempts to make BNNs computationally tractable, such as Laplace approximation (Bridle et al., 2011) and Markov chain Monte Carlo (MCMC) methods (Neal, 1995). In this review, we will focus only on the uncertainty of standard deterministic deep neural networks, while skipping recent works on BNNs. The reasons are twofold. First, in practice, BNNs are still difficult to implement and computationally slow (Lakshminarayanan et al., 2017). Second and more importantly, in this review, we are not seeking the role of uncertainty in the transparency of the model decision process, but rather in aiding the model training. For example, a typical application of uncertainty is to select high-quality unlabeled data that is most relevant to the current task from a large corpus. We will discuss more of such applications in the following sections.

### Classic uncertainty

In Section 2.2, we have mentioned the classic way to interpret the predictive vector of a deep neural network as a probability distribution. In practice, to create a valid probabilistic vector, we often need to apply a non-linear activation function on top of the network outputs. Denote the original output vector of a network as \(z\in\mathbb{R}^{K}\), and \(\sigma:\mathbb{R}^{K}\rightarrow[0,1]^{K}\) as a non-linear function. We will then make \(f_{\theta}(x)=\sigma(z)\) for an input \(x\), where \(z\) is often referred to as the logits. For example, in a binary classification task, we will use the sigmoid function, namely \[\sigma(z)=\frac{1}{1+\exp(-z)},\] and in a multi-class classification task, we will use the softmax function, namely \[\sigma(z)[k]=\frac{\exp(z[k])}{\sum_{k^{\prime}=1}^{K}\exp(z[k^{\prime}])}.\] A valid probabilistic vector can be directly interpreted as the uncertainty of the model prediction. One can also transform such an uncertainty vector into a number, which is particularly handy for obtaining the ranking of a set of data examples based on an uncertainty measure. Various transformations are used in the literature, typically including maximum probability, entropy, and margin, as computed in the sketch after this list.

* Maximum probability: \(c_{\theta}(x)=\max_{k}f_{\theta}(x)[k]\),
* Entropy: \(c_{\theta}(x)=-f_{\theta}(x)\cdot\log f_{\theta}(x)\),
* Margin: \(c_{\theta}(x)=\max_{k}f_{\theta}(x)[k]-\max_{k\neq k^{\prime}}f_{\theta}(x)[k]\), where \(k^{\prime}=\arg\max_{k}f_{\theta}(x)[k]\).
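For concreteness, the three transformations above can be computed from raw logits in a few lines; this is an illustrative sketch, and `uncertainty_scores` is a hypothetical helper rather than a function from the cited literature.

```python
import torch
import torch.nn.functional as F

def uncertainty_scores(logits):
    """Reduce a predictive distribution to the three classic scalar scores."""
    p = F.softmax(logits, dim=-1)                  # predictive distribution f_theta(x)
    top2 = p.topk(2, dim=-1).values
    return {
        "max_prob": p.max(dim=-1).values,
        "entropy": -(p * p.clamp_min(1e-12).log()).sum(dim=-1),
        "margin": top2[..., 0] - top2[..., 1],     # top-1 minus top-2 probability
    }

for name, score in uncertainty_scores(torch.randn(4, 10)).items():
    print(name, score)
```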
### Evaluation of classic uncertainty

The desired properties of an uncertainty estimate may vary significantly across the tasks of interest, sometimes even orthogonally. Consequently, a variety of uncertainty evaluation metrics exist in the literature. Here we roughly group them into three types based on the property desired.

**Recovering the true distribution.** Assume the data examples are sampled from a joint distribution \(P_{X,Y}(x,y)=P_{X}(x)P_{Y|X}(y|x)\) defined over \(\mathcal{X}\times\mathcal{Y}\). Given any input \(x\), it is desired that the predictive uncertainty, namely the probabilistic vector \(f_{\theta}(x)\) produced by a deep neural network, can recover the true conditional distribution \(P_{Y|X}(y|x)\). Note that such recovery is a stricter requirement than accuracy (Lakshminarayanan et al., 2017). A network's prediction can be very accurate, yet significantly deviate from the true conditional distribution. A typical example is a network that always outputs the one-hot label on an input \(x\). To evaluate the quality of the recovery, one can use commonly seen loss functions such as the negative log-likelihood (NLL) loss and the mean squared error (MSE). Interestingly, it is known in meteorology that these two loss functions have good properties (MSE is also known as the Brier score in meteorology): they are proper scoring rules (Gneiting and Raftery, 2007). A proper scoring rule is a loss function where \(l(p,q)\geq l(q,q)\) with equality if and only if \(p=q\). This means that minimizing the distance between the predictive uncertainty and one-hot labels can in fact recover the true conditional distribution. To see that for NLL, we can apply the Gibbs inequality, namely \[-\mathbb{E}_{(x,y)\sim P(x,y)}\mathbf{1}_{y}\cdot\log f_{\theta}(x)=-\mathbb{E}_{x\sim P(x)}P(y|x)\cdot\log f_{\theta}(x)\geq-\mathbb{E}_{x\sim P(x)}P(y|x)\cdot\log P(y|x), \tag{3}\] where equality holds if and only if \(f_{\theta}(x)=P(y|x)\). To see that for MSE, we can decompose the expected MSE and find that \[\mathbb{E}_{(x,y)\sim P(x,y)}\|\mathbf{1}_{y}-f_{\theta}(x)\|_{2}^{2}=\mathbb{E}_{(x,y)\sim P(x,y)}\|\mathbf{1}_{y}-P(y|x)\|_{2}^{2}+\mathbb{E}_{x\sim P(x)}\|P(y|x)-f_{\theta}(x)\|_{2}^{2}\geq\mathbb{E}_{(x,y)\sim P(x,y)}\|\mathbf{1}_{y}-P(y|x)\|_{2}^{2}, \tag{4}\] since the cross term vanishes in expectation over \(y|x\); equality holds if and only if \(f_{\theta}(x)=P(y|x)\). Therefore, in practice when there are only one-hot labels available, we can still use NLL or MSE to quantify the quality of the uncertainty in terms of its recovery of the true conditional distribution.

**Calibration.** Loosely speaking, calibration of deep neural networks implies that the network uncertainty should reflect the probability that it makes errors (Guo et al., 2017). Here we are only interested in the network's argmax prediction, which we denote as \(y_{\theta}=\arg\max_{k}f_{\theta}(x)[k]\) for simplicity. We are also only interested in the uncertainty as a real number between \(0\) and \(1\), which reflects the probability of the pointwise prediction \(y_{\theta}\) being correct. Usually, this is simply the maximum probability, namely \(c_{\theta}(x)=\max_{k}f_{\theta}(x)[k]\). Formally, calibration then desires that (Guo et al., 2017) \[P(y_{\theta}=y\mid c_{\theta}(x)=p)=p,\ \forall\ p\in[0,1]. \tag{5}\] Here the left-hand side denotes the accuracy of the network prediction on all examples where the network reports uncertainty of \(p\), and the right-hand side is the value of the uncertainty. Therefore, calibration means that the uncertainty of a prediction should genuinely match the probability of correctness of that prediction. It may help the understanding of calibration if we interpret the probability \(P\) as the limit of a frequency. Therefore, an alternative way to formalize calibration is (Kuleshov et al., 2018) \[\lim_{N\rightarrow\infty}\frac{\sum_{i=1}^{N}1(y_{\theta,i}=y_{i})\cdot 1(c_{\theta}(x_{i})=p)}{\sum_{i=1}^{N}1(c_{\theta}(x_{i})=p)}=p, \tag{6}\] where \(1(\cdot)\) is the indicator function. A legacy issue is that calibration of deep neural networks is in fact a weaker requirement on the model uncertainty than the definition of calibration in standard statistical terminology (Zadrozny and Elkan, 2001), which can be denoted as \[P(y\in\cdot|f_{\theta}(x)=\mathbf{p})=\mathbf{p},\ \forall\ \mathbf{p}\in\Delta_{K-1}, \tag{7}\] namely for any predictive distribution of the network, the label should distribute exactly as that predictive distribution.
Note that, although a true model producing the true conditional distribution is certainly calibrated, a (statistically) calibrated model does not necessarily have to recover the true conditional distribution, and a model that is not close to the true model can nevertheless be calibrated (Vaicenavicius et al., 2019). To measure the uncertainty quality in terms of calibration, one can use the difference in expectation between the accuracy and the uncertainty, namely \[\mathbb{E}_{p}\left[|P(y_{\theta}=y\mid c_{\theta}(x)=p)-p|\right]. \tag{8}\] In practice, one can report the Expected Calibration Error (ECE) (Naeini et al., 2015), which approximates the expectation by binning. Specifically, the predictions on all test examples are partitioned into several equally-spaced bins, where predictions with similar uncertainty are assigned to the same bin. We can then approximate \(P(y_{\theta}=y\mid c_{\theta}(x)=p)\) in Equation (8) by the fraction of correct predictions in each bin, and approximate \(p\) in Equation (8) by the average uncertainty in this bin. When the calibration of all-class predictions instead of the argmax prediction is of interest, one can also use the Static Calibration Error (SCE) (Nixon et al., 2019) to quantify the uncertainty quality. All-class calibration measures are often more effective in assessing the calibration error (Nixon et al., 2019). Other variations include the adaptive Expected Calibration Error (aECE) (Nixon et al., 2019), which partitions the predictions into several bins with an equal number of predictions in each bin. This can be more robust to the number of bins (Patel et al., 2021), since the uncertainty distribution is often far from uniform (Guo et al., 2017), and the number of bins is critical in evaluating the calibration genuinely (Kumar et al., 2019).
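The binning estimate of Equation (8) described above is straightforward to implement; below is a sketch with equally-spaced bins (the `n_bins=15` default is a common but arbitrary choice).

```python
import numpy as np

def expected_calibration_error(confidence, correct, n_bins=15):
    """Binned approximation of Equation (8).

    confidence: (N,) maximum-probability score of each prediction
    correct:    (N,) 1 if the argmax prediction was correct, else 0
    """
    confidence = np.asarray(confidence)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidence[in_bin].mean())
            ece += in_bin.mean() * gap            # bins are weighted by their size
    return ece

conf = np.random.uniform(0.5, 1.0, 10000)
corr = np.random.uniform(size=10000) < conf       # a roughly calibrated toy model
print(expected_calibration_error(conf, corr))     # close to 0
```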
**Ordinal ranking.** In many practical settings, the absolute distance between model uncertainty and the probability of correctness is often not necessary. Instead, the ranking of a set of predictions based on the uncertainty measure is more important. The major objective here is to distinguish correct from incorrect predictions. Therefore it is desired that correct predictions have higher confidence estimates than incorrect predictions. Formally, a perfect ordinal ranking means that (Moon et al., 2020) \[c_{\theta}(x_{i})\leq c_{\theta}(x_{j})\Longleftrightarrow P(y_{\theta,i}=y_{i}|x_{i})\leq P(y_{\theta,j}=y_{j}|x_{j}). \tag{9}\] To measure the uncertainty quality in terms of ordinal ranking, one can specify an uncertainty threshold, such that predictions with uncertainty above this threshold are regarded as correct predictions. However, this often raises the problem of a trade-off between false negatives and false positives (Hendrycks and Gimpel, 2017). For a threshold-free evaluation, one can use metrics such as the Area Under the Receiver Operating Characteristic curve (AUROC) (Davis and Goadrich, 2006), the Area Under the Precision-Recall curve (AUPR) (Manning and Schutze, 2002) and the Area under the Risk-Coverage curve (AURC) (Geifman et al., 2019). The underlying idea of all these measures is to aggregate the accuracy under all possible thresholdings of the set of predictions. Ordinal ranking has a wide variety of applications in practice. For example, uncertainty with accurate ranking can effectively identify those examples that come from a distribution substantially different from the distribution that the model is trained on. Such examples are known as out-of-distribution (OOD) examples. Ordinal ranking is also important in active learning (Settles, 2009), where the goal is to build a model knowing which examples should be labeled to improve its performance. Quality uncertainty ranking can thus greatly reduce human labeling efforts. Ordinal ranking is also crucial in selective classification (Geifman and El-Yaniv, 2017) or failure prediction (Hendrycks and Gimpel, 2017; Hecker et al., 2018), where the goal is to reject some predictions at test time that are likely to be incorrect. The rejected examples can be passed on to backup inference systems or humans, such that the overall prediction accuracy can be greatly improved. We will discuss more applications in later sections and frequently revisit the ranking measure of uncertainty.

### Problems and improvement of classic uncertainty

It is well-known that the classic uncertainty of deep neural networks may have major drawbacks. It may be poorly calibrated (Guo et al., 2017), yield inconsistent rankings (Corbiere et al., 2019), and be vulnerable to perturbations such as adversarial attacks (Szegedy et al., 2014; Goodfellow et al., 2015) and dataset shifts (Hendrycks and Gimpel, 2017; Ovadia et al., 2019). To overcome these drawbacks of classic uncertainty, a variety of methods have been proposed, which can be roughly divided into two veins. The first vein is called post-processing, namely the uncertainty is rectified after a model is trained. In this case, a validation set is often needed. The second vein can be broadly referred to as regularization, namely the training process is modified to take into consideration not only the accuracy but also the uncertainty.

**Post-processing.** The very first and also the simplest post-processing method for deep neural networks is probably temperature scaling (Guo et al., 2017). The idea is to insert a hyperparameter called the temperature in the softmax function of a trained model and fine-tune it on a validation set. The softmax function now becomes \[\sigma(z;T)[k]=\frac{\exp(z[k]/T)}{\sum_{k^{\prime}=1}^{K}\exp(z[k^{\prime}]/T)}. \tag{10}\] When \(T\) is larger, the new predictive distribution becomes softer (higher entropy), which may alleviate the typical problem of deep neural networks' uncertainty, namely being over-confident and inappropriately close to the one-hot label. Temperature scaling has been shown to be quite effective in improving the uncertainty, particularly in terms of calibration, outperforming more sophisticated calibration methods such as histogram binning (Zadrozny and Elkan, 2001), Isotonic regression (Zadrozny and Elkan, 2002) and Platt scaling (Platt, 1999). On top of its simplicity and effectiveness, another reason for the popularity of temperature scaling is probably that it preserves the model's accuracy while improving the uncertainty, because scaling the logits by a scalar will not change the argmax of the predictive vector. Note that despite its success in calibration, temperature scaling cannot improve the ordinal ranking of the model uncertainty. The reason is similar: the logit scaling is universal to all examples.
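Fitting the single temperature \(T\) in Equation (10) on held-out validation logits is a one-dimensional optimization; below is a sketch using PyTorch's L-BFGS, an illustrative recipe rather than a reference implementation.

```python
import torch

def fit_temperature(logits, labels, max_iter=100):
    """Fit the temperature T of Equation (10) on validation logits by minimizing NLL."""
    log_t = torch.zeros(1, requires_grad=True)    # optimize log T so that T stays positive
    optimizer = torch.optim.LBFGS([log_t], max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

true_logits = torch.randn(2000, 10)
labels = torch.distributions.Categorical(logits=true_logits).sample()
over_confident_logits = 3.0 * true_logits         # an artificially over-confident model
print(fit_temperature(over_confident_logits, labels))  # roughly 3, undoing the scaling
```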
Many variations of temperature scaling have been proposed to further enhance its effectiveness. For example, one can perform the temperature scaling for each class in a one-vs-all manner, despite sacrificing the accuracy since the argmax label is no longer preserved (Kull et al., 2019). Multiple temperature scaling methods can be ensembled to achieve better uncertainty quality (Zhang et al., 2020). One can also combine temperature scaling with other calibration methods such as histogram binning to get a theoretical guarantee on the calibration error (Kumar et al., 2019). Liang et al. (2018) observed that adding a small adversarial perturbation to the input after temperature scaling can further improve the ranking of the uncertainty and thus better separate in- and out-of-distribution examples. When there are data from multiple domains available, one can learn and predict the most proper temperature when encountering an unseen example that is likely shifted from existing distributions (Yu et al., 2022).

**Regularization.** The standard deep neural network training protocol may already have some specific designs that favor the learning of uncertainty. For example, as also mentioned before, the typically used loss functions such as NLL and MSE are in fact proper scoring rules, which can recover the true conditional distribution when minimized (Lakshminarayanan et al., 2017). The uncertainty learned by standard network training may be particularly effective in terms of ranking, and can serve as a strong baseline for detecting misclassified and OOD examples (Hendrycks and Gimpel, 2017). Several simple regularization methods commonly used for improving performance have been shown to effectively improve the model uncertainty as well. These include early stopping or the more advanced instance-wise early stopping (Geifman et al., 2019), label smoothing (Muller et al., 2019), focal loss (Mukhoti et al., 2020), dropout (Srivastava et al., 2014) and data augmentation methods such as mixup (Thulasidasan et al., 2019), which mixes pairs of examples and labels with a coefficient \(\lambda\sim\mathrm{Beta}(\alpha,\alpha)\), and Augmix (Hendrycks et al., 2020).
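The mixup operation just mentioned is only a few lines; the following is a hypothetical helper sketch, with `alpha` controlling the Beta distribution.

```python
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    """Mix a batch with a shuffled copy of itself using lambda ~ Beta(alpha, alpha)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

x = torch.randn(32, 3, 8, 8)
y = torch.eye(10)[torch.randint(0, 10, (32,))]    # one-hot labels
x_mix, y_mix = mixup_batch(x, y)
```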
More advanced methods for improving performance are also demonstrated to improve uncertainty. Adversarial training (Goodfellow et al., 2015; Kurakin et al., 2017; Madry et al., 2018), originally designed to improve the robustness of deep neural networks against adversarial examples (Goodfellow et al., 2015), has been shown to improve the uncertainty (Lakshminarayanan et al., 2017). Knowledge distillation can also be viewed as a regularization method with the aid of auxiliary models, and has been shown to improve uncertainty in diverse forms, such as self-distillation (Kim et al., 2021), dropout distillation (Bulo et al., 2016; Gurau et al., 2018) and ensemble distillation (Mariet et al., 2020). Recent years have seen a boom in pre-training, which trains the model on a large corpus without any labels, or in a so-called self-supervised scheme. Pre-training can benefit the learning of "universal representations" that transfer to multiple domains (Rebuffi et al., 2017), and can be viewed as a better initialization of a deep neural network. Therefore, it is not surprising that pre-training can improve the model uncertainty and robustness, even when the second-stage learning (or fine-tuning) happens on a dataset sufficiently large that pre-training fails to improve the performance significantly compared to directly training on it (Hendrycks et al., 2019). There are also a variety of regularization methods specifically designed for uncertainty learning, such as penalizing low-entropy predictive distributions in the training objective (Pereyra et al., 2017), interpolating the predictive distribution and the one-hot label using an uncertainty score (Devries and Taylor, 2018), incorporating OOD examples into training and enforcing the model to produce low-confidence predictions on them (Lee et al., 2018), and a variance-weighted variation of label smoothing (Seo et al., 2019). Moon et al. (2020) shows that penalizing the ranking difference between the uncertainty yielded by predictions at one training step and the moving-averaged predictions at multiple training steps can improve uncertainty learning, and is particularly effective for ordinal ranking. Maddox et al. (2019) shows that the mean and variance of network weights across multiple training steps can serve as a good prior for their uncertainty distributions, thus building an efficient approximation of BNNs. One can also utilize an alternative network or alternative network modules to specialize in uncertainty learning while leaving the original network intact (Corbiere et al., 2019; Geifman and El-Yaniv, 2019). Note that these regularization methods are not necessarily orthogonal to each other, and may even hurt the uncertainty or accuracy when combined. For example, it is well-known that label smoothing and knowledge distillation are not compatible with each other (Muller et al., 2019). And ensembling (see Section 3.4) combined with data augmentation methods such as mixup can also hurt the model calibration (Wen et al., 2021).

### Ensemble uncertainty

In this section, we specifically focus on those methods that achieve better uncertainty quality by aggregating multiple classic uncertainty estimates. In essence, any randomness or perturbation in the input, sampling, weights, and optimization of deep neural networks during training or inference can be utilized to generate multiple uncertainty estimates for aggregation (Renda et al., 2019). We thus roughly partition the diverse ensembling methods into three families based on the origins of such randomness or perturbation.

**Model ensemble.** Probably the most well-known uncertainty ensemble method is Monte-Carlo Dropout (MC-dropout) (Gal and Ghahramani, 2016). The idea is to utilize dropout to randomize the inference process of a network and average multiple stochastic predictions on one example to generate a better uncertainty estimate. Despite its simplicity, MC-dropout is shown to be akin to the Gaussian process and thus can be an efficient approximation of BNNs (Gal and Ghahramani, 2016), and has been widely used in practice. Bachman et al. (2014) generalized such an idea to using any network modules to randomize the inference process.
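MC-dropout, as described above, only requires keeping dropout active at test time and averaging the stochastic predictions; a minimal sketch follows (note that, if the model contained batch-normalization layers, those should instead be kept in evaluation mode).

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Average softmax outputs over stochastic forward passes with dropout active."""
    model.train()   # keeps dropout sampling masks; any batch-norm layers should
                    # instead be switched to eval mode in a real model
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0)

model = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.5), torch.nn.Linear(64, 10),
)
print(mc_dropout_predict(model, torch.randn(4, 16)).shape)  # torch.Size([4, 10])
```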
Another well-known ensemble method is typically referred to as deep ensemble (Lakshminarayanan et al., 2017). Here the randomness originates from the initialization of the deep neural networks, which is typically sampled from Gaussian distributions. One can thus train multiple deep neural networks on the same training set, and average their predictive distributions to obtain a better uncertainty estimate. It is possible to further promote the diversity of a deep ensemble by combining the randomness in the data sampling process, e.g., bagging and boosting (Livieris et al., 2021). But in general cases, such a combination may not necessarily improve the uncertainty estimates and sometimes hurts the performance (Lee et al., 2015; Lakshminarayanan et al., 2017). Deep ensemble is widely used in practice, but may suffer from both increased training cost and inference cost. To reduce the inference cost, one can distill the knowledge of the ensemble into a single and small network (Hinton et al., 2015; Mariet et al., 2020; Nam et al., 2021). To reduce the training cost, instead of training multiple networks, one can also train only multiple modules and share the other modules. For example, Kim et al. (2018) shows that ensembling multiple attention modules trained with different attention masks can improve the uncertainty for image retrieval. Wen et al. (2020) generalizes such an idea by composing each weight matrix in a network based on the Hadamard product between a shared matrix among all ensemble members and a low-cost matrix unique to each member.

**Input ensemble.** We have already mentioned the idea of using bagging or boosting to promote diversity in an ensemble. This on its own can in fact be viewed as input ensemble, namely the randomness originates from either the sampling of the inputs, which can be utilized in training, or perturbation and data augmentation of the inputs, which can be utilized in both training and inference. For example, during training, Nanni et al. (2019) utilizes data augmentation to build an ensemble for bioimage classification, while Guo & Gould (2015) utilizes data augmentation for deep ensemble in object detection. During inference, aggregating by data augmentation is even more widely used, particularly in medical image processing (Wang et al., 2018, 2019). Such a technique is also commonly referred to as test-time augmentation (Ayhan & Berens, 2018). Empirical analyses found that test-time augmentation is quite sensitive to the data augmentation being used (Shanmugam et al., 2020). Methods are thus proposed to learn appropriate data augmentation, for example, one that customizes for each test input individually (Kim et al., 2020).

**Optimization ensemble.** The randomness of the optimization of a deep neural network can also be utilized to generate an ensemble. Snapshot ensemble (Huang et al., 2017) utilizes the fact that the weight landscape of a deep neural network is non-convex and the weights may traverse multiple local minima during optimization. They thus propose repeatedly decreasing and increasing the learning rate to let the optimization converge multiple times. Each converged network checkpoint can then serve as a member of the ensemble. Yang & Wang (2020) adapts such an idea to adaptive learning rate schedulers. Fast Geometric Ensembling (FGE) (Garipov et al., 2018) and Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) share a similar idea with snapshot ensemble, albeit the specific strategy to sample checkpoints along the training trajectory may differ. Hyperparameter ensemble (Wenzel et al., 2020) is a general method for optimization ensemble which uses AutoML to search for multiple hyperparameters for ensembling.
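Snapshot ensembling, described above, reduces to a cyclic learning-rate schedule with a checkpoint saved at the end of each cycle; the following is a simplified sketch with illustrative hyperparameter values, not a reference implementation.

```python
import math
import torch

def snapshot_ensemble_train(model, loader, n_cycles=5, epochs_per_cycle=10, lr_max=0.1):
    """Train with a cyclic cosine learning rate; save a snapshot after each cycle."""
    opt = torch.optim.SGD(model.parameters(), lr=lr_max, momentum=0.9)
    snapshots = []
    for _ in range(n_cycles):
        for epoch in range(epochs_per_cycle):
            lr = 0.5 * lr_max * (1 + math.cos(math.pi * epoch / epochs_per_cycle))
            for group in opt.param_groups:
                group["lr"] = lr                  # anneal within the cycle, then restart
            for x, y in loader:
                opt.zero_grad()
                torch.nn.functional.cross_entropy(model(x), y).backward()
                opt.step()
        # the network has (approximately) reached a local minimum: keep a copy
        snapshots.append({k: v.clone() for k, v in model.state_dict().items()})
    return snapshots

model = torch.nn.Sequential(torch.nn.Linear(8, 3))
loader = [(torch.randn(16, 8), torch.randint(0, 3, (16,)))]  # stand-in for a DataLoader
print(len(snapshot_ensemble_train(model, loader, n_cycles=3, epochs_per_cycle=2)))  # 3
```

At inference, the softmax outputs of the saved snapshots are averaged, exactly as in a deep ensemble.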
### "Deep" uncertainty

We now move from the classic definition of uncertainty, namely the predictive distribution and its variants, to a more generalized definition of uncertainty, especially measures that are unique to the training or inference process of deep neural networks. We roughly partition these uncertainty measures into two groups, namely those based on inference dynamics, and those based on training dynamics.

**Inference dynamics.** During the inference of deep neural networks, typically only the outputs are desired. However, it is possible to mine other forms of measures associated with the network inference process as uncertainty. For example, Oberdiek et al. (2018) and Lee & AlRegib (2020) measure the uncertainty of a prediction using gradient information. When predicting on a single example, the gradients of the network weights with respect to the loss are calculated, where the label is simply the predicted class. The uncertainty can then be defined as the norm of the gradients. When the training set or some training examples are available during inference, one can measure the uncertainty by quantifying the "similarity" between the test example and the training examples (Raghu et al., 2019; Ramalho & Corbalan, 2020). In essence, such methods build a density estimation in the input space and thus can reject those test inputs that are likely to be off-distribution. van Amersfoort et al. (2020) adapts this idea by maintaining only a set of representative training examples (or their feature vectors) called centroids. The uncertainty is then quantified as the distance between the test input and the centroid that is closest to it. Note that such uncertainty measures dispense with the need to predict the label. Alternatively, Jiang et al. (2018) defines the uncertainty as the ratio between the distance from the test input to the closest centroid and the distance from the test input to the centroid associated with the class predicted by the model, which can be more robust. These methods may be inherently connected to few-shot learning methods such as ProtoNet (Snell et al., 2017), where the training set is also directly used to help the inference. Finally, there exist some intriguing behaviors of the inference process of deep neural networks that can be leveraged to quantify the uncertainty. Baldock et al. (2021) observed that through the forward propagation in a deep neural network with multiple layers, the prediction on some examples may already be determined after only a few layers. Here the intermediate predictions are made by k-nearest-neighbor classifiers on the hidden representations. They thus define an uncertainty measure called Prediction Depth (PD). It is shown that the prediction depth may be closely correlated with the margin of the final predictive distribution, and examples with large prediction depth may be more difficult. Second, it is known that the inference of deep neural networks is vulnerable to adversarial perturbation. It is observed that the predictions on some examples are more resistant to adversarial perturbation. Therefore, one can define the smallest perturbation size required to change the model's prediction as an uncertainty measure (Carlini et al., 2019). Such a metric, typically referred to as the minimum adversarial perturbation (Carlini and Wagner, 2017) or the adversarial input margin (Baldock et al., 2021), may also be closely correlated with the uncertainty defined on the predictive distribution.
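Among the inference-dynamics measures above, the gradient-based one is the simplest to sketch; the helper below is illustrative, where larger norms indicate less confident predictions.

```python
import torch

def gradient_norm_uncertainty(model, x):
    """Norm of the loss gradient w.r.t. the weights, using the model's own
    prediction as the label (in the spirit of the gradient-based measures above)."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))
    pseudo_label = logits.argmax(dim=-1)          # the predicted class serves as label
    torch.nn.functional.cross_entropy(logits, pseudo_label).backward()
    squared = sum(p.grad.pow(2).sum() for p in model.parameters() if p.grad is not None)
    return squared.sqrt().item()

model = torch.nn.Sequential(torch.nn.Linear(16, 10))
print(gradient_norm_uncertainty(model, torch.randn(16)))
```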
**Training dynamics.** Deep neural networks may exhibit even more intriguing behaviors during training. It is well-known that deep neural networks can perfectly fit even pure noise (Zhang et al., 2017), which raises the question of whether deep networks simply "memorize" the training set even on real datasets. However, through a careful investigation of the predictions on individual training examples, Arpit et al. (2017) finds that data examples are not learned at the same pace during training. Real data examples are learned first, while random data, with either random inputs or random labels, are learned late. Further, they also found that easy data examples are learned earlier than difficult ones. These observations demonstrate that deep neural networks are not simply memorizing data, since they appear to be aware of the content and semantics. Hacohen et al. (2020) further shows that such a learning order of training examples is consistent across different random initializations of a network and different model architectures. Even more intriguingly, they find that such a consistent learning order is not observed for non-parametric classifiers such as AdaBoost. They also observed that when trained on synthetic datasets where the images are different rotations or colorizations of Gabor patches, such a consistent learning order disappears as well. They thus hypothesize that this intriguing behavior may originate from the interplay between deep neural networks and natural datasets with recognizable patterns and semantics. Toneva et al. (2019) observed that certain training examples are frequently forgotten during training, meaning that they can first be predicted correctly, then incorrectly. The frequency of such forgetting events is shared across different neural architectures. When removing the least forgettable examples from training, model performance can be largely maintained. Nevertheless, Dong et al. (2021) shows that an opposite trend exists in adversarial training, where the most forgettable examples\({}^{2}\) may be removed without degrading performance, and sometimes even improving it. The variance of the gradient of a data example during training is also shown to be strongly correlated with its difficulty (Agarwal & Hooker, 2022).

Footnote 2: Here a correct prediction is defined under adversarial perturbation.
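The forgetting-event statistic admits a very small implementation; the sketch below is a simplification of the bookkeeping in Toneva et al. (2019), with all names being our own: it counts, per training example, how often a previously correct prediction becomes incorrect across epochs.

```python
import numpy as np

class ForgettingTracker:
    """Count forgetting events: transitions from a correct prediction to an
    incorrect one on individual training examples across training epochs."""
    def __init__(self, n_examples):
        self.prev_correct = np.zeros(n_examples, dtype=bool)
        self.forget_counts = np.zeros(n_examples, dtype=int)

    def update(self, example_ids, preds, labels):
        """Call once per epoch with the predictions on the listed examples."""
        correct = (preds == labels)
        # A forgetting event: correct at the previous check, incorrect now.
        forgotten = self.prev_correct[example_ids] & ~correct
        self.forget_counts[example_ids] += forgotten.astype(int)
        self.prev_correct[example_ids] = correct

# After training, examples with forget_counts == 0 are "unforgettable" and,
# per Toneva et al. (2019), can often be removed with little performance cost.
```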
## 4 Utilize Uncertainty for Better Performance

Starting from this section, we demonstrate that the uncertainty of deep neural networks can be utilized to improve the robustness and efficiency of learning systems in a variety of realistic applications.

### 4.1 Uncertainty for robust learning

We first focus on the application of uncertainty in robust learning. Here we skip the detection of OOD examples, as it is a standard application of model uncertainty. Instead, we discuss how to utilize uncertainty in learning with noisy labels and in adversarially robust learning.

**Learning with noisy labels.** One straightforward idea for learning with noisy labels is to identify those labels in the training data that are likely to be incorrect. This naturally calls for uncertainty estimates, and this approach is often referred to as Confident Learning (CL) (Northcutt et al., 2021) in the noisy-label regime. Classic uncertainty can, of course, be utilized to identify noisy labels, but may be inferior since sufficiently trained deep neural networks can memorize the labels and thus be over-confident. To combat this, one can simply perform early stopping to obtain better uncertainty estimates (Liu et al., 2020). This implicitly leverages the training dynamics of deep neural networks, namely that noisy data will be learned late during training. By further combining the observation of prediction depth, namely that noisy data will be learned in later layers of a deep neural network, one can early stop different parts of the network at different training checkpoints. Specifically, early layers are trained first, and later layers are then progressively trained for a few epochs while the early layers are frozen (Bai et al., 2021). Xia et al. (2021) further exploits this idea by dividing all network weights, based on gradient norm, into those that are important for fitting clean labels and those that are important for fitting noisy labels. The former weights are updated regularly while the latter are simply penalized with weight decay. This can be viewed as early stopping the training of the network in a parameter-wise manner. Han et al. (2020) also considers suppressing the learning on data examples that are likely to have noisy labels during training, to avoid memorizing them.

Instead of seeking better output uncertainty estimates, several methods directly utilize the uncertainty yielded by the training dynamics of deep neural networks as a metric to identify noisy labels. For example, Anonymous (2023) proposes using Time-Consistency Prediction (TCP) to select clean data, which is simply the stability of the prediction on an example throughout training. Slightly differently, Pleiss et al. (2020) proposes to use the margin of an example averaged over training to select clean data. On top of detecting noisy labels, uncertainty estimates are also important in more advanced and sophisticated noisy-label learning methods. For example, Co-teaching (Han et al., 2018) trains two networks simultaneously, where one network is trained on the potentially clean labels selected by its peer network in a mini-batch. A reasonably good uncertainty estimate for noisy labels is necessary here. DivideMix (Li et al., 2020) further develops this idea by using one network to divide the training set into a clean partition and a noisy partition. The peer network is trained on the clean partition, along with the noisy partition without labels in a semi-supervised manner. DivideMix has achieved state-of-the-art results on multiple noisy-label learning benchmarks.

**Adversarially robust learning.** Adversarial training (Goodfellow et al., 2015; Madry et al., 2018) is so far the most effective way to enhance the robustness of deep neural networks against adversarial examples. However, the robust accuracy achieved on small benchmark datasets such as CIFAR-10 (Krizhevsky, 2009) or CIFAR-100 (Krizhevsky, 2009) is still unsatisfactory, not to mention larger datasets such as ImageNet (Russakovsky et al., 2014), especially against strong adversarial attacks such as AutoAttack (Croce & Hein, 2020). Surprisingly, recent studies found that the robustness achieved by adversarial training can be greatly boosted if the deep neural networks are trained on additional data, which is either selected from a large unlabeled data corpus (Uesato et al., 2019; Carmon et al., 2019) or simply generated by generative models (Sehwag et al., 2021; Gowal et al., 2021), thus requiring minimal human effort. Because these additional data examples may be far away from the original data distribution, selecting high-quality additional data is demonstrated to be crucial in this process (Uesato et al., 2019; Gowal et al., 2020). The typical metric used to select additional data is the classic uncertainty score yielded by a classifier trained on the in-distribution clean data. Dong et al. (2021) suggests that it may be better to use training stability, namely the frequency with which the prediction on an example is correct throughout training.
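Both TCP and the training-stability score just mentioned reduce to simple statistics over per-epoch predictions; a minimal sketch, in our own notation, is given below.

```python
import numpy as np

def training_stability(pred_history, labels):
    """pred_history: (n_epochs, n_examples) array of predicted classes,
    one row per training epoch. Returns, per example, the fraction of
    epochs on which it was predicted correctly -- a proxy for the
    stability/consistency scores discussed above."""
    correct = (pred_history == labels[None, :])   # (n_epochs, n_examples)
    return correct.mean(axis=0)
```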
However, how to build an uncertainty metric to select high-quality additional data remains an open and important problem. Some intriguing problems in adversarially robust learning may also require learning good uncertainty estimates. For example, robust overfitting (Rice et al., 2020) is a well-known problem of adversarial training, which refers to the phenomenon that the robust accuracy on the test set unexpectedly starts to decrease after a certain number of training steps. Such overfitting occurs consistently across different datasets, training settings, adversary settings, and neural architectures. However, when conducting standard training on the same dataset, such an overfitting phenomenon is not observed. Recent work shows that robust overfitting may originate from the label noise that implicitly exists in adversarial training, which is induced by the mismatch between the true conditional distribution and the label distribution of the adversarial examples used for training (Dong et al., 2021). Therefore, a straightforward solution to robust overfitting is to recover the true conditional distribution using uncertainty estimates generated by a model, which is demonstrated to significantly alleviate robust overfitting (Chen et al., 2021).

### 4.2 Uncertainty for efficient learning

In this section, we show that the uncertainty of deep neural networks can be utilized to improve their efficiency. Here we focus on two types of efficiency concerns: data efficiency, namely reducing the human effort on data annotation in training deep neural networks, and model efficiency, namely reducing the computation cost of the training and inference of deep neural networks.

#### 4.2.1 Data efficiency

**Semi-supervised learning.** Semi-supervised learning tackles the challenge in real-world applications where only a limited amount of labeled data is available, while the vast majority of data is unlabeled. A natural solution to semi-supervised learning is incorporating the labels predicted by a classifier on unlabeled data into training, which are typically referred to as pseudo-labels (Lee, 2013). Such a process can be repeatedly conducted as a better classifier is trained once more pseudo-labeled data is available, which is known as bootstrapping or self-training (McClosky et al., 2006). However, since the pseudo-labels predicted by a classifier are very likely to be incorrect, it is crucial to define a reliable uncertainty measure to select those pseudo-labels that are correct. Classic uncertainty can be utilized here but may inevitably suffer from over-confidence (Guo et al., 2017) and memorization (Zhang et al., 2017). The recent development of self-training has seen an increasing trend of "uncertainty-aware" methods. For example, instead of using classic uncertainty, Mukherjee & Awadallah (2020) uses MC-dropout to select pseudo-labels for self-training. They also design a loss function that incorporates the uncertainty on the correctness of pseudo-labels into training, where the selected pseudo-labels that are more likely to be correct receive more focus. Rizve et al. (2021) further shows that self-training using MC-dropout as an uncertainty measure, along with other careful designs, can compete with much more sophisticated semi-supervised learning methods such as consistency regularization (Laine & Aila, 2016; Tarvainen & Valpola, 2017) and mixed methods (Berthelot et al., 2019), while enjoying the advantage that no data augmentation is explicitly required.
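A minimal sketch of MC-dropout-based pseudo-label selection follows; it is our own illustration rather than the cited methods' code, and it assumes the model contains dropout layers.

```python
import torch

def mc_dropout_select(model, x_unlabeled, n_passes=20, threshold=0.1):
    """Keep pseudo-labels whose MC-dropout predictive variance is low."""
    model.train()  # keep dropout active at inference time (MC-dropout)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x_unlabeled), dim=-1)
                             for _ in range(n_passes)])     # (T, N, C)
    mean_probs = probs.mean(dim=0)
    pseudo_labels = mean_probs.argmax(dim=-1)               # (N,)
    # Variance, across passes, of the probability of the chosen class.
    chosen = probs[:, torch.arange(len(pseudo_labels)), pseudo_labels]
    uncertainty = chosen.var(dim=0)
    keep = uncertainty < threshold
    return pseudo_labels[keep], keep
```

Note that calling `model.train()` also affects layers such as batch normalization; a careful implementation would switch only the dropout modules into training mode.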
Zhou et al. (2020) uses the consistency of the model prediction through training as an uncertainty measure in self-training. The hypothesis here is that model predictions that are largely invariant during training are more likely to be correct. This again is reminiscent of the intriguing training dynamics of deep neural networks mentioned before.

**Weakly-supervised learning.** Weakly-supervised learning is even more challenging than semi-supervised learning in that even the limited labels available are not guaranteed to be correct. In most cases, they are generated by a set of pre-defined rules or provided by annotators without domain expertise, which is referred to as weak supervision. Under such circumstances, a reliable uncertainty estimate becomes important not only for selecting pseudo-labels predicted by classifiers on the unlabeled set, but also for selecting those labels given by weak supervision that are likely to be correct. Mekala et al. (2022) provides a comprehensive analysis of uncertainty estimates in weakly-supervised text classification. They found that among multiple uncertainty measures including classic uncertainty, prediction stability and MC-dropout, the learning order performs the best for selecting pseudo-labels given by weak supervision. As mentioned before, the learning order here refers to the consistent order in which deep neural networks learn real and noisy data during training. Because incorrect pseudo-labels are learned late by the model, the epoch at which the model prediction matches the pseudo-label can be an effective metric to distinguish correct and incorrect labels.

#### 4.2.2 Model efficiency

**Model compression and knowledge distillation.** To excel in realistic tasks, deep neural networks often have to be excessively large in capacity. This brings a significant computational burden in both training deep neural networks and using them for predictions. How to compress a large deep neural network while maintaining its performance is thus gaining increasing attention. One well-known method to compress deep neural networks is knowledge distillation (Hinton et al., 2015), where a small network (the student) is trained using the predictions provided by a large network (the teacher), rather than the original labels in the training set. Despite its success, why teacher predictions can help student learning has always been a mystery. A recent finding that the student predictions on the training or test set can often disagree with the teacher predictions on a large number of examples (Stanton et al., 2021) further mystifies knowledge distillation. Toward better understanding and improving knowledge distillation, a significant effort in theoretical analysis has been made in recent works. Specifically, Menon et al. (2021) shows that the predictive distribution provided by the teacher can improve the generalization of the student because the predictive distribution is a better approximation of the true conditional distribution than one-hot labels, and the true conditional distribution as supervision reduces variance. Dao et al. (2021) shows that the distance between teacher predictions and the true conditional distribution can directly bound the student accuracy. Based on this understanding, to improve knowledge distillation we essentially require the teacher to learn better uncertainty estimates in terms of its recovery of the true conditional distribution. Recent work has thus proposed to directly optimize the teacher to learn the true conditional distribution, which is dubbed student-oriented teacher training and has achieved better student performance (Dong et al., 2022).
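The distillation objective referenced throughout this discussion has a standard formulation (Hinton et al., 2015); a sketch is given below, with the temperature and mixing weight as free hyperparameters.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """alpha * T^2 * KL(teacher_T || student_T) + (1 - alpha) * CE(student, y)."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```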
**Adaptive inference time.** Deep neural networks often consist of a great many layers. The inference process can thus be computationally prohibitive, as the calculation of each layer has to be made sequentially, one after another. One idea to reduce the inference cost is to early stop the inference process for some examples when the outputs of the intermediate layers are already informative enough to make predictions, which is known as adaptive inference time in sequence processing and classification (Graves, 2016). Such a strategy leverages the inference dynamics of deep neural networks, where easy examples may have a small prediction depth. To ensure accurate early stopping, a reliable uncertainty estimate is important here to determine when the early predictions are likely to be correct. The entropy of the predictive distribution at early classification heads is widely used in adaptive inference methods across applications in computer vision and natural language processing, albeit the specific designs of the network architecture or training strategies differ. BranchyNet (Teerapittayanon et al., 2016) inserts multiple early-exit classification heads between intermediate layers and trains the network to minimize the weighted sum of the loss functions defined at all classification heads. Multi-Scale DenseNet (MSDNet) (Huang et al., 2017) refines this idea by altering the network architecture such that feature representations at multiple scales can be utilized jointly to determine an early exit. In sequence classification, where transformers dominate, adaptive inference time can be built into the network architecture (Dehghani et al., 2018; Xin et al., 2020). FastBERT (Liu et al., 2020) improves the early-exit accuracy by introducing self-distillation on intermediate classification heads. Schwartz et al. (2020) improves the uncertainty estimate for early exits by using temperature scaling to calibrate the predictive distribution, and achieves a better speed-accuracy trade-off.

## 5 Conclusion

In this survey, we review a wide spectrum of uncertainty measures that can be defined for deep neural networks. These include the classic definitions of uncertainty, such as those based on the predictive distribution, and definitions of uncertainty that are closely connected to the training and inference dynamics of deep neural networks. We show that these uncertainty measures can be leveraged in realistic applications across computer vision and natural language processing to improve the robustness and efficiency of learning systems. We believe there are more scenarios where reliable uncertainty estimates are crucial to performance. We also believe the increasingly popular use cases of uncertainty estimates beyond interpretability and transparency will in turn open up more opportunities in uncertainty learning for deep neural networks.
2308.10099
Geometric instability of graph neural networks on large graphs
We analyse the geometric instability of embeddings produced by graph neural networks (GNNs). Existing methods are only applicable for small graphs and lack context in the graph domain. We propose a simple, efficient and graph-native Graph Gram Index (GGI) to measure such instability which is invariant to permutation, orthogonal transformation, translation and order of evaluation. This allows us to study the varying instability behaviour of GNN embeddings on large graphs for both node classification and link prediction.
Emily Morris, Haotian Shen, Weiling Du, Muhammad Hamza Sajjad, Borun Shi
2023-08-19T20:10:54Z
http://arxiv.org/abs/2308.10099v2
# Geometric instability of graph neural networks on large graphs

###### Abstract

We analyse the geometric instability of embeddings produced by graph neural networks (GNNs). Existing methods are only applicable for small graphs and lack context in the graph domain. We propose a simple, efficient and graph-native Graph Gram Index (GGI) to measure such instability which is invariant to permutation, orthogonal transformation, translation and order of evaluation. This allows us to study the varying instability behaviour of GNN embeddings on large graphs for both node classification and link prediction.

## 1 Introduction

Graph representation learning [1] has seen many recent successes in solving problems over relational data. The importance of stochastic effects is evident in graph learning. Earlier methods such as DeepWalk [2] and node2vec [3], commonly known as shallow embeddings, leverage random walks. Many recent Graph Neural Network (GNN) [4] models contain stochastic components such as sampling and batching. From an implementation point of view, this allows training models on large real-world graphs. From a theoretical point of view, adding randomness is proven to improve expressiveness and performance [5][6]. The impact of randomness common in standard machine learning frameworks also carries over to the graph domain, such as in weight initialization, gradient descent, dataset splits [7], etc. It is therefore important to understand how all the stochastic components together affect graph learning models at a granular level.

Stability of embeddings and models has been extensively studied outside of the graph domain [8][9][10][11]. Yet the practical impact of randomness on GNNs has been very rarely studied. [12] recently empirically showed that final predictions given by GCN [13] and GAT [14] vary significantly on individual nodes despite relatively stable overall accuracies. At a finer level, [15] showed the instability in embeddings produced by several shallow methods and GraphSAGE [16], and [17] did so specifically for node2vec. To the best of our knowledge, all previous studies are carried out on small graphs for node classification, and stability indices from other domains are directly borrowed. Additional motivations for understanding embedding geometric stability include improving reproducibility and model reliability [11], reducing retraining effort when embeddings are used by multiple downstream systems by measuring drift [18], and understanding how a model works by studying its embedding space.

The main contributions of this paper are:

* We formalise the notion of a geometric stability index for embeddings and examine existing methods.
* We propose a simple, time- and space-efficient and graph-native stability index, the Graph Gram Index (GGI). We motivate and show that GGI is invariant to permutation, orthogonal transformation, translation and the order of evaluation.
* We show the geometric instability of embeddings of several popular baseline GNNs on large graphs for both node classification and link prediction.

## 2 Geometric stability

**Notation.** Let \(G=(V,E)\) denote a graph, \(|V|\) the number of nodes, \(|E|\) the number of relationships. Each node has an input feature in \(\mathbb{R}^{n}\), and a final embedding in \(\mathbb{R}^{d}\). For a given node at position \(i\) in some ordered list, we refer to that node as \(v_{i}\). We refer to the final embedding of that node as \(z_{i}\in\mathbb{R}^{1\times d}\).
Whenever we need to refer to the embedding of some node \(v_{i}\) by name, we will abuse the notation slightly as \(z_{v_{i}}\). Let \(Z\in\mathbb{R}^{|V|\times d}\) denote the embeddings of all nodes in the graph. Since we are concerned with multiple embeddings produced from different configurations, the superscript in \(Z^{k}\) denotes the embeddings produced from configuration \(k\), unless otherwise stated. Let \(N\) denote the overall number of configurations.

Stability indices that have been used to evaluate the geometric stability of embeddings of graph models are either borrowed from the NLP domain [15] or from analytic topology [17]. We give the definition of one similarity index from each area and refer to the remaining ones in Appendix A.

**Second-order cosine.** Given two embedding matrices \(Z^{l}\), \(Z^{m}\), for each node \(v_{i}\) define an ordered list \(\{u_{1},\ldots,u_{K}\}=\mathcal{N}^{l}_{k}(v_{i})\cup\mathcal{N}^{m}_{k}(v_{i})\), where \(\mathcal{N}^{l}_{k}(v_{i})\) is the set of \(k\) nearest neighbours of node \(v_{i}\) under configuration \(l\). Let \(s^{l}(v_{i})\) denote a vector about \(v_{i}\) whose \(j^{th}\) entry is \(s^{l}_{j}(v_{i})=cossim(z^{l}_{i},z^{l}_{u_{j}})\); similarly we define \(s^{m}(v_{i})\). Let \(s^{cos[l,m]}_{k}(z^{l}_{i},z^{m}_{i})=cossim(s^{l}_{v_{i}},s^{m}_{v_{i}})\). Cosine distance is sometimes used equivalently. The second-order cosine similarity of \(N\) configurations is then the average over all nodes and all configuration pairs, i.e. \(\frac{1}{N^{2}\times|V|}\sum_{l,m\in N\times N}\sum_{i\in V}s^{cos[l,m]}_{k}(z^{l}_{i},z^{m}_{i})\). This stability index was originally used to detect semantic shifts of words over time [19].

**Wasserstein distance.** Given two embedding matrices \(Z^{l}\), \(Z^{m}\), the Wasserstein distance between them is \(W(Z^{l},Z^{m})=\left(\inf_{\eta:V\to V}\sum_{i\in V}\lVert z^{l}_{i}-z^{m}_{\eta(i)}\rVert_{2}^{2}\right)^{1/2}\), where the infimum is taken over all bijections \(\eta\) between nodes. The average across all possible configuration pairs is taken as the stability index.

While these indices have shown preliminary instability results, there are several improvements to be made. First of all, most of the existing similarity indices are inefficient to compute, as is evident from the common _averaging over all nodes and all configuration pairs_. Finding \(k\) nearest neighbours (kNN) is itself a costly operation. While there are performant kNN libraries [20], employing them adds additional engineering cost. Opting for approximate kNN speeds up the process [21], at the cost of introducing confounding randomness which makes it difficult to draw concrete conclusions about the embeddings. Several of the neighbour-based indices contain hyperparameters of their own (\(k\)), which for the same reason should ideally be avoided. We give the full time and space complexity in Appendix B. For any index (such as aligned cosine) that directly compares embeddings across configurations (i.e. to compare \(z^{l}_{i}\), \(z^{m}_{i}\)), an alignment step such as solving the orthogonal Procrustes problem [22] (definition in Appendix C) needs to be done first. While embedded node neighbours can still carry important information, the interpretation of semantically similar words no longer applies. Indices from analytic topology (such as the Wasserstein distance) are originally used to measure the overlap of probability distributions over a metric space, which again does not map to any natural interpretation for graph embeddings. The same drawback applies to the Hausdorff distance (we elaborate more in Appendix D).
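To make the cost of these neighbour-based indices concrete, below is a direct (unoptimised) sketch of second-order cosine similarity for a single node and a single pair of configurations; all variable names are ours.

```python
import numpy as np

def second_order_cossim(Z_l, Z_m, i, k=20):
    """Second-order cosine similarity of node i between configurations l, m.
    Z_l, Z_m: (|V|, d) embedding matrices. Each call requires kNN searches,
    which is what makes these indices costly on large graphs."""
    def cossim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    def knn(Z, i, k):
        sims = Z @ Z[i] / (np.linalg.norm(Z, axis=1) * np.linalg.norm(Z[i]) + 1e-12)
        sims[i] = -np.inf                       # exclude the node itself
        return np.argsort(-sims)[:k]

    # Union of the k nearest neighbours of node i under both configurations.
    union = np.union1d(knn(Z_l, i, k), knn(Z_m, i, k))
    s_l = np.array([cossim(Z_l[i], Z_l[u]) for u in union])
    s_m = np.array([cossim(Z_m[i], Z_m[u]) for u in union])
    return cossim(s_l, s_m)
```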
We propose that a stability index for embeddings produced by graph machine learning models should ideally be time and space efficient to compute, free of hyperparameters of its own, and have an intuitive and simple interpretation in the context of graphs. Having such an index enables easier analysis of embeddings produced by graph models on large real-world graphs.

## 3 Graph Gram Index

We propose a simple index called the Graph Gram Index (GGI). \(Z^{l}Z^{l}{}^{T}\) is indeed the Gram matrix, which contains all covariance information between \(z^{l}_{i}\), \(z^{l}_{j}\). Using the Gram matrix avoids the Procrustes alignment step which would be required if we calculated \(z_{i}^{l}z_{j}^{m}\). Intuitively, this is achieved because we _compare within one set of embeddings to generate a structure summary, and compare the summaries across configurations._ The commutative diagram gives a pictorial illustration: all previous methods follow the top-right path while ours follows the bottom-left. An analogous approach underpins Centered Kernel Alignment (CKA), a _similarity_ measure to compare internal representations across general neural networks (of any domain). We refer to Appendix E for a definition of CKA and how it inspires our method.

Applying the Hadamard product of the adjacency matrix with the Gram matrix means that we only capture the node pairs that are actual edges in the graph. Intuitively, for a stable model, node pairs that correspond to an edge should remain similar in their respective embedding spaces, _up to any equivariant transformation satisfied by the specific model_. Note that a node pair corresponding to an edge could potentially lie far apart in the embedding space, which previous stability indices would not consider. The framework of GGI is flexible enough to allow various extensions, for example using a summary structure not based on the Gram matrix, using other pooling steps \(S^{l}\to s^{l}\) such as the Frobenius norm (notably, this step is the same as many graph pooling steps), and replacing the standard deviation with other summary statistics. We leave these as future work.

### Invariance properties

To the best of our knowledge, there is no prior study of the geometric invariances a stability index should satisfy. It is important to define such invariances, because the meaningful geometric instability an index captures should be the kind that consumers of the embeddings (the final GNN layer or a downstream model) are not equivariant to. At the same time, it should ideally not be invariant only to an overly specific set of transformations that few models satisfy. Hence, we propose that it is desirable to be invariant to node permutation, orthogonal transformation and translation. In addition, it should be invariant to the order of evaluating a given set of configurations. kNN-Jaccard and second-order cossim, for example, are not invariant to permutation due to their own nondeterminism from sampling.

**Lemma**.: _GGI is invariant to node permutation, orthogonal transformation and translation of embeddings, and to the order of evaluation._

The proof follows from simple applications of properties of matrix multiplication and isometries, which we include in Appendix F.
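The text above pins GGI down up to the choice of pooling; the sketch below is our reading of it, under stated assumptions: per configuration, the Gram matrix restricted to actual edges (\(A\circ ZZ^{T}\)) is summarised as a per-edge vector, and instability is the spread of these summaries across configurations. The centring, row-normalisation, and mean-of-std pooling are our assumptions, not necessarily the authors' exact choices.

```python
import numpy as np

def graph_gram_index(Z_list, edges):
    """A sketch of GGI. Z_list: list of (|V|, d) embedding matrices, one per
    configuration. edges: (|E|, 2) integer array of node-index pairs.
    Only the edge entries of the Gram matrix are computed, so the dense
    |V| x |V| matrix is never materialised."""
    src, dst = edges[:, 0], edges[:, 1]
    summaries = []
    for Z in Z_list:
        Zc = Z - Z.mean(axis=0, keepdims=True)   # centring: translation invariance
        # Row-normalise so the edge entries of the Gram matrix become cosine
        # similarities (an assumption on the pooling, not the authors' spec).
        Zn = Zc / (np.linalg.norm(Zc, axis=1, keepdims=True) + 1e-12)
        summaries.append((Zn[src] * Zn[dst]).sum(axis=1))  # one value per edge
    S = np.stack(summaries)                      # (n_configurations, |E|)
    # Instability: per-edge standard deviation across configurations,
    # averaged over all edges.
    return S.std(axis=0).mean()
```

Because each configuration is summarised independently before any cross-configuration comparison, no Procrustes alignment is needed; centring removes translations, and orthogonal transformations leave the dot products, and hence the edge summaries, unchanged.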
## 4 Experiments

Code for the experiments and our implementation of GGI is available on GitHub.2

Footnote 2: [https://github.com/brs96/geometric-instability-gnn-large-graphs](https://github.com/brs96/geometric-instability-gnn-large-graphs)

**Node classification.** We use a machine with 60GB of RAM and 8 vCPUs, with one NVIDIA T4 GPU. We evaluate four popular baseline GNN models: GCN, GraphSAGE, GAT and GIN [23]. We use Cora as a small baseline comparison with existing methods, and additionally calculate the Graph Gram Index on ogbn-arxiv [24]. For each model, we train with 30 different random seeds with embedding dimension 64 on Cora and 128 on ogbn-arxiv. Further training details are in Appendix H. Computing second-order cossim on Cora takes around 20 minutes, with the 20 nearest neighbours of all nodes already precomputed. On the other hand, computing GGI across 30 sets of embeddings takes only **10 seconds** using the single GPU. The GGI indices for the four models are _1.7%, 3%, 4% and 6.7%_ respectively on Cora, and _0.7%, 1.2%, 1.6% and 3.1%_ on ogbn-arxiv.

We take advantage of the flexibility of the final step of GGI and replace taking the standard deviation with taking box plots. GraphSAGE and GCN produce embeddings whose edge source and target nodes are assigned more orthogonal vectors, as illustrated in Figures 1(a) and 1(b). Box plots for 20NN-Jaccard and second-order cossim give contradictory results between themselves, which we include in Appendix G; this further shows the difficulty of drawing consistent conclusions based on these methods. It takes close to one hour to simply compute 20NN-Jaccard for one configuration, and prohibitively long to compute second-order cossim on ogbn-arxiv. It however takes **less than 20 minutes** to calculate GGI, with peak GPU vRAM of less than **10 GB**, doable on any commodity machine.

**Link prediction.** To the best of our knowledge, there is no previous literature studying the geometric stability of embeddings produced by link prediction models. We evaluate batched versions of GraphSAGE and GCN (with GraphSAINT sampling [25]) on ogbl-citation2 [24] with embedding dimension 128. A standard training setup is used, as detailed in Appendix H.

We observe the GGI for batched GraphSAGE and GCN to be _2.5%_ and _2.6%_ respectively. A similar box plot is shown in Appendix E. Interestingly, both the GGI index and the box-plot statistics align with the observations we see during full-batch training of the respective models, on different datasets and even tasks. It hence indicates that the embedding geometric stability, as well as the shape of the learnt point clouds, seems to be inherent to the model architecture.

## 5 Conclusion and future work

Our results indicate that there seems to be a true level of embedding geometric stability produced by one model architecture. GGI and the framework behind it offer a convenient tool for researchers and practitioners to evaluate embedding geometric stability on large graphs. One could leverage its flexibility to adopt suitable indices that capture desirable geometric properties in the embedding space. The implementation of GGI can easily be further improved by leveraging sparse-dense multiplication [26] (for \(A\circ Gram\)) and by splitting the multiplications into chunks to fit arbitrarily large graphs into a parallelizable GPU workload.

Figure 1: Box plots of stability measures for Cora.
2303.03397
Malaria detection using Deep Convolution Neural Network
The latest WHO report showed that the number of malaria cases climbed to 219 million last year, two million higher than the previous year. The global efforts to fight malaria have hit a plateau and the most significant underlying reason is that international funding has declined. Malaria, which is spread to people through the bites of infected female mosquitoes, occurs in 91 countries, but about 90% of the cases and deaths are in sub-Saharan Africa. The disease killed 435,000 people last year, the majority of them children under five in Africa. AI-backed technology has revolutionized malaria detection in some regions of Africa and the future impact of such work can be revolutionary. The Malaria Cell Image Dataset is taken from the official NIH website. The aim of the collection of the dataset was to reduce the burden on microscopists in resource-constrained regions and improve diagnostic accuracy using an AI-based algorithm to detect and segment the red blood cells. The goal of this work is to show that state-of-the-art accuracy can be obtained even by using a 2-layer convolutional network, and to set a new baseline in malaria detection efforts using AI.
Sumit Kumar, Harsh Vardhan, Sneha Priya, Ayush Kumar
2023-03-04T20:54:40Z
http://arxiv.org/abs/2303.03397v2
# Malaria detection using Deep Convolution Neural Network

## I Abstract

_The latest WHO report showed that the number of malaria cases climbed to 219 million last year, two million higher than the previous year. The global efforts to fight malaria have hit a plateau and the most significant underlying reason is that international funding has declined. Malaria, which is spread to people through the bites of infected female mosquitoes, occurs in 91 countries, but about 90% of the cases and deaths are in sub-Saharan Africa. The disease killed 435,000 people last year, the majority of them children under five in Africa. AI-backed technology has revolutionized malaria detection in some regions of Africa and the future impact of such work can be revolutionary. The Malaria Cell Image Dataset is taken from the official NIH website. The aim of the collection of the dataset was to reduce the burden on microscopists in resource-constrained regions and improve diagnostic accuracy using an AI-based algorithm to detect and segment the red blood cells. The goal of this work is to show that state-of-the-art accuracy can be obtained even by using a \(2\)-layer convolutional network, and to set a new baseline in malaria detection efforts using AI._

## II Introduction

In engineering, model-based design [1], model-based control [2], and model-based optimization [3] are essential components. In recent times, the discovery of new data-driven models, especially deep convolutional neural networks, which have worked well in image classification (AlexNet [4]), engineering design [5], autonomous driving [6, 7], radiology [8], the human genome [9], and many more areas, is revolutionizing the development of state-of-the-art [7] control and prediction systems. Convolutional Neural Networks (CNNs) are said to be inspired by biological processes that take place in the human brain, and their connectivity pattern between neurons resembles the organization of the animal visual cortex. The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnable filters, also called kernels; the parameters of the kernels are trained during the learning process and used during prediction. In 2012, a CNN called AlexNet [4] won the ImageNet Large Scale Visual Recognition Challenge. In later years, GoogLeNet [10] increased the mean average precision of object detection to 0.439329 and reduced the classification error to 0.06656, the best result to date. The performance of GoogLeNet was close to or better than that of humans. In this work, the goal is to use a CNN-based network for the classification of cell images, to distinguish between parasitized and uninfected cells. We found that a small network can work better than the current state-of-the-art networks for malaria detection if the hyperparameters are tuned properly. The expected benefit of a small network is the deployment of a malaria detection system on resource-constrained devices like mobile phones and tablets, in regions where high-performance computing is not available.

## III The problem

The most widely used method (so far) is examining thin blood smears under a microscope and visually searching for infected cells. The patient's blood is smeared on a glass slide and stained with contrasting agents to better identify infected parasites in the red blood cells. Then, a clinician manually counts the number of parasitic red blood cells, sometimes up to 5,000 cells (according to the WHO protocol). Manual counting is error-prone and slow.
A clinician takes 10 to 30 minutes to count such a number of cells, as it is a time-consuming process. There are general guidelines that lab technicians should process no more than 25 slides each day, but a lack of qualified workers leads some to process four times as many.

**Why a neural network?** Neural networks have performed really well in recent years in their ability to automatically extract features and learn filters, and have acted as very good image classifiers. In previous machine learning solutions, features had to be manually programmed in, for example, the size, color, and morphology of the cells. Utilizing convolutional neural networks (CNNs) will greatly speed up prediction time while mirroring (or even exceeding) the accuracy of clinicians.

**Dataset:** Data is the basic building block of a CNN-based predictor. Without data, training the network is not possible. Thankfully, we have a labeled and preprocessed dataset of cell images to train and evaluate our model. NIH [11] has a malaria dataset of 27,558 cell images with an equal number of parasitized and uninfected cells. The dataset contains 2 folders: (1) Infected; (2) Uninfected. These \(27,558\) images are variable-size color images with equal instances of parasitized and uninfected cells, taken from the thin blood smear slide images of the Malaria Screener research activity. Giemsa-stained thin blood smear slides from 150 P. falciparum-infected patients and 50 healthy patients were collected and photographed at Chittagong Medical College Hospital, Bangladesh. A smartphone's built-in camera acquired images of the slides for each microscopic field of view. The images were manually annotated by an expert slide reader at the Mahidol-Oxford Tropical Medicine Research Unit in Bangkok, Thailand.

## IV Approach

Classification of cell images is an interesting problem and has great utility. There are already countries and medical universities leveraging AI-backed technology that can detect diseases (refer to related work). We choose LeNet-5 [12] as the starting model, which works with grayscale images. The LeNet-5 network is modified for multi-channel images (as the images are on the RGB scale) to work with 3-channel inputs, and further hyperparameter tuning [13] is done to classify colored cell images with an accuracy target of 95% or more on the test data set. A deep convolutional neural network trains on millions of pictures as input before it is able to generalize and make predictions for images it has never seen before. To teach an algorithm how to recognize objects in images, we use a specific type of Artificial Neural Network: a Convolutional Neural Network (CNN). Their name stems from one of the most important operations in the network: convolution. Convolutional Neural Networks are inspired by the brain. Research in the 1950s and 1960s by D. H. Hubel and T. N. Wiesel on the brains of mammals suggested a new model for how mammals perceive the world visually. They showed that cat and monkey visual cortexes include neurons that respond exclusively to stimuli in their direct environment. The computer world consists of only numbers. Every image can be represented as a multi-dimensional array of numbers, known as pixels. CNNs, like neural networks, are made up of neurons with learnable weights and biases. Each neuron receives several inputs, takes a weighted sum over them, passes it through an activation function, and responds with an output. The whole network has a loss function.
Unlike feed-forward neural networks, where the input is a vector, here the input is a multi-channel image (3 channels in this case). The convolution layer is the main building block of a convolutional neural network. The convolution layer comprises a set of independent filters. Each filter is independently convolved with the image, and we end up with multiple feature maps. All these filters are initialized randomly and become parameters which will subsequently be learned by the network. Parameter sharing is the sharing of weights by all neurons in a particular feature map. Local connectivity is the concept of each neuron being connected only to a subset of the input image (unlike a neural network where all the neurons are fully connected). This helps to reduce the number of parameters in the whole system and makes the computation more efficient. A pooling layer is another building block of a CNN. Its function is to progressively reduce the spatial size of the representation to reduce the number of parameters and computations in the network. The pooling layer operates on each feature map independently. Batch normalization is a method we can use to normalize the inputs of each layer, in order to fight the internal covariate shift problem. During training, a batch normalization layer does the following: it first calculates the mean and variance of the layer's input, then it normalizes the layer inputs using the calculated batch statistics, and last, it scales and shifts in order to obtain the output of the layer. Dropout is a regularization technique for neural network models. In the dropout technique, randomly selected neurons are ignored during training; they are "dropped out" randomly. This means that their contribution to the activation of downstream neurons is temporarily removed on the forward pass, and no weight updates are applied to those neurons on the backward pass.

Data: The data set was downloaded from the NIH website [11]. Sample images of both an uninfected cell and an infected cell are shown below. On the left, we have a cell image which is not infected by the malaria parasite. The infected cell image contains violet dots, which represent the malaria plasmodium parasite during imaging.

After a hyperparameter search, our final architecture consists of two convolution layers. The input data is preprocessed and resized to \(64\times 64\), and is fed into a convolution layer with 32 filters of size \(3\times 3\) with a stride of \(1\). The result of the convolution is a \(62\times 62\times 32\) feature array. Then the feature array is fed to a max-pooling layer of size \(2\times 2\), and the resultant feature matrix becomes \(31\times 31\times 32\). Max pooling [14] reduces the computational resources required by reducing the spatial size of the feature maps while keeping the salient features intact. A regularization [15] technique called batch normalization is deployed across all \(32\) parallel channels of the output of the max-pooling layer. The batch-normalized feature matrix is followed by a dropout layer, with a dropout factor equal to \(0.2\). The convolution layer, max-pooling layer, batch normalization layer, and dropout [16] complete one full convolution block in the architecture. The architecture is shown below. The output of one complete convolution block is given to another convolution layer with 32 filters of size \(3\times 3\) with a stride of 1. The result of the convolution is a \(29\times 29\times 32\) feature array.
Then the feature array is fed to a max-pooling layer of size \(2\times 2\), and the resultant feature matrix becomes \(14\times 14\times 32\). Then, batch normalization is deployed across all \(32\) parallel channels of the output of the max-pooling layer, and the batch-normalized feature matrix is followed by a dropout layer with a dropout factor equal to \(0.2\). After the two complete convolution blocks, the feature size has been reduced enough and most of the features have been extracted, so that we can now connect it with the feed-forward network. The \(14\times 14\times 32\) feature matrix is flattened, giving a vector of size 6272. This flattened feature is fed to a feed-forward layer of size 512, along with batch normalization and dropout. The feed-forward network is further connected to another layer of 256 neurons with batch normalization and dropout. Finally, the output layer is connected with 2 neurons, and the activation function in the output layer is softmax. The activation of the rest of the layers is the Rectified Linear Unit (ReLU) [17]. The cost function used for error measurement is categorical cross-entropy, and the optimizer is Adam. The number of parameters to be learned is 3,357,090 (approx. 3.5 million). With the available computation capacity, training took 20 minutes. A summary of the model is shown below.

Fig. 1: Input data image of cell (left: not infected cell; right: infected cell)

Fig. 2: The architecture of the CNN

Fig. 3: Summary of the neural network model
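The architecture described above can be written down directly in Keras. The sketch below mirrors the text (a \(64\times 64\times 3\) input, two Conv-MaxPool-BatchNorm-Dropout blocks with 32 filters of size \(3\times 3\), dense layers of 512 and 256 units, and a 2-way softmax output); for this configuration Keras reports 3,357,090 total parameters, matching the count stated above. Training details (the 50-epoch budget and the 95% early-stopping callback) are omitted here.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model():
    model = keras.Sequential([
        keras.Input(shape=(64, 64, 3)),
        # Block 1: 64x64x3 -> 62x62x32 -> 31x31x32
        layers.Conv2D(32, (3, 3), strides=1, activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        # Block 2: 31x31x32 -> 29x29x32 -> 14x14x32
        layers.Conv2D(32, (3, 3), strides=1, activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        # Classifier head: 14 * 14 * 32 = 6272 flattened features
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        layers.Dense(256, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```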
The optimiser used is Adaptive Moment Estimation (Adam) [18], which combines ideas from both RMSProp and Momentum. It computes adaptive learning rates for each parameter:

\[v_{dW}=\beta_{1}v_{dW}+(1-\beta_{1})\frac{\partial\mathcal{J}}{\partial W}\]
\[s_{dW}=\beta_{2}s_{dW}+(1-\beta_{2})\left(\frac{\partial\mathcal{J}}{\partial W}\right)^{2}\]
\[v_{dW}^{corrected}=\frac{v_{dW}}{1-(\beta_{1})^{t}}\]
\[s_{dW}^{corrected}=\frac{s_{dW}}{1-(\beta_{2})^{t}}\]
\[W=W-\alpha\frac{v_{dW}^{corrected}}{\sqrt{s_{dW}^{corrected}}+\varepsilon}\]

The loss function deployed is categorical cross-entropy. Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label. So predicting a probability of 0.012 when the actual observation label is 1 would be bad and result in a high loss value. A perfect model would have a log loss of 0.

## V Results

The model is trained, and the results of the model and the cost function are shown below. The training and validation errors are calculated simultaneously as the training progresses. The number of epochs is set to 50, while a callback function is written to invoke early stopping when the model's validation accuracy reaches 95%. This is done to avoid overfitting (generalization error). If the callback is not deployed, the model overfits and memorizes the training data. In such circumstances, the model gives 98% accuracy on the training data but performs badly on validation and testing data. Once the training is done (it stops when the validation accuracy reaches 95%), the testing error is calculated on the trained model. The testing accuracy achieved is around 95.4%, which is satisfactory.

Fig. 4: Algorithm used during training

Fig. 5: Result of training and testing

The cost function is calculated and plotted with respect to batch progression. The cost function initially drops sharply, and then the rate of decrease is reduced. Batch gradient descent is used with a batch size of 64 images per batch. As a result of batch training, the cost becomes noisy and fluctuates, but the overall trend of progression is toward the optimal value. The cost function is plotted below.

## VI Related work

Machine learning is being used in control [19, 20, 21, 22, 23, 24], in modelling [25, 26], and in the prediction of complex systems [27, 28, 29, 30, 31, 32, 33]. There are various works using machine learning in malaria detection [34, 35]. [35] used a deep CNN for malaria detection, but their model is more complex than ours. [34] used a stacking-based approach for automated quantitative detection of Plasmodium falciparum malaria from blood smears. For the detection, a custom-designed convolutional neural network (CNN) operating on a focus stack of images is used. The cell counting problem is addressed as a segmentation problem, and they propose a 2-level segmentation strategy. The use of a CNN operating on a focus stack for the detection of malaria not only improved the detection accuracy (both in terms of sensitivity [97.06%] and specificity [98.50%]) but also favored processing on cell patches and avoided the need for hand-engineered features.

[36] used machine learning for the detection of malaria. In 2016, Uganda's Ministry of Health found that the disease is the leading cause of death in the country - accounting for 27 per cent of deaths. Mortality rates are particularly high in rural areas, where the lack of doctors and nurses is acute. Nursing assistants are often taught to read slides instead, but inadequate training can lead to misdiagnosis. The lack of available lab technicians in the region leads some to process four times the recommended number of screenings in a day, while it is recommended that each technician process no more than 25 slides each day. There are so many patients who may require malaria and TB tests that technicians are overworked day and night. The AI lab [37] at Makerere University has developed a way to diagnose blood samples using a cell phone. The program learns to create its own criteria based on a set of images that have been presented to it previously. It learns to recognize the common features of the infections. A smartphone clamped in place over one microscope eyepiece brings to light a detailed image of the blood sample below - each malaria parasite circled in red by artificially intelligent software. With this AI-backed technology, pathogens are counted and mapped out quickly, ready to be confirmed by a health worker. Diagnosis times could be slashed from 30 minutes to as little as two minutes. The AI software is built on deep learning algorithms that use an annotated library of microscope images to learn the common features of the plasmodium parasites that cause malaria. Along with malaria parasites, this lab also diagnoses the bacterium Mycobacterium tuberculosis, which is responsible for tuberculosis. Researchers at the Lister Hill National Center for Biomedical Communications (LHNCBC), part of the National Library of Medicine (NLM), have developed a mobile application that runs on a standard Android smartphone attached to a conventional light microscope.

## VII Conclusions

In this work, we showed a state-of-the-art \(2\)-layer deep convolutional network for malaria diagnosis and detection. This method works with state-of-the-art-level accuracy without any expensive preprocessing.
2304.07142
On Data Sampling Strategies for Training Neural Network Speech Separation Models
Speech separation remains an important area of multi-speaker signal processing. Deep neural network (DNN) models have attained the best performance on many speech separation benchmarks. Some of these models can take significant time to train and have high memory requirements. Previous work has proposed shortening training examples to address these issues but the impact of this on model performance is not yet well understood. In this work, the impact of applying these training signal length (TSL) limits is analysed for two speech separation models: SepFormer, a transformer model, and Conv-TasNet, a convolutional model. The WSJ0-2Mix, WHAMR and Libri2Mix datasets are analysed in terms of signal length distribution and its impact on training efficiency. It is demonstrated that, for specific distributions, applying specific TSL limits results in better performance. This is shown to be mainly due to randomly sampling the start index of the waveforms resulting in more unique examples for training. A SepFormer model trained using a TSL limit of 4.42s and dynamic mixing (DM) is shown to match the best-performing SepFormer model trained with DM and unlimited signal lengths. Furthermore, the 4.42s TSL limit results in a 44% reduction in training time with WHAMR.
William Ravenscroft, Stefan Goetze, Thomas Hain
2023-04-14T14:05:52Z
http://arxiv.org/abs/2304.07142v2
# On Data Sampling Strategies for Training Neural Network Speech Separation Models

###### Abstract

Speech separation remains an important area of multi-speaker signal processing. Deep neural network (DNN) models have attained the best performance on many speech separation benchmarks. Some of these models can take significant time to train and have high memory requirements. Previous work has proposed shortening training examples to address these issues but the impact of this on model performance is not yet well understood. In this work, the impact of applying these training signal length (TSL) limits is analysed for two speech separation models: SepFormer, a transformer model, and Conv-TasNet, a convolutional model. The WSJ0-2Mix, WHAMR and Libri2Mix datasets are analysed in terms of signal length distribution and its impact on training efficiency. It is demonstrated that, for specific distributions, applying specific TSL limits results in better performance. This is shown to be mainly due to randomly sampling the start index of the waveforms resulting in more unique examples for training. A SepFormer model trained using a TSL limit of 4.42s and dynamic mixing (DM) is shown to match the best-performing SepFormer model trained with DM and unlimited signal lengths. Furthermore, the 4.42s TSL limit results in a 44% reduction in training time with WHAMR.

speech separation, context modelling, data sampling, speech enhancement, transformer

## I Introduction

Speech separation models are used in a number of downstream speech processing tasks, from meeting transcription to assistive hearing [1, 2, 3, 4]. Often, speakers are at a far-field distance from the microphone, which creates additional challenges for speech separation due to interference from noise and reverberation in the signal [5, 6, 7]. DNN separation models have led to significant improvements on anechoic data, but there is a performance gap when these models are used for more distorted speech data [8, 9, 10]. Many of these models use Transformer or bidirectional long short-term memory (BLSTM) layers [9, 10, 11], which can consume large amounts of memory and have quadratic time complexity, i.e. for \(L\) input frames of data the model performs at least \(L^{2}\) operations [12]. This is a particular concern in training, when memory requirements are higher due to storing gradients for each operation required in the back-propagation stage [13]. This increased computational load also means longer training times. One way to compensate for the memory requirements is to use a batch size of 1 [9], which leads to even longer training times as more parameter updates are performed. Another approach that reduces memory requirements and allows for larger batch sizes is to reduce the mixture signal length [14]. This reduces the training time but potentially at the expense of performance. In this work, the first aim is to address whether there are TSL limits beyond which no additional performance gain can be attained by DNN speech separation models. It is shown that, depending on the model and dataset selection, there is a TSL limit at which not only is no additional performance gain attained, but limiting the TSL to a specific value can actually lead to notably improved performance. This effect is demonstrated to be due to the random sampling of the start index when using TSL limits. Further evaluations show the benefit of having more unique training examples compared to using the full signal lengths. Finally, the application of TSL limits used with DM [15] is evaluated.
The remainder of this paper is structured as follows. In Section II, the signal model is introduced. The separation networks and datasets used are described in Section III and Section IV, respectively. Section V presents evaluations of varying the TSL limit for each separation network and dataset. Section VI explores splitting signals to generate more training examples and whether DM mitigates the gains found using TSL limits. Final conclusions are provided in Section VII.

## II Signal Model

The noisy reverberant speech separation problem is defined as aiming to estimate \(C\) speech signals \(\hat{s}_{c}[i]\) for sample index \(i\) and speaker number \(c\in\{1,\ldots,C\}\) from the discrete time-domain mixture

\[x[i]=\sum_{c=1}^{C}s_{c}[i]*h_{c}[i]+\nu[i] \tag{1}\]

of length \(L_{x}\). The \(*\) operator denotes convolution, \(h_{c}[i]\) is a room impulse response (RIR) corresponding to speaker \(c\), and \(\nu[i]\) denotes additive background noise.

## III Separation Models

The SepFormer [9] and Conv-TasNet [14] models are both widely researched time-domain audio separation networks (TasNets). The structure of these TasNets is to have a time-domain neural encoder which encodes a mixture signal block \(\mathbf{x}_{\ell}\) of size \(L_{\mathrm{BL}}\) to \(\mathbf{w}_{\ell}\), followed by a mask estimation network which estimates a series of masks \(\mathbf{m}_{\ell}\) for each of \(C\) speakers. These masks are used to separate the encoded features, which are then decoded back into a time-domain signal \(\mathbf{s}_{\ell,c}\) using a neural decoder. An example of the architecture for \(C=2\) speakers can be seen in Figure 1. Both Conv-TasNet and SepFormer are trained using the utterance-level permutation-invariant scale-invariant signal-to-distortion ratio (SISDR) objective function [9, 14, 16, 17]. Models are trained according to the best performing models in each of their original papers [9, 14] unless stated otherwise. Batch sizes of \(2\) and \(4\) are used for SepFormer and Conv-TasNet respectively, except where otherwise stated.

### _SepFormer Network_

The SepFormer model is briefly introduced in this section. SepFormer is chosen because it is a large transformer model that is among the state-of-the-art models on several speech separation benchmarks [9, 18]. The SepFormer uses a 1D convolutional layer for encoding the signal, followed by a rectified linear unit (ReLU) activation function. The decoder is a single transposed 1D convolutional layer. The mask estimation network uses a dual-path structure [11] whereby a series of Transformer layers are stacked such that each alternating layer computes multi-head attention over either the local or global context of the sequence. The local processing is achieved by first splitting the input signal into overlapping chunks of a predetermined size \(K\), turning a batched 3D tensor into a 4D tensor. The output of the local Transformer layer is a 4D tensor which is then reshaped by swapping the axes of encoded chunks and the number of chunks before being fed into the global Transformer layer. The final stage is to reconstruct a 3D tensor using the overlap-add method to produce \(C\) sequences of masks. The encoded features are then masked and decoded back into the time domain.

### _Conv-TasNet_

The Conv-TasNet model contrasts with the SepFormer in that it is a much smaller model (25.8M vs.
3.5M for Conv-TasNet in the implementations used here) and the only global information it processes is the overall signal energy, whereas the SepFormer model has global access to all information in the input signal due to the transformer layers used. Conv-TasNet uses a temporal convolutional network (TCN) sequence model for the mask estimator in Figure 1. The encoder and decoder of the network are the same as those used for the SepFormer model but with a different number of output channels in the configuration used in this paper. The mask estimation network is composed of a 1D pointwise convolution (P-Conv) bottleneck layer, a TCN and a 1D P-Conv projection layer with a ReLU activation function to produce the sequence of masks. The TCN is composed of a series of convolutional blocks consisting of P-Conv and depthwise-separable convolution (DS-Conv) layers with kernel size \(P\). The convolutional blocks are configured in stacks of \(X\) blocks with increasing dilation factors \(f\in\{2^{0},2^{1},\ldots,2^{X-1}\}\). Stacks are repeated \(R\) times with the dilation factor reset at the start of each stack [14]. ## IV Datasets Three corpora of 2-speaker mixtures are analysed in this work. The trends demonstrated later in Sections V and VI for the 2-speaker scenario are assumed to generalize to higher \(C\) values. For all corpora, the \(8\)kHz _min_ configuration is used. The _min_ configuration refers to mixtures being truncated to the shortest utterance in a mixture as opposed to padding shorter utterances to the longest utterance. ### _WSJ0-2Mix and WHAMR_ WSJ0-2Mix and WHAMR are both simulated 2-speaker corpora derived from the WSJ0 corpus [19, 20]. WSJ0-2Mix takes speech samples from WSJ0 and overlaps them at speech-to-speech ratios (SSRs) between \(0\) and \(5\) dB. WHAMR is a noisy reverberant extension of WSJ0-2Mix with noise from the WHAM [21] dataset mixed with the loudest speaker at signal-to-noise ratios (SNRs) between \(-6\) and \(3\) dB. ### _Libri2Mix_ Libri2Mix is a simulated 2-speaker mixture corpus derived from the LibriSpeech and WHAM corpora [22]. Speech samples come from the LibriSpeech corpus [23] and noise samples come from the WHAM corpus. Instead of SSRs, LibriMix uses loudness units relative to full scale (LUFS) measured in dB to set the loudness of speakers and noise in the mixtures. Speakers have a loudness between \(-33\) and \(-25\) LUFS and noise has a loudness between \(-38\) and \(-30\) LUFS. For training, the _train-100_ dataset was chosen as it has a very similar TSL distribution to the alternate _train-360_ dataset but with fewer examples, meaning shorter training times. ### _Signal Length Distributions_ The distribution, density estimation (DE) and the mean and standard deviation of the mixture signal lengths in the WHAMR train _tr_ and test _tt_ sets can be seen in Figure 2. WSJ0-2Mix and WHAMR have identical signal length distributions as WHAMR is derived from the former. These distributions are shown in the left panel. The train and test sets in WHAMR have similar distributions of signal length, with mean values within 0.3s of one another and standard deviations within 0.1s of one another. This contrasts with the distributions of the Libri2Mix dataset [22], also shown in Figure 2, where the _train-100_ and _test_ sets have a difference in mean value of \(6.2\)s and a difference in standard deviation of \(1.79\)s. Figure 1: Architecture of the SepFormer and Conv-TasNet models, exemplary for \(C=2\) speakers. The \(\odot\) symbol denotes the Hadamard product.
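To make the signal model of Eq. (1) concrete, the following is a minimal NumPy sketch of how a noisy reverberant mixture could be simulated. The function name, the SNR parameterisation and the truncation choices are illustrative assumptions, not the exact recipe used to build the corpora above.

```python
import numpy as np

def simulate_mixture(sources, rirs, noise, snr_db=0.0):
    """Sketch of Eq. (1): x[i] = sum_c s_c[i] * h_c[i] + nu[i].

    sources: list of C 1-D speech arrays; rirs: list of C 1-D room
    impulse responses; noise: 1-D noise array (at least as long as
    the shortest source).
    """
    # Convolve each source with its RIR and truncate to the source length.
    reverberant = [np.convolve(s, h)[: len(s)] for s, h in zip(sources, rirs)]
    L_x = min(len(r) for r in reverberant)  # 'min'-style truncation
    mix = np.sum([r[:L_x] for r in reverberant], axis=0)

    # Scale the noise so the speech-to-noise ratio equals snr_db.
    nu = noise[:L_x]
    gain = np.sqrt(np.sum(mix**2) / (10 ** (snr_db / 10) * np.sum(nu**2) + 1e-12))
    return mix + gain * nu
```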
## V Training Signal Length Analysis Evaluations of varying the TSL limit are presented in this section. For all evaluations, the improvement in SISDR over the input mixture signal, denoted by \(\Delta\) SISDR, is used as the evaluation metric. SISDR measures the energy of distortions in the estimated speech signals and is one of the most common metrics used in recent monaural speech separation literature [9, 10, 11]. ### _Initial TSL Limit Evaluations_ As a first experiment, twelve SepFormer models are trained and evaluated on WSJ0-2Mix, WHAMR and Libri2Mix, each with a different TSL limit. Twelve logarithmically spaced signal limits \(T_{\mathrm{lim}}\) are selected between \(0.5\)s and \(10\)s: \[T_{\mathrm{lim}}\in\{0.5,0.66,0.86,1.13,1.49,1.95,2.56,3.36,4.42,5.8,7.62,10\}\text{s}. \tag{2}\] The notation \(L_{\mathrm{lim}}\) is used for the respective discrete sample index, i.e. \(L_{\mathrm{lim}}=T_{\mathrm{lim}}f_{\mathrm{s}}\) for sampling rate \(f_{\mathrm{s}}\). When cutting the training signal lengths such that \(L_{x}\leq L_{\mathrm{lim}}\), the starting sample index of the signal is randomly selected from the uniform distribution \(\mathcal{U}\left(0,1+\max\left(0,L_{x}-L_{\mathrm{lim}}\right)\right)\). Performance for SepFormer models trained and evaluated on all three datasets is compared in Figure 3. For the WHAMR corpus, an increase in overall \(\Delta\) SISDR performance from the \(0.5\)s to \(1.95\)s limit can be observed. The optimal TSL is at \(3.36\)s. Between \(3.36\)s and \(10\)s, performance decreases again by \(1.4\)dB. This may seem surprising as the general convention with training DNNs is that more data normally results in improved overall performance. A similar trend is observed for WSJ0-2Mix, where there is a notable increase between \(0.5\)s and \(3.36\)s and then a drop in performance of \(0.8\)dB between \(4.42\)s and \(10\)s. For Libri2Mix, the performance saturates before a TSL limit of \(4.42\)s. There is no drop in performance as the TSL limit approaches \(10\)s, which is likely due to the Libri2Mix training set having a more uniform distribution below signal lengths of \(10\)s than the WHAMR or WSJ0-2Mix datasets, cf. Figure 2. The results for the WHAMR evaluation set are separated into quartiles of mixture signal length for the following experiment. \(\Delta\) SISDR results for each quartile are shown in Figure 4. Comparing Q1 to Q4 shows that, with a sufficiently large TSL limit (\(\geq 1.95\)s), the best separation performance in SISDR is found on the longest signal lengths, regardless of the TSL limit. A loss in SISDR performance is still observed from \(3.36\)s to \(10\)s regardless of which quartile is evaluated. ### _Training Time Evaluation_ The average training epoch duration (ED) for the SepFormer model on the WHAMR and Libri2Mix training sets is shown in Figure 5. Note the ED for WSJ0-2Mix is omitted for brevity but is similar to WHAMR due to having the same TSL distribution (cf. Figure 2). All models were trained on the same hardware to control for any impact hardware has on speed. The average EDs for the WHAMR dataset have a sigmoidal shape due to the majority of the signal lengths being concentrated around the mean signal length of the training set (\(5.6\)s, cf. Figure 2). Libri2Mix has a more linear relationship between TSL limit and ED due to the more uniform shape of the signal length distribution below \(10\)s in the _train-100_ set, cf. Figure 2.
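For clarity, the random start-index sampling of Section V-A can be written in a few lines. This is a sketch with an assumed interface rather than the exact training code.

```python
import numpy as np

def crop_to_tsl_limit(mixture, sources, L_lim, rng=None):
    """Crop a mixture and its reference sources to at most L_lim samples,
    drawing the start index from U(0, 1 + max(0, L_x - L_lim))."""
    rng = np.random.default_rng() if rng is None else rng
    L_x = len(mixture)
    start = int(rng.integers(0, 1 + max(0, L_x - L_lim)))
    end = min(L_x, start + L_lim)
    # The same crop is applied to the targets so the SISDR loss stays aligned.
    return mixture[start:end], [s[start:end] for s in sources]
```

With a fixed start sample (Section V-C below), `start` would simply be replaced by a constant such as 1999.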
Reducing the TSL limit has more benefit in terms of ED for the Libri2Mix dataset when iterating over all training examples.
Figure 2: Distributions of mixture signal lengths in WSJ0-2Mix/WHAMR (left) and Libri2Mix (right) for both train (top) and test (bottom) sets. Density estimation (DE) is shown by solid green lines, mean values are indicated by dashed red lines and standard deviation values by dash-dotted blue lines.
Figure 3: SepFormer results for varying the TSL limit for the anechoic WSJ0-2Mix (top), Libri2Mix (middle) and WHAMR (bottom) test sets.
Figure 4: Training signal length (TSL) analysis of the \(1\)st to \(4\)th signal length quartiles in the WHAMR evaluation set.
### _Fixed vs. Random Start Index_ In Section V-A, the start index of each shortened signal was randomly sampled from a uniform distribution. In this section, this is compared to using a fixed start sample. A start sample of 1999 (\(=0.25\)s at 8kHz) was used for signals where the original mixture signal length was larger than \(L_{\mathrm{lim}}\); otherwise the entire signal length was used. The motivation for this was that many training examples contain silence at the beginning of clips. It was considered desirable to omit as much silence as possible to make for a fairer comparison with the randomly sampled clips, which are assumed to have a lower likelihood of beginning with silence. Results in Figure 6 confirm that the loss in performance from a TSL limit of \(3.36\)s to \(10\)s with WHAMR is due to randomly sampling the start index. The performance saturates at a TSL limit of \(5.8\)s when using a fixed start index. This is similar to the performance saturation point of Libri2Mix in Figure 3, demonstrating that the performance drop at higher TSL limits seen before on WHAMR (cf. Figure 3) is related to both a non-uniform TSL distribution and the random sampling used. ### _Transformer vs. Convolutional Model_ Results comparing the Conv-TasNet model (cf. Section III-B) to the SepFormer model are shown in Figure 7. The loss in performance above \(3.36\)s is not observed for the Conv-TasNet model. All SISDR results above \(T_{\mathrm{lim}}=1.95\)s are within \(0.5\)dB of each other, suggesting Conv-TasNet is more invariant to the TSL limit if the limit is sufficiently large. This is possibly due to the \(1.53\)s receptive field of the Conv-TasNet models being smaller than these particular TSL limits [24]. ## VI Signal Splitting and Dynamic Mixing Next, two sampling strategies are evaluated to investigate (i) whether the performance gained by TSL limits with random sampling on shorter sequences still holds when the same quantity of audio data in terms of length in seconds is used and (ii) whether using TSL limits still results in performance gains with DM, i.e. simulating new speech mixtures for each epoch [15]. ### _Signal Splitting_ A signal splitting strategy was designed such that a batch of inputs \(\mathbf{X}\in\mathbb{R}^{M\times L_{x}}\) was reshaped to \(\mathbf{X}^{\prime}\in\mathbb{R}^{MD\times\frac{L_{x}}{D}}\) for batch size \(M\). Signal length \(L_{x}\) is still limited such that \(L_{x}\leq L_{\mathrm{lim}}\). The motivation of this method is to evaluate the importance of training on the entire sequence length compared to the raw quantity of data used in seconds. Computational complexity in training is also reduced. TSL limits \(T_{\mathrm{lim}}\in[4.42,10]\)s are analysed for \(D=2\); a sketch of this reshape is given after this subsection's results. Figure 8 shows that \(D=2\) improves performance for shorter TSL limits (\(T_{\mathrm{lim}}\leq 5.8\)s) compared to \(D=1\) (the original shape).
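The reshape at the core of the splitting strategy above can be sketched as follows; the helper name and the remainder-truncation choice are assumptions for illustration.

```python
import torch

def split_batch(x: torch.Tensor, D: int) -> torch.Tensor:
    """Reshape a batch X in R^{M x L_x} to X' in R^{MD x L_x/D}."""
    M, L_x = x.shape
    L_x = (L_x // D) * D          # drop any remainder so D divides the length
    # Row-major reshape: each signal is cut into D consecutive segments.
    return x[:, :L_x].reshape(M * D, L_x // D)

# e.g. with T_lim = 4.42s at 8 kHz (35360 samples):
# split_batch(torch.randn(2, 35360), 2).shape -> torch.Size([4, 17680])
```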
However, for \(T_{\mathrm{lim}}\geq 7.62\)s the performance is similar to \(D=1\). As in Figure 3, this is likely due to the TSL distribution of WHAMR. ### _Dynamic Mixing_ DM was proposed to improve the performance of various speech separation models [15]. DM often results in a \(1.0\) to \(1.5\)dB SISDR performance improvement, dependent upon the model and dataset [15, 25]. The random start index sampling used in Section V is similar to DM in that it provides the model with unique training examples each epoch, but without simulating new mixtures. In this section, using DM together with TSL limits is compared against using TSL limits alone to see if further performance gains can be attained using both approaches. The DM results for the WHAMR corpus are shown in Figure 9. It can be seen that with DM the drop in performance (at \(7.62\)s) is smaller than without DM. The best-performing model (\(T_{\mathrm{lim}}=4.42\)s) is compared to the SepFormer model with no TSL limit in Table I. The batch size reported for the model with no TSL limit is the largest it was found possible to train on a \(32\)GB Nvidia V100 GPU. It can be seen that the TSL-limited model is able to match its performance with an average ED reduction of \(44\)%, highlighting the benefit of this approach.
Figure 5: Comparison of average epoch duration (in mins) for the SepFormer model on the Libri2Mix and WHAMR training sets.
Figure 6: Comparison of TSL variation for the SepFormer model trained and evaluated on the WHAMR datasets on a subset of TSL limits in the range \([1.95,7.62]\)s.
Figure 7: Comparison of SepFormer and Conv-TasNet across TSL limits \(T_{\mathrm{lim}}\in[4.42,7.62]\) using the WHAMR corpus.
Figure 8: Comparison of split signal and batch reshape training \(D=2\) against full signal training \(D=1\) for the SepFormer model.
## VII Conclusion In this paper it was shown that TSL limits can affect the overall performance of speech separation models in a number of ways. For WSJ0-derived speech separation benchmarks, i.e. WSJ0-2Mix and WHAMR, it is optimal to use shortened training examples randomly sampled from the original examples due to the signal length distribution of these corpora. For the Libri2Mix dataset, the same method led to shorter training times with no notable loss in performance. The SepFormer model was compared to the Conv-TasNet model and it was shown that the Conv-TasNet performance varies less with the TSL limit. Using dynamic mixing and TSL limits with random sampling was shown to be able to match the performance of the SepFormer model trained with DM using full sequence lengths on WHAMR, with a 44% reduction in training time. With some previous literature opting to limit TSLs [11, 14] and others not [9, 10] on the same benchmarks, the results in this paper suggest that this is not a fair comparison and that TSL limiting is an important factor to account for when analysing results, particularly for the WSJ0-2Mix and WHAMR benchmarks.
2303.11977
Deep trip generation with graph neural networks for bike sharing system expansion
Bike sharing is emerging globally as an active, convenient, and sustainable mode of transportation. To plan successful bike-sharing systems (BSSs), many cities start from a small-scale pilot and gradually expand the system to cover more areas. For station-based BSSs, this means planning new stations based on existing ones over time, which requires prediction of the number of trips generated by these new stations across the whole system. Previous studies typically rely on relatively simple regression or machine learning models, which are limited in capturing complex spatial relationships. Despite the growing literature in deep learning methods for travel demand prediction, they are mostly developed for short-term prediction based on time series data, assuming no structural changes to the system. In this study, we focus on the trip generation problem for BSS expansion, and propose a graph neural network (GNN) approach to predicting the station-level demand based on multi-source urban built environment data. Specifically, it constructs multiple localized graphs centered on each target station and uses attention mechanisms to learn the correlation weights between stations. We further illustrate that the proposed approach can be regarded as a generalized spatial regression model, indicating the commonalities between spatial regression and GNNs. The model is evaluated based on realistic experiments using multi-year BSS data from New York City, and the results validate the superior performance of our approach compared to existing methods. We also demonstrate the interpretability of the model for uncovering the effects of built environment features and spatial interactions between stations, which can provide strategic guidance for BSS station location selection and capacity planning.
Yuebing Liang, Fangyi Ding, Guan Huang, Zhan Zhao
2023-03-20T16:43:41Z
http://arxiv.org/abs/2303.11977v1
# Deep Trip Generation with Graph Neural Networks for Bike Sharing System Expansion ###### Abstract Bike sharing is emerging globally as an active, convenient, and sustainable mode of transportation. To plan successful bike-sharing systems (BSSs), many cities start from a small-scale pilot and gradually expand the system to cover more areas. For station-based BSSs, this means planning new stations based on existing ones over time, which requires prediction of the number of trips generated by these new stations across the whole system. Previous studies typically rely on relatively simple regression or machine learning models, which are limited in capturing complex spatial relationships. Despite the growing literature in deep learning methods for travel demand prediction, they are mostly developed for short-term prediction based on time series data, assuming no structural changes to the system. In this study, we focus on the trip generation problem for BSS expansion, and propose a graph neural network (GNN) approach to predicting the station-level demand based on multi-source urban built environment data. Specifically, it constructs multiple localized graphs centered on each target station and uses attention mechanisms to learn the correlation weights between stations. We further illustrate that the proposed approach can be regarded as a generalized spatial regression model, indicating the commonalities between spatial regression and GNNs. The model is evaluated based on realistic experiments using multi-year BSS data from New York City, and the results validate the superior performance of our approach compared to existing methods. We also demonstrate the interpretability of the model for uncovering the effects of built environment features and spatial interactions between stations, which can provide strategic guidance for BSS station location selection and capacity planning. keywords: Demand prediction, Bike sharing, System expansion, Graph neural networks, Spatial regression ## 1 Introduction Bike sharing is an emerging mode of transportation that is growing rapidly in many metropolitan areas around the world. It has proven to benefit our cities and societies in a number of ways, including reducing traffic congestion, enhancing inter-modal connections, alleviating air pollution and promoting healthier lifestyles (Shaheen et al., 2010). Due to these positive effects of bike sharing, many cities have invested in the deployment of bike sharing systems (BSSs) for urban residents, usually starting from a smaller scale and gradually expanding over the years based on the demand potential. For station-based BSSs, this means building new stations based on existing ones. For instance, when New York City (NYC) first launched its BSS (called Citi Bike) in 2013, there were 329 stations in the city center, while by the end of 2019 the station network had expanded to the outskirts, reaching 882 stations. When planning for station-based BSS expansion, the prediction of potential demand for newly added stations is paramount for city planners and bike sharing service providers to make strategic decisions. Particularly, the knowledge of the expected number of trips originating or destined for each station can be used to support important planning decisions regarding when and where a new station should be built, how much capacity the station should have and how many bikes need to be allocated (Liu et al., 2017).
This is essentially the goal of trip generation, as the first step in the widely used four-step travel demand forecasting process, but specifically for bike sharing (Noland et al., 2016). In this study, we focus on the problem of trip generation for station-based bike sharing system expansion (TG-BSSE). Due to the worldwide proliferation of bike sharing, increasing attention has been paid to bike sharing demand prediction. Traditionally, researchers use statistical regression models to capture the relationship between bike demand and surrounding geographic and demographic characteristics (Buck and Buehler, 2012; Bachand-Marleau et al., 2012). To consider spatial dependencies between stations, spatial regression has been applied, whose main idea is to encode spatial relationships as additional terms within the regression framework (Faghih-Imani and Eluru, 2016; Zhang et al., 2017). Although these regression methods have the clear advantage of good interpretability, they may be unable to accurately capture the real structure of demand patterns due to oversimplified (e.g., linear) assumptions. While improved performance has been achieved by recent studies using tree-based ensemble learning models for TG-BSSE (Liu et al., 2017; Kou and Cai, 2021; Guidon et al., 2020), these methods may not be flexible or expressive enough to recover the complexity of spatial interactions between BSS stations. With rapid advances in machine learning and AI, deep neural networks (DNNs) have been applied to BSS demand prediction in recent years because of their capability to extract relationships in arbitrary functional forms. However, existing deep learning methods are mostly developed for short-term prediction in a mature and stable BSS, using historical demand sequences as input to predict demand in the near future (i.e., at most 24 hours ahead) (Xu et al., 2018; Chai et al., 2018; Li et al., 2019; Liang et al., 2022). They generally assume no structural changes to the system, rely on stationary demand data, and thus are not applicable to TG-BSSE. Though limited, several recent research works have applied DNNs for trip generation at new transportation sites by aggregating historical demand patterns of nearby existing sites (Gong et al., 2020; Zhou et al., 2021). Another group of studies focused on dynamic demand prediction for a continuously evolving system (Luo et al., 2019; He and Shin, 2020), which is regarded as a spatiotemporal prediction problem with historical demand sequences of existing stations as model input. Despite these recent attempts, there still exist several important research gaps to be addressed: * First, existing deep learning models typically rely on sequential dependencies in the temporal dimension. However, for BSS expansion, the addition of new stations would likely disrupt the demand patterns for the entire system. In addition, considering the long planning time horizon of system expansion, the demand patterns can also change due to various exogenous factors (e.g., the opening of a new subway station). Therefore, models dependent on past sequential/temporal dependencies may not be suitable for BSS expansion. * Second, while changes in the BSS network are generally less costly and more frequent compared to other mass transportation systems (e.g., metro), they are still rare events, occurring once every few months or years and often in batches.
As a result, only a very limited number of network configurations can be observed in the historical data, leading to data sparsity issues and making it difficult to effectively learn spatial dependencies for trip generation. * Third, the experiment design of most existing studies cannot validate their model effectiveness in real-world expansion scenarios, as their experiments are based on simulation data with short time intervals (i.e., from 30 minutes to 1 day). In practice, the planning and implementation of BSS expansion can take months or even years. Furthermore, the effect of new stations on the whole system is highly complex and can hardly be approximated with relatively simple simulations. * Fourth, although previous research has demonstrated the superior prediction performance of DNNs for demand prediction tasks (mostly in the short term), there is a lack of discussion regarding why the model makes such predictions. In the case of TG-BSSE, interpretability is crucial for gaining a deeper understanding of the determinants of BSS demand for a specific station, which can provide valuable policy implications for future BSS network design. To address these research gaps, this study tackles the TG-BSSE problem by exploiting spatial dependencies and utilizing multi-source data to capture the effect of the urban built environment, including POIs, road networks, public transit facilities and socio-demographic information. Specifically, we propose a spatially-dependent multi-graph attention network (Spatial-MGAT) approach to incorporate various built environment features as well as heterogeneous spatial dependencies between stations. For each BSS station, the proposed model can effectively leverage its spatial dependencies on other stations that are either geographically close by or share similar built environment characteristics. The relationship between our proposed graph neural network (GNN) approach and the classic spatial regression model is further discussed. Experiments are conducted on a real-world multi-year BSS expansion dataset from New York City, and the results validate the effectiveness of our approach. Its interpretability is also demonstrated using explainable AI techniques. The main contributions of this paper are summarized as follows: * This study makes one of the first attempts at using deep learning techniques to enhance trip generation for station-based BSS expansion based on multi-source urban built environment data. The proposed model focuses on learning spatial dependencies instead of temporal/sequential dependencies, making it more generalizable for different network configurations. Therefore, it can be used as a planning tool to provide strategic guidance for BSS network design and update. * To capture spatial interactions across the network, we construct two localized graphs centered on each station based on geographical proximity and urban environment similarity respectively, and use attention mechanisms to adaptively learn correlation weights between connected stations. The use of localized graphs makes it possible to use each station as the analysis unit, which mitigates the data sparsity issue. * We demonstrate that our proposed approach can be regarded as a generalized spatial regression model with nonlinear activation functions, heterogeneous spatial dependencies and adaptive spatial weights learned from data.
This allows us to conceptually link spatial regression with GNNs, and provide another example of enhancing classic econometric models with state-of-the-art deep learning techniques. * To validate the effectiveness of our proposed approach, the Citi Bike system in NYC is used as a case study. Extensive experiments are conducted based on multi-year data to approximate real-world BSS expansion scenarios. The results validate the superior performance of the proposed model against existing methods for both newly planned and existing stations. Further analysis demonstrates the ability of our approach to explain the determinants of BSS demand and BSS station interactions. ## 2 Literature Review In this section, we first review existing works related to trip generation for BSS expansion, and then present a short summary of deep learning approaches for BSS demand prediction. In addition, we review recent research that uses deep neural networks to enhance traditional theory-based models in transportation research, which will be relevant to our methodology. ### Trip Generation for Bike Sharing System Expansion Early investigation regarding the impact of the built environment on bike sharing trip generation was mainly based on statistical regression models. Linear regression, one of the simplest forms of regression models, has been commonly used to discover the determinants of BSS usage (Bachand-Marleau et al., 2012; Rixey, 2013). Later research employed multi-level mixed regression models to capture dependencies between repeated observations from the same station (Faghih-Imani et al., 2014; El-Assi et al., 2017). To incorporate spatial dependencies between stations, Faghih-Imani and Eluru (2016) and Zhang et al. (2017) leveraged spatial regression models and achieved better model fit. Thanks to the good interpretability of regression models, these studies have identified a large group of factors that are strongly associated with BSS station demand, including land use (e.g., CBD, restaurants, commercial enterprises) (Faghih-Imani et al., 2014), transportation facilities (e.g., rail stations, nearby bicycle lanes) (Noland et al., 2016), socio-demographic features (e.g., job and population density, race, income) (Bachand-Marleau et al., 2012) and BSS network design (e.g., proximity to other BSS stations, distance to the center of the BSS) (Rixey, 2013). However, these methods perform relatively poorly for actual demand forecasting, largely because the assumption of linear relationships between BSS usage and input features limits their ability to capture potentially more complex patterns underlying the data. With their rapid advancement in recent years, machine learning methods have also been considered as a solution to TG-BSSE. Guidon et al. (2020) employed a tree-based ensemble method, namely Random Forest, to predict the demand for expanding a BSS to a new city. Kou and Cai (2021) used XGBoost to incorporate spatial network information for improved model performance. A hierarchical demand prediction model was developed by Liu et al. (2017), which first clusters stations with similar POI characteristics and close geographical distances into functional zones, and then predicts BSS demands from functional zone level to station level using Random Forest and Ridge Regression. A hybrid approach was also adopted by Hyland et al. (2018), combining clustering techniques with regression modeling.
However, these models rely only on built environment features of the target BSS station itself, and are limited in leveraging the spatial dependencies between stations. DNNs have proven to be a powerful tool for capturing complex nonlinear relationships hidden in human mobility data. However, most existing DNN methods focus on short-term demand prediction for stable systems, as we will discuss in Section 2.2. Only recently did several studies introduce DNNs to human mobility prediction for transportation planning applications. Gong et al. (2020) proposed a multi-view localized correlation learning method, whose core idea is to learn the passenger flow correlations between the target areas and their localized areas with adaptive weights. Zhou et al. (2021) estimated the potential crowd flow at a newly planned site by leveraging the historical demand patterns of nearby multi-modal sites in a collective way. Another group of relevant studies considered demand prediction of time-varying transportation networks. Luo et al. (2019) proposed a graph sequence learning approach for a rapidly expanding electric vehicle system. He and Shin (2020) predicted the demand flow of an evolving dockless e-scooter system using a spatiotemporal graph capsule neural network. Their problem formulation, which is essentially a time-series prediction problem with historical demand series of existing transportation sites as model input, is quite different from ours. In real-world system expansion scenarios with long planning time horizons and potentially major network changes, such problem formulation can easily suffer from demand distribution discrepancy and data sparsity problems. Existing works often sidestepped these challenges by considering short time intervals from 30 minutes to 1 day or masking stations from one network snapshot to simulate possible system expansion, which cannot reflect the planning and deployment of real-world transportation system expansion over time. ### Deep Learning Methods for Bike Demand Prediction In recent years, extensive studies have demonstrated the power of deep learning models for short-term prediction of BSS demand. Early research used recurrent neural networks (RNNs) to capture temporal dependencies in historical demand series. A long short-term memory (LSTM) neural network was used in Xu et al. (2018) considering exogenous factors (e.g., weather). Zhang et al. (2018) leveraged historical demand of public transit to enhance the prediction performance of BSS demand with LSTM. To incorporate spatial information, Zhou et al. (2018) and Qiao et al. (2021) combined RNNs with convolutional neural networks (CNNs), which, however, can only provide demand prediction at the grid cell level and are not suitable for station-based BSSs. To capture spatial dependencies across graph-structured data, GNNs have recently been employed for BSS demand prediction at the station level. Lin et al. (2018) proposed a graph learning framework with GCNs and RNNs to capture spatial and temporal dependencies among stations respectively. Chai et al. (2018) considered heterogeneous spatial relationships across stations using a graph convolutional network (GCN) model. A multi-relational GNN was developed by Liang et al. (2022) to leverage spatial information from multi-modal data. However, all these methods are conditioned on large-scale historical data for model training and are not applicable to situations in which the transportation system is expanding over time.
Though limited, several recent works have shed light on spatiotemporal prediction in a data-scarce transportation system by transferring knowledge from data-rich transportation systems. For example, Wang et al. (2018) proposed a cross-city transfer learning algorithm for demand prediction by linking similar regions in source and target cities. A meta-learning approach was developed by Yao et al. (2019) to transfer the model learned in multiple data-sufficient cities to cities with only a few days of historical transaction records. A domain-adversarial graph learning technique was introduced in Tang et al. (2022) for short-term traffic prediction across cities. Although these studies demonstrate the potential ability of deep learning models to generalize to new scenarios, they still focus on short-term demand prediction relying on sequential/temporal dependencies, rather than long-term demand forecasting to account for potential system changes and support transportation planning. For the latter, the spatial dependencies on the underlying built environment and across different stations are more fundamental and generalizable to TG-BSSE, and thus are the focus of this study. ### Enhancing Theory-based Models with Deep Neural Networks In the transportation field, data-driven and theory-based models are usually regarded as disparate. However, these two types of methods can be complementary: data-driven models usually demonstrate better prediction performance in data-rich environments, while theory-based methods (e.g., gravity model) are more advantageous in terms of generalizability and interpretability. Leveraging their complementary natures, several recent studies have explored the potential to combine DNNs and theory-based models. A theory-based residual neural network was introduced in Wang et al. (2021) for choice analysis, which links DNNs with discrete choice models based on their shared utility interpretation. Simini et al. (2021) interpreted the classic gravity model as a shallow neural network with restricted variables as input, and proposed Deep Gravity to generate flow probabilities between origin-destination (OD) pairs. Zhu et al. (2022) provided a comparative analysis of GCNs and linear spatial regression models, and demonstrated that the former can achieve better performance in spatial imputation tasks. To further bridge the methodological gap between spatial regression and GNNs, this study will demonstrate that a linear spatial regression model can be regarded as a shallow neural network and our proposed GNN model is essentially a generalized spatial regression model. ## 3 Methodology In this section, a few important definitions and the problem formulation of our research are first introduced in Section 3.1. Next, we present a spatial GNN approach (Spatial-MGAT) to predict potential demand for BSS expansion in Section 3.2. In Section 3.3, we further elaborate on the relationship between our proposed GNN approach and classic spatial regression models. ### Problem Statement _Definition 1 (BSS Station Demand):_ For each BSS station, we aim to predict its average number of daily outflow (i.e., departure) and inflow (i.e., arrival) trips in different months. Such demand information is critical for strategic system planning decisions such as the choice of the station site and capacity, for which more detailed temporal (e.g., hourly) demand patterns are less important (Kou and Cai, 2021).
Since BSS demand varies greatly by seasons, with higher demand in summer months and lower demand in the winter (more details in Section 4.1), we choose to make monthly predictions to reflect such seasonal variability. The average daily bike outflow \(y_{i,m}^{out}\) and inflow \(y_{i,m}^{in}\) at station \(i\) in a month \(m\) are computed as: \[\begin{array}{l}y_{i,m}^{out}=c_{i,m}^{out}/n_{i,m},\\ y_{i,m}^{in}=c_{i,m}^{in}/n_{i,m},\end{array} \tag{1}\] where \(c_{i,m}^{out}\) and \(c_{i,m}^{in}\) are the number of departure and arrival trips at station \(i\) in month \(m\) respectively, \(n_{i,m}\) is the number of days that station \(i\) is active (i.e., with at least one departure or arrival trip) in month \(m\). _Definition 2 (Localized Station Network):_ For a station \(i\), to predict its demand in month \(m\), we define a localized graph centered on station \(i\) to capture its spatial dependencies with other related BSS stations, denoted as \(G_{i,m}=(V_{i,m},A_{i,m})\), where \(V_{i,m}\) is a set of nodes (i.e., BSS stations) connected with node \(i\) in month \(m\), and \(A_{i,m}\in\mathbb{R}^{|V_{i,m}|\times|V_{i,m}|}\) is a weighted adjacency matrix representing the dependencies between each pair of nodes in \(V_{i,m}\). Note that both \(V_{i,m}\) and \(A_{i,m}\) can change over time due to BSS expansion. _Problem (Trip Generation for BSS Expansion):_ This research aims to predict the number of trips originating and destined for each BSS station in an expanding system (with new stations added over time). Specifically, given the built environment features of a station \(i\) in month \(m\), denoted as \(x_{i,m}\), its localized graph \(G_{i,m}\), as well as the built environment features of other related stations in \(G_{i,m}\), denoted as \(XG_{i,m}=\{x_{j,m},\forall j\in V_{i,m},j\neq i\}\), we learn a mapping function \(F(*)\) to predict the inflow and outflow demand of station \(i\) in month \(m\), represented as \(y_{i,m}=[y_{i,m}^{out};y_{i,m}^{in}],y_{i,m}\in\mathbb{R}^{2}\): \[y_{i,m}=F(x_{i,m},G_{i,m},XG_{i,m}). \tag{2}\] ### Model Architecture In this section, we introduce our modeling framework for TG-BSSE based on multi-source urban data. An overview of the framework is shown in Fig. 1; it consists of four parts. First, we extract various geographic and demographic features to depict the built environment for BSS stations. Second, to capture spatial interactions between stations, we construct two localized graphs based on geographical proximity and built environment similarity respectively, and summarize the features of connected stations into spatial interaction feature vectors. Third, additional features are included to consider temporal information such as the month and station age. Finally, the aforementioned built environment, spatial interaction and temporal features are fed into an output layer to generate the number of inflow and outflow trips for the target station. #### 3.2.1 Built Environment Feature Extraction Based on the existing literature, we extract a diverse set of built environment features for each BSS station, including the following categories: * POI density (10 features): the number of POIs within a radius of 500m for each possible category, i.e., residential, educational, cultural, recreational, commercial, religious, transportation, government, health, and social services.
* Socio-demographics (11 features): demographic features of the census tract for the station, including population and housing unit density, proportion of population in households, proportion of people under 18, average household size, total housing units, proportion of occupied housing units, and proportion of different races (i.e., Hispanic, White, Asian and Black). * Road network (16 features): the total number and length of different road networks by levels, including motorway, trunk, primary, secondary, tertiary, unclassified and residential, the length of bike lanes, as well as the number of junctions within a radius of 500m. * Transportation facilities (2 features): the distance to the nearest subway station and the number of subway stations within a radius of 500m. * BSS network design (4 features): the number of BSS stations within 0-500m, 500-1,000m and 1,000-5,000m travel distance respectively, and the average travel distance to other BSS stations.
Figure 1: The architecture of Spatial-MGAT
After getting the above features, each BSS station \(i\) is associated with a 43-dimensional vector \(x_{i}\) to describe its surrounding built environment. It is worth noting that \(x_{i}\) can change over time. For example, as the system network is evolving, the number of BSS stations in its neighborhood can dynamically change. For a detailed description of data sources of the aforementioned variables, please refer to Section 4.1. #### 3.2.2 Localized Graph Construction For trip generation given a target BSS station, it is important to consider not only the effect of its local built environment, but also its interactions with other stations. To capture spatial correlations among stations with close geographical distances or similar built environment, we encode two types of spatial dependencies for each station pair: _Geographical Proximity:_ According to the First Law of Geography, station pairs that are geographically adjacent are more likely to be strongly correlated than distant ones. We define a geographical proximity weight between each station pair \(i\) and \(j\) as: \[a_{ij}^{p}=\exp(-(\frac{d_{ij}}{\sigma_{d}})^{2}), \tag{3}\] where \(a_{ij}^{p}\) is the geographical proximity weight between stations \(i\) and \(j\), \(d_{ij}\) is the geographic distance between \(i\) and \(j\), and \(\sigma_{d}\) is the standard deviation of distances. Based on the geographical proximity weights computed above, we construct a localized graph for a target BSS station \(i\) consisting of its \(k\) nearest neighbors, denoted as \(G_{i}^{p}\). \(G_{i}^{p}\) can be updated as new stations are added to the system, since the \(k\)-nearest neighbors of the same station can dynamically change.
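A minimal sketch of this geographical-proximity graph construction (the Gaussian kernel of Eq. (3) followed by \(k\)-nearest-neighbor selection) is given below. Planar coordinates and the function name are simplifying assumptions made for illustration; in practice geodesic distances between stations would be used.

```python
import numpy as np

def geographic_graph(coords, k=5):
    """coords: (N, 2) array of station locations. Returns the weight
    matrix a^p of Eq. (3) and each station's k nearest neighbours."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sigma_d = d[np.triu_indices_from(d, k=1)].std()   # std of pairwise distances
    a = np.exp(-((d / sigma_d) ** 2))
    neighbours = np.argsort(d, axis=1)[:, 1 : k + 1]  # column 0 is the station itself
    return a, neighbours
```

The built environment similarity graph \(G_{i}^{b}\) defined next follows the same pattern, with \(d_{ij}\) replaced by the Euclidean distance between feature vectors.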
_Built Environment Similarity_: Previous research has shown that in addition to geographically nearby stations, incorporating features of stations with similar built environment or functionalities can also enhance the prediction performance (Zhou et al., 2021). We define a built environment similarity weight between each station pair to measure such "semantic" relationships: \[a_{ij}^{b}=\exp(-(\frac{dist(x_{i},x_{j})}{\sigma_{b}})^{2}), \tag{4}\] where \(a_{ij}^{b}\) is the weight of built environment similarity between stations \(i\) and \(j\), \(x_{i},x_{j}\) are the built environment feature vectors of stations \(i\) and \(j\) as defined in Section 3.2.1, \(dist(*)\) is a Euclidean distance function to compute the distance between \(x_{i}\) and \(x_{j}\), and \(\sigma_{b}\) is the standard deviation of all the vector distances. For a target station \(i\), the \(k\) BSS stations that have the most similar built environment features are selected, resulting in a localized graph denoted as \(G_{i}^{b}\). Note that \(G_{i}^{b}\) can also change over time due to newly added stations as well as the variation of \(x_{i}\) and \(x_{j}\). #### 3.2.3 Spatial Interaction Feature Learning Given the pre-defined localized graphs, we generate two spatial interaction feature vectors for a target station \(i\) based on \(G_{i}^{p}\) and \(G_{i}^{b}\), denoted as \(s_{i}^{p}\) and \(s_{i}^{b}\), respectively. The spatial interaction feature vector is computed as a weighted sum of the built environment features of its connected stations and the weight is learnt through an attention mechanism. Taking \(G_{i}^{p}\) as an example, we compute its corresponding spatial interaction feature vector through the following steps: _Step 1_: Encode the built environment features of station \(i\) as well as its connected stations in the localized graph into a latent vector using a shared linear layer: \[h_{j}=W_{h}x_{j}+b_{h},\forall j\in V_{i}^{p}, \tag{5}\] where \(h_{j}\in\mathbb{R}^{d_{h}}\) is a \(d_{h}\)-dimensional encoded vector for station \(j\), \(W_{h}\in\mathbb{R}^{43\times d_{h}}\) and \(b_{h}\in\mathbb{R}^{d_{h}}\) are the learned model parameters for feature encoding. _Step 2_: Compute the importance score of each connected station \(j\in V_{i}^{p},j\neq i\) to station \(i\) using a shared feed-forward network applied to every station pair: \[\begin{array}{l}z_{ij}=ReLU(W_{s,1}[h_{i};h_{j}]+b_{s,1}),\\ s_{ij}=LeakyReLU(W_{s,2}z_{ij}+b_{s,2}),\end{array} \tag{6}\] where \(s_{ij}\in\mathbb{R}\) is the importance score of station \(j\) to station \(i\), \(W_{s,1}\in\mathbb{R}^{2d_{h}\times d_{z}},W_{s,2}\in\mathbb{R}^{d_{z}\times 1},b_{s,1}\in\mathbb{R}^{d_{z}},b_{s,2}\in\mathbb{R}^{1}\) are parameters to be learned, \(d_{z}\) is the dimension of the hidden vector \(z_{ij}\). _Step 3_: The attention weight of connected stations to station \(i\) is then computed by normalizing the importance score using a softmax function: \[\epsilon_{ij}=\frac{\exp(s_{ij})}{\sum_{j\in V_{i}^{p},j\neq i}\exp(s_{ij})}, \tag{7}\] where \(\epsilon_{ij}\in\mathbb{R}\) is the attention weight of connected station \(j\) to target station \(i\). _Step 4_: The spatial interaction vector \(s_{i}^{p}\in\mathbb{R}^{d_{h}}\) is computed as a weighted sum of the features of connected stations: \[s_{i}^{p}=\sum_{j\in V_{i}^{p},j\neq i}\epsilon_{ij}h_{j}. \tag{8}\] Using the same method, we compute \(s_{i}^{b}\in\mathbb{R}^{d_{h}}\) based on \(G_{i}^{b}\). The spatial interaction vectors will be further used as input features for the prediction layer as introduced later.
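The four steps above (Eqs. (5)-(8)) amount to a small attention module. The following PyTorch sketch is illustrative rather than the exact implementation; the module name is assumed, while \(d_{h}=8\) and \(d_{z}=16\) follow the hyperparameter settings reported in Section 4.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Sketch of Eqs. (5)-(8): encode, score, normalise, aggregate."""

    def __init__(self, d_in=43, d_h=8, d_z=16):
        super().__init__()
        self.encode = nn.Linear(d_in, d_h)      # Eq. (5)
        self.score1 = nn.Linear(2 * d_h, d_z)   # Eq. (6), first layer
        self.score2 = nn.Linear(d_z, 1)         # Eq. (6), second layer

    def forward(self, x_i, x_nbrs):
        # x_i: (d_in,) target station; x_nbrs: (k, d_in) connected stations.
        h_i, h_j = self.encode(x_i), self.encode(x_nbrs)
        pair = torch.cat([h_i.expand_as(h_j), h_j], dim=-1)
        s = F.leaky_relu(self.score2(torch.relu(self.score1(pair)))).squeeze(-1)
        eps = torch.softmax(s, dim=0)            # Eq. (7)
        return (eps.unsqueeze(-1) * h_j).sum(0)  # Eq. (8): s_i in R^{d_h}

# Two such modules (one per localized graph) yield s_i^p and s_i^b.
```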
#### 3.2.4 Temporal Feature Learning While the proposed model focuses on learning spatial dependencies across stations, the effect of relevant temporal factors should not be overlooked. Specifically, we include two additional temporal features in our model, i.e., month and station age (i.e., the number of months since a target station opens). The month feature is originally a categorical variable and a simple way to encode it as model input is to use one-hot vectors, which, however, might not capture seasonal fluctuations across different months. Instead, we use embedding techniques to map each month to a representation vector. Specifically, a parameter matrix \(W_{m}\in\mathbb{R}^{12\times d_{m}}\) is learnt in our model, with each row representing each month as a \(d_{m}\)-dimensional vector. #### 3.2.5 Prediction Layer The prediction layer aims to generate the final trip prediction based on the aforementioned built environment, spatial interaction and temporal features. Specifically, for a target station \(i\) in month \(m\), the prediction layer takes the following form: \[\begin{array}{l}z_{o,1}=ReLU(W_{o,1}([x_{i,m};s_{i,m}^{p};s_{i,m}^{b};t_{i,m}])+b_{o,1}),\\ z_{o,2}=sigmoid(W_{o,2}z_{o,1}+b_{o,2}),\\ \hat{y}_{i,m}=W_{o,3}z_{o,2}+b_{o,3},\end{array} \tag{9}\] where \(\hat{y}_{i,m}\in\mathbb{R}^{2}\) is the predicted demand of station \(i\) in month \(m\), \(t_{i,m}\in\mathbb{R}^{d_{m}+1}\) is the temporal feature vector of station \(i\) in month \(m\), \(W_{o,1}\in\mathbb{R}^{(43+2d_{h}+d_{m}+1)\times d_{o}^{1}},W_{o,2}\in\mathbb{R }^{d_{o}^{1}\times d_{o}^{2}},W_{o,3}\in\mathbb{R}^{d_{o}^{2}\times 2}\) are parameter matrices and \(b_{o,1}\in\mathbb{R}^{d_{o}^{1}},b_{o,2}\in\mathbb{R}^{d_{o}^{2}},b_{o,3}\in \mathbb{R}^{2}\) are model bias terms, and \(d_{o}^{1},d_{o}^{2}\) are dimensions of hidden vectors \(z_{o,1}\) and \(z_{o,2}\), respectively. The model is trained to minimize the sum of squared errors between predicted and real demand values across BSS stations over time: \[L_{\theta}=\sum_{m=1}^{M}\sum_{i=1}^{N_{m}}(\hat{y}_{i,m}-y_{i,m})^{2}, \tag{10}\] where \(M\) is the number of months in the training data and \(N_{m}\) denotes the number of active stations in month \(m\). ### Discussion: Synergy between Spatial Regression and GNNs In the previous section, we introduced our model architecture as a GNN approach. In this section, we provide an alternative way to understand our model as a generalized version of a conventional spatial regression model named the spatial lag of X (SLX) (Elhorst and Halleck Vega, 2017). Specifically, we first introduce the original SLX model in Section 3.3.1 and then elaborate on how Spatial-MGAT extends SLX in Section 3.3.2. #### 3.3.1 SLX Model SLX is one of the first and most straightforward spatial regression models, which incorporates the average value of explanatory variables from surrounding locations as model input (Elhorst and Halleck Vega, 2017). Mathematically, it is given as: \[\begin{array}{l}\hat{y}_{i}=f([x_{i};s_{i}^{p}]),\\ s_{i}^{p}=\sum_{j\in V_{i}^{p},j\neq i}w_{ij}^{p}h_{j}^{p},\end{array} \tag{11}\] where \(\hat{y}_{i}\) and \(x_{i}\) are the dependent variable and explanatory variables of location \(i\), respectively, \(s_{i}^{p}\) is a spatial lag term that captures the average spatial information from neighborhood regions of location \(i\). \(V_{i}^{p}\) denotes a set of locations adjacent to \(i\), \(w_{ij}^{p}\) is a pre-defined adjacency weight between locations \(i\) and \(j\), and \(h_{j}^{p}\) is a subset of location \(j\)'s explanatory variables that are thought to be relevant to spatial dependencies.
\(f(*)\) is a function to be learned assuming linear relationships between input and dependent variables. In this study, the dependent variable is the number of trips generated for each BSS station in different months, and the explanatory variables include built environment features (see Section 3.2.1) and temporal information (see Section 3.2.4). Therefore, the SLX model for TG-BSSE can be expressed as: \[\begin{array}{l}\hat{y}_{i,m}=f([x_{i,m};s_{i,m}^{p};t_{i,m}]),\\ s_{i,m}^{p}=\sum_{j\in V_{i,m}^{p},j\neq i}w_{ij,m}^{p}h_{j,m}^{p},\end{array} \tag{12}\] where \(\hat{y}_{i,m}\) is the predicted demand of station \(i\) in month \(m\), \(x_{i,m}\) and \(s_{i,m}^{p}\) are the built environment features and spatial lag term respectively, \(t_{i,m}\) is a temporal feature vector with the month feature encoded as a one-hot vector. Since the spatial lag term is a pre-defined spatial transformation of the explanatory variables of nearby locations, the linear function \(f(*)\) can be easily estimated using ordinary least squares (OLS). Alternatively, the model may be regarded as a single-layer linear neural network without any activation function, which can be estimated by minimizing the squared loss defined in Eq. (10). This allows us to naturally extend the SLX model with GNNs. #### 3.3.2 Spatial-MGAT as a Generalized SLX Model As introduced in the previous section, SLX can be regarded as a linear neural network and naturally extended by adding hidden layers and nonlinear activation functions. Specifically, our proposed Spatial-MGAT can be regarded as a generalized SLX, with modifications in several aspects. First, to capture multiple types of spatial dependencies, we incorporate an additional spatial lag term \(s_{i}^{b}\) to capture the average spatial information from BSS stations with similar built environment. Mathematically, this can be expressed as: \[\begin{array}{l}\hat{y}_{i,m}=f([x_{i,m};s_{i,m}^{p};s_{i,m}^{b};t_{i,m}]), \\ s_{i,m}^{p}=\sum_{j\in V_{i,m}^{p},j\neq i}w_{ij,m}^{p}h_{j,m}^{p},\\ s_{i,m}^{b}=\sum_{j\in V_{i,m}^{b},j\neq i}w_{ij,m}^{b}h_{j,m}^{b},\end{array} \tag{13}\] where \(V_{i,m}^{b}\) denotes a set of BSS stations that have similar built environment features to station \(i\) in month \(m\). Second, instead of manually determining the subset of explanatory features from neighborhood stations, we use a linear transformation layer to learn the representation feature vectors of connected stations, denoted as \(h_{j,m}^{p}\) and \(h_{j,m}^{b}\) (see Eq. (5)). Third, rather than using pre-defined adjacency weights, the model learns the adjacency weights \(w_{ij,m}^{p}\) and \(w_{ij,m}^{b}\) with attention mechanisms (see Eq. (6) and Eq. (7)). Fourth, categorical variables, such as the month, can be represented as lower-dimensional embedding vectors, instead of one-hot vectors, to be learned simultaneously in model training. Finally, the prediction function \(f(*)\) is replaced with a feed-forward network as expressed in Eq. (9). As we will show in Section 4.3, with the aforementioned simple modifications, our proposed model can significantly outperform the original SLX model. This demonstrates the potential of using DNNs to enhance the performance of traditional econometric models.
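To make the correspondence concrete, the sketch below contrasts the two views in code. The input dimensions are illustrative assumptions (in the linear SLX the dimension of the spatial lag term depends on the chosen subset \(h_{j}^{p}\)); the point is only that SLX is the zero-hidden-layer special case.

```python
import torch.nn as nn

# Linear SLX (Section 3.3.1): one linear map from [x_i; s_i^p; t_i] to the
# two demand values; trained with the squared loss of Eq. (10), this is OLS.
d_x, d_lag, d_t = 43, 8, 13  # assumed sizes: features, lag term, one-hot month + age
slx = nn.Linear(d_x + d_lag + d_t, 2)

# Generalized form (Section 3.3.2): a second lag term for built environment
# similarity, attention-learned lag weights, and the nonlinear prediction
# layer of Eq. (9) with d_o^1 = 32 and d_o^2 = 16 in place of the linear map.
spatial_mgat_head = nn.Sequential(
    nn.Linear(d_x + 2 * d_lag + d_t, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.Sigmoid(),
    nn.Linear(16, 2),
)
```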
## 4 Results ### Data Description To validate the performance of our proposed model, we use the Citi Bike system in New York City (NYC) as a case study. The Citi Bike data1 provides the start and end station and time of each trip over time. Citi Bike was first launched in July 2013 and has undergone massive expansions in the years since. Our preliminary analysis shows that the usage pattern has become quite different since 2020 (likely due to the COVID-19 pandemic). To circumvent the impact of COVID-19, the data from 2013-07 to 2019-12 is used for empirical analysis. Prior to model evaluation, an overview of the data is presented to offer a better understanding of the system expansion process and the spatiotemporal demand distribution of BSS stations. Footnote 1: [https://ride.citibikenyc.com/system-data](https://ride.citibikenyc.com/system-data) Fig. 2 presents the expansion process of NYC BSS during our study period. The blue line represents the total number of BSS stations, and the red and grey bars indicate the number of newly opened and closed stations2. It can be clearly seen that from 2013-07 to 2019-12, there are 4 mass expansion periods, i.e., 2015-08, 2016-08, 2017-09, and 2019-09. Outside of these periods, the system is dynamically evolving with newly opened and closed stations simultaneously, while the total number of stations remains almost unchanged.
Figure 2: BSS expansion process in NYC over time
The spatial distribution of BSS stations before and after the 4 mass expansion periods is shown in Fig. 3. Blue dots represent unchanged BSS stations before and after expansion, while red and grey dots represent newly opened and closed bike stations after expansion. Before July 2015, BSS stations in NYC were only distributed in Downtown and Midtown Manhattan, as well as in the west part of Brooklyn (e.g., Downtown Brooklyn and Williamsburg). From 2015 to 2016, the system was extended to Upper Manhattan, and the north and southwest sides of Brooklyn. BSS stations in Manhattan were expanded further north in 2017, and new stations were built in the west part of Queens (e.g., Long Island City and Astoria) and the southeast side of Brooklyn. In 2019, BSS expansion in Manhattan came to a near halt, with new development concentrated in the east side of Brooklyn. In comparison, the removal of stations is almost negligible.
Figure 3: Spatial distribution of BSS stations before and after mass development periods
The temporal pattern of daily average outflow trips per station in different months is presented in Fig. 4. A clear seasonal pattern can be observed, with higher ridership in the summer and autumn and lower ridership in the winter and spring. This can be explained by the weather fluctuations, as low temperatures and higher chances of snow can deter people from cycling outdoors. It is worth noting that, while the total BSS demand grows over time (as a result of system expansion), the average trip generation per station does not change much. This is despite the fact that stations added in later years are typically in lower-density neighborhoods and generally have lower demand. The spatial distributions of BSS trips in several selected months are presented in Fig. 5. It shows a pattern of gradual decay from the downtown area of Manhattan to its proximity, suggesting that Downtown and Midtown Manhattan have the highest demand for bike sharing, while the demand in other areas is relatively lower. In addition to BSS trip data, several open-source supplementary datasets are used for built environment feature extraction: * POIs: The POI data is obtained from NYC Open Data3, which consists of 20,558 POI points from 13 facility categories. We use 10 of them as listed in Section 3.2.1.
Footnote 3: [https://opendata.cityofnewyork.us/](https://opendata.cityofnewyork.us/) * Socio-demographics: The data comes from NYC Population Factfinder4, which provides 2010 Decennial Census data for different census tracts in NYC. * Road Network: The road network data is downloaded from OpenStreetMap5, comprising 91,868 roads and 55,314 junctions in NYC. The downloaded data associates each road link with a road level and we consider 7 of them for feature construction as listed in Section 3.2.1. The bike lane data is obtained from NYC Open Data, which provides the spatial distribution and opening dates of different bike lanes in NYC. Footnote 5: [https://www.openstreetmap.org](https://www.openstreetmap.org) * Transportation facilities: We obtain the distribution of subway stations from NYC Open Data, with 218 subway stations in total.
Figure 4: Seasonal pattern of average BSS station demand
Figure 5: Spatial pattern of BSS station demand in NYC over time
### Experiment Design To reflect the realistic effect of system expansion over time, we use BSS trip data from 2013-07 to 2017-08 for model training and validation, and data from 2017-09 to 2019-12 for model testing. During 2013-07 to 2017-08, there are 21,827 station-month observations across 734 stations. We randomly select 80% for model training and the remaining 20% for model validation, resulting in 17,462 and 4,365 training and validation samples respectively. During the testing period, there are 1,012 stations in total with 21,808 station-month observations. Among the 1,012 stations, 645 stations exist in the training set with 16,446 station-month observations and 367 stations are newly added (i.e., unseen in the training and validation set) with 5,362 station-month observations. We use the monthly observations of both existing and newly added stations as the test set and evaluate our model performance on them separately. To facilitate model training, min-max normalization is applied to the demand data as well as the input built environment variables before feeding them into the model. For deep learning models, the number of training epochs \(E\) is set as 200 and we use early stopping of 10 epochs on the validation set to prevent overfitting. The models are trained using the Adam optimizer with a learning rate of 0.002, a batch size of 32 and L2 regularization with a weight decay equal to 1e-5. The hyperparameters of our proposed model are set as follows: the number of neighbors \(k\)=5, the dimension of spatial interaction feature vectors \(d_{h}\)=8, the hidden dimension for attention weight learning \(d_{z}\)=16, the dimension of month embedding vectors \(d_{m}\)=12, and the hidden dimensions of the prediction layer \(d_{o}^{1}\)=32, \(d_{o}^{2}\)=16. We repeat experiments for each model 10 times and report the average performance. The model performance is evaluated using 3 metrics computed on the test set: root mean square error (RMSE), mean absolute error (MAE) and the coefficient of determination (\(R^{2}\)).
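For reference, the three test-set metrics can be computed as in the short NumPy sketch below (the function name is illustrative).

```python
import numpy as np

def evaluate(y_true, y_pred):
    """RMSE, MAE and R^2 as used for model evaluation in Section 4.2."""
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return rmse, mae, r2
```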
* **Spatial Regression**(Faghih-Imani and Eluru, 2016): We use the original SLX model introduced in Section 3.3.1 as an example of spatial regression models, which incorporates spatial dependencies between geographically close stations by including built environment features from nearby BSS stations as additional model input.
* **XGBoost**(Kou and Cai, 2021): a tree-based ensemble machine learning method based on gradient boosting decision trees. We use the same input variables as those of linear regression in our implementation.
* **Function Zone**(Liu et al., 2017): a hierarchical model which first clusters stations into functional zones, and then predicts bike sharing trip generation from the functional zone level to the station level using Random Forest and Ridge Regression respectively. We replace Ridge Regression with XGBoost, which achieves better performance on our dataset.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multirow{2}{*}{Models} & \multicolumn{3}{c}{New stations} & \multicolumn{3}{c}{Existing stations} \\
& RMSE & MAE & \(R^{2}\) & RMSE & MAE & \(R^{2}\) \\
\hline
Linear Regression & 33.646 & 26.185 & 0.454 & 49.369 & 36.692 & 0.555 \\
Spatial Regression & 34.963 & 26.812 & 0.467 & 46.662 & 34.606 & 0.596 \\
XGBoost & 25.920 & 16.900 & 0.602 & 41.027 & 27.719 & 0.717 \\
Function Zone & 26.343 & 16.882 & 0.576 & 42.311 & 29.004 & 0.655 \\
Spatial-MGAT & _20.392_ & _12.599_ & _0.742_ & _27.860_ & _19.140_ & _0.852_ \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Performance comparison among regression and machine learning models

The results of different models are presented in Table 1. It is found that all models produce larger RMSE and MAE for existing stations than for new stations. This is reasonable, as existing stations are mainly distributed in downtown regions (see Figure 5) and are associated with higher demand. Meanwhile, the \(R^{2}\) for existing stations is higher than that for new stations across all methods, which can be explained by the fact that the models are trained with historical observations of existing stations. Compared with the baseline models, our approach achieves significantly superior performance regarding all evaluation metrics for both new and existing stations. This is likely because our method leverages deep learning techniques to capture the nonlinear relationship between BSS demand and input variables, and graph learning approaches to model complex spatial interactions across BSS stations. Among the baseline models, the poor performance of regression approaches is likely due to their oversimplified linear assumptions. Leveraging spatial information from nearby BSS stations, spatial regression performs better than linear regression for existing stations. However, it does not provide notably better predictions for newly planned stations in our experiments. This might be because spatial interactions among BSS stations are quite complicated and cannot be effectively captured using simple linear models. Benefiting from the ability of machine learning models to capture relationships from data, XGBoost improves the prediction performance by a large margin compared to regression models. The advantage of Function Zone over XGBoost is minimal in our case, which might be because our implementation does not include the taxi transaction record data used in the original paper. Compared with XGBoost, Spatial-MGAT further reduces the prediction error, with RMSE improvements of 22.6% and 32.1% for newly planned and existing stations respectively. As introduced in Section 3.3, our method can be regarded as a generalized spatial regression model. Compared with the linear spatial regression model, Spatial-MGAT reduces RMSE by 71.5% and 40.3% for new and existing stations. This suggests that DNNs, with a proper network design, can greatly enhance the model performance of classical econometric models, and the improvement generalizes well to unseen observations (e.g., new BSS stations).
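For concreteness, the evaluation protocol described in Section 4.2 can be written down in a few lines. The sketch below is not the authors' code; it assumes `y_true` and `y_pred` are NumPy arrays of station-month demand observations and predictions, and `new_mask` is an illustrative boolean index marking newly added stations.

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """RMSE, MAE and R^2 on a set of station-month observations."""
    err = y_pred - y_true
    rmse = float(np.sqrt(np.mean(err ** 2)))              # root mean square error
    mae = float(np.mean(np.abs(err)))                     # mean absolute error
    ss_res = float(np.sum(err ** 2))                      # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return {"RMSE": rmse, "MAE": mae, "R2": 1.0 - ss_res / ss_tot}

# Metrics are reported separately for the two test groups, e.g.:
# evaluate(y_true[new_mask], y_pred[new_mask])      # newly added stations
# evaluate(y_true[~new_mask], y_pred[~new_mask])    # existing stations
```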
### Comparison with deep learning variants

As introduced in Section 2.1, existing deep learning approaches for TG-BSSE require historical demand data of nearby BSS stations as input and are formulated quite differently from our problem. In Appendix A, we test their performance in real-world BSS expansion scenarios. It turns out that they may not work well due to data sparsity and demand distribution discrepancy issues. To the best of our knowledge, our study is among the first to apply deep learning techniques to TG-BSSE based on urban built environment features. To properly evaluate the effectiveness of our model architecture and quantify the contribution of different model components, we design several variant models as listed below:

* **Feed Forward Network (FNN)**(Svozil et al., 1997): A general FNN consists of an input layer, an output layer and several hidden layers in between. For fair comparison, FNN takes the form of the prediction layer used in our model with features of the target station as input. It will be used as the deep learning baseline for model benchmarking.
* **Multi-graph Convolutional Network (Spatial-MGCN)**(Kipf and Welling, 2016): Spatial-MGCN adopts a similar structure to Spatial-MGAT, with GAT replaced by GCN to model spatial interactions among stations. The main difference between GAT and GCN is that GAT adaptively learns attention weights to represent correlations between station pairs, while GCN uses pre-defined adjacency weights to capture spatial dependencies. Note that Spatial-MGCN can also be regarded as an extended version of SLX with pre-defined weights between neighborhoods.
* **Graph Attention Network based on geographical proximity (Spatial-PGAT)**: In our proposed model, two types of spatial dependencies are considered: geographical proximity and built environment similarity. This variant model only encodes spatial interactions between geographically nearby stations. Spatial-PGAT can also be seen as a generalized SLX with only the spatial lag term for geographical proximity.
* **Graph Attention Network based on built environment similarity (Spatial-BGAT)**: In this variant, with the dependency of geographical proximity ablated, the model makes predictions based on spatial interaction features from only stations with similar built environment characteristics. Similarly, Spatial-BGAT can be regarded as a generalized SLX with only the spatial lag term for built environment similarity.

Table 2 displays the average performance of our proposed approach and the variant deep learning models over 10 independent runs. It is found that FNN can already achieve notably better performance than XGBoost, demonstrating the ability of deep architectures to capture complex nonlinear relationships between BSS demand and input variables.
Using GCNs to leverage spatial information from related BSS stations, Spatial-MGCN performs better than FNN by a large margin, suggesting the effectiveness of considering station interactions using graph learning approaches. Meanwhile, Spatial-MGAT can further reduce the prediction error compared with Spatial-MGCN, which is likely due to the use of attention mechanisms to adaptively learn correlation weights between stations. With either the geographical proximity or the built environment similarity graph ablated, the variant model performs worse than the original one. This verifies the importance of using multiple graphs to capture heterogeneous relationships between BSS stations. Between the two, geographical proximity has a greater impact on prediction results, which is reasonable as BSS stations are more likely to be influenced by other stations that are spatially close by.

A major concern with DNNs is their instability in practical applications. To investigate this, we display the prediction results of all deep learning models in 10 independent experiments and use XGBoost as a benchmark in Figure 6. It can be found that, compared to FNN, graph learning models display relatively smaller variance. This suggests that incorporating the built environment features of connected stations can potentially improve model stability. In addition, although graph deep learning approaches generally have larger performance variance than XGBoost, they perform significantly better than XGBoost for both newly planned and existing stations in all experiments. This demonstrates that GNNs can be a promising solution for the TG-BSSE problem with reasonable performance variability.

Figure 6: Comparison of model stability

To further provide an intuitive comparison of different model families, we plot the relationships between the true and predicted number of trips for all station-month observations in the test set based on several selected models in Figure 7. Specifically, we use spatial regression as an example of regression models, XGBoost as an example of machine learning models, FNN as an example of non-graph deep learning models, and Spatial-MGAT as an example of graph learning models. It is apparent that Spatial-MGAT provides a better model fit, especially for high-demand observations. In BSS and many other transportation systems, it is typical that a small number of high-demand stations serve a large proportion of passengers (i.e., the Matthew effect), and thus the ability to generate accurate predictions for these stations is essential for the efficiency of the whole system. In comparison, traditional models do not perform as well for such high-demand stations, as evidenced in Figure 7(a-b), making them less practically useful. This demonstrates the value of using deep architectures and learning nonlinear relationships, especially when the data is unevenly distributed.

Figure 7: The relationship between true and predicted demand of several selected models
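To make the GCN/GAT distinction discussed above concrete, the following minimal sketch contrasts a fixed-weight (GCN-style) aggregation with a learned-attention (GAT-style) aggregation over station features. It is an illustrative single-head implementation, not the paper's actual code; `adj` (pre-defined normalized weights) and `mask` (graph connectivity, assumed to include self-loops) are hypothetical inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedWeightAggregation(nn.Module):
    """GCN-style aggregation: neighbors are combined with pre-defined weights."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n, d_in) station features; adj: (n, n) normalized adjacency weights
        return F.relu(self.lin(adj @ x))

class AttentionAggregation(nn.Module):
    """GAT-style aggregation: pairwise weights are learned from the features."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.att = nn.Linear(2 * d_out, 1)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # mask: (n, n) boolean connectivity, assumed to include self-loops so
        # every row has at least one admissible neighbor.
        h = self.lin(x)                                    # (n, d_out)
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.att(pair)).squeeze(-1)  # (n, n) raw scores
        scores = scores.masked_fill(~mask, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)              # adaptive attention weights
        return F.relu(alpha @ h)
```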
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multirow{2}{*}{Models} & \multicolumn{3}{c}{New stations} & \multicolumn{3}{c}{Existing stations} \\
& RMSE & MAE & \(R^{2}\) & RMSE & MAE & \(R^{2}\) \\
\hline
FNN & 23.673 & 14.528 & 0.659 & 29.492 & 20.123 & 0.834 \\
Spatial-MGCN & 21.654 & 13.042 & 0.716 & 28.645 & 19.710 & 0.845 \\
Spatial-PGAT & 20.965 & 12.822 & 0.729 & 28.334 & 19.544 & 0.848 \\
Spatial-BGAT & 21.924 & 13.757 & 0.704 & 29.448 & 19.894 & 0.834 \\
Spatial-MGAT & _20.392_ & _12.599_ & _0.742_ & _27.860_ & _19.140_ & _0.852_ \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Performance comparison among various deep learning models

### Understanding the relationship between built environment and BSS demand

In addition to trip generation for newly planned stations, understanding how the built environment affects BSS demand is important for providing policy implications for BSS network design. In this section, we unravel the effects of different built environment features based on the model results using SHapley Additive exPlanations (SHAP). SHAP is an explainable AI technique whose main idea is to explain machine learning models using game theory (Lundberg and Lee, 2017). With SHAP, each feature is assigned an optimal Shapley value, which indicates how the presence or absence of the feature influences the model prediction result (i.e., BSS station demand in our case). Fig. 8 shows the distribution of SHAP values of the 30 most influential features in our proposed model.

Figure 8: Distribution of Shapley values for the top 30 features in Spatial-MGAT
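As a reference for how the values in Fig. 8 can be obtained, the sketch below uses the `shap` library's model-agnostic KernelExplainer; the paper does not state which explainer was used, and `predict_fn`, `X_background`, `X_test` and `feature_names` are placeholder names for a model wrapper and feature matrices.

```python
import shap  # explainable-AI library implementing Shapley value estimation

def shap_summary(predict_fn, X_background, X_test, feature_names):
    """predict_fn maps a (n_samples, n_features) array of input features to
    predicted station demand; X_background / X_test are feature matrices."""
    explainer = shap.KernelExplainer(predict_fn, X_background)
    shap_values = explainer.shap_values(X_test)  # one value per feature per sample
    shap.summary_plot(shap_values, X_test,
                      feature_names=feature_names, max_display=30)
    return shap_values
```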
It is found that station age plays the most important role: the longer a station has been in the system, the higher its demand. This is as expected, since a newly added station is generally less well known to users, resulting in relatively low usage in the beginning. The number of BSS stations in the neighborhood also plays an important role: generally, the demand for a BSS station can increase with more stations in the 1000-5000m distance range, but decrease with more stations within 1000m travel distance. This suggests that stations that are too close to each other can compete for users, while stations at a medium distance can complement each other and attract more users. Another influential factor is the distance to the nearest subway station: BSS stations that are closer to a subway station are usually associated with higher demand. This is quite intuitive, as a major use case of bike sharing is to support first-mile/last-mile trips to/from mass transit systems. The structure of the road network around a BSS station also has a notable effect on its demand: BSS stations surrounded by more high-level roads (e.g., motorway, primary) tend to have higher demand, while more low-level roads (e.g., residential) in the neighborhood can lead to lower demand. This might be because regions with high-level roads are usually associated with higher human flow and thus more potential customers. Meanwhile, the number of junctions in the neighborhood is negatively related to BSS demand, which might be due to the higher cost of riding in areas with more intersections and turns. POI density is another useful indicator of BSS demand. It is found that zones with higher residential, transportation and commercial POI density in the neighborhood can be associated with higher demand. This is reasonable as people can use bike sharing to access homes or transportation facilities, and places with more commercial POIs are usually more prosperous and thus associated with higher demand. Socio-demographic features also affect BSS station demand significantly. Among them, the percentage of white residents in the census tract is one of the most important features, and areas with more white residents generally have higher BSS demand.

### Understanding spatial interactions between BSS stations

In addition to the relationship between the built environment and BSS demand, we can also use our model to understand the local spatial interactions between BSS stations by examining the attention weights learned by the GAT layers. Recall that the attention weights represent spatial dependencies of a target BSS station on other stations in its localized graph. In Fig. 9, we choose three newly added stations in the test set as examples and show the learned attention weights between the target station and connected stations. Each dot represents a BSS station, and a thicker dashed line between two dots represents a higher attention weight and thus stronger correlations between stations. It can be found that both the stations with close geographical proximity and those with high built environment similarity can have strong correlations with the target station. In addition, the interaction does not strictly follow distance decay: stations that are further from the target station can contribute more to its prediction. This suggests the advantage of using attention mechanisms to learn station interactions instead of using pre-defined adjacency weights.

Figure 9: Example BSS stations and their spatial dependencies (with thicker dashed lines denoting higher attention weights)
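For readers who wish to reproduce this kind of inspection, the following sketch shows one way to read learned attention weights out of a GAT layer, assuming a `torch_geometric` implementation (an assumption, since the paper's code is not shown); the station counts and feature sizes are illustrative.

```python
import torch
from torch_geometric.nn import GATConv

# Localized graph of one target station (index 0) and its k=5 neighbors.
conv = GATConv(in_channels=32, out_channels=8, heads=1)
x = torch.randn(6, 32)                       # station feature vectors
edge_index = torch.tensor([[1, 2, 3, 4, 5],  # edges from the 5 neighbors ...
                           [0, 0, 0, 0, 0]]) # ... into the target station
out, (edges, alpha) = conv(x, edge_index, return_attention_weights=True)
# alpha holds one coefficient per edge (self-loops are added by default);
# these are the spatial dependency strengths drawn as line widths in Fig. 9.
print(edges, alpha.squeeze(-1))
```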
## 5 Conclusion

This research focuses on the trip generation problem for BSS expansion based on urban built environment characteristics. Previous research typically relies on regression or machine learning models, which might not be adequate to capture the nonlinear effect of the built environment or the complex spatial interactions between BSS stations. While deep learning methods have shown promise for such demand prediction tasks, most existing models focus on short-term prediction for mature and stable systems, and are generally not applicable to trip generation for system expansion scenarios. To address these issues, this study introduces a graph deep learning approach for TG-BSSE. It leverages various built environment features as model inputs, including POIs, road network, transportation facilities, socio-demographics and BSS network design. To capture spatial interactions between BSS stations, we construct localized graphs centered on each target BSS station based on both geographical proximity and built environment similarity, and adaptively learn their spatial correlation weights using attention mechanisms. We further demonstrate that the proposed GNN approach can be seen as a generalized spatial regression model with nonlinear activation functions, heterogeneous spatial dependencies and adaptive spatial weights, which allows us to synergize the development of GNN and spatial regression methods.

Using a real-world BSS expansion dataset from NYC over multiple years, experiment results verify the improved performance of our proposed model compared to existing methods as well as alternative deep learning variants. Furthermore, we demonstrate the model's interpretability regarding how different built environment features affect BSS demand and how BSS stations interact with each other. The proposed model can be used to support strategic planning for BSS expansion, especially related to the site selection and capacity design of new stations. The consideration of spatial dependencies across new and existing stations allows us to evaluate the potential impact of new stations on the whole system.

This research can be further improved or extended in several aspects. First, we currently only exploit urban built environment features to make predictions, and future research can explore how to better leverage both built environment and historical demand data for long-term transportation planning. For example, inspired by recent research on cold-start user recommendation (Dong et al., 2020), it is possible to use memory-augmented meta-learning approaches to transfer knowledge learned from the historical demand of existing stations to newly planned ones. Second, although this research focuses on trip generation at the station level, it is also important to understand how the generated trips would be distributed across different OD pairs (i.e., the trip distribution step in four-step travel demand forecasting). One challenge is that OD-level demand observations can be rather sparse, with most OD pairs having few trips, which leads to computational robustness issues. Nevertheless, recent developments in uncertainty quantification methods can be incorporated to mitigate such concerns (Zhuang et al., 2022). Third, our proposed model currently focuses on the use case of system expansion, in which the training and testing data are generated in the same city. This is no longer applicable when a city has to plan a BSS from scratch. One possible solution is to extend our approach to cross-city planning scenarios, where the model is trained in one city and deployed in another. The performance likely depends on many city-specific factors, some of which are not easily quantifiable (e.g., cycling culture, BSS branding). Last but not least, as BSS strategic planning is one of the main target applications of our demand prediction model, a natural next step is to develop a larger framework that directly incorporates the prediction results into a joint optimization framework for station location selection and capacity design. This will serve as a valuable toolkit for the data-driven planning and design of future BSS as well as other transportation networks.

## Acknowledgements

This research is supported by the National Natural Science Foundation of China (NSFC 42201502) and the Seed Fund for Basic Research for New Staff at The University of Hong Kong (URC104006019).

## Appendix A Comparison with DNNs based on sequential dependencies

As introduced in Section 2.1, existing deep learning approaches formulated TG-BSSE as a time-series prediction problem with historical demand patterns of existing stations as input. They mostly use simulation data for experiments with short time intervals (i.e., from 30 minutes to 1 day).
To test their performance in real-world BSS expansion scenarios, we design a problem formulation as follows: assuming the planning and construction of BSS stations take \(K\) months, to predict the potential demand for BSS expansion at time step \(t\), we use the historical demand series of existing stations from month \(t-K-T\) to month \(t-K\) as model input. In the test data, we define stations that do not exist at time step \(t-K\) as new observations and the others as existing observations. In our experiments, we set \(T=6\) and \(K=6\). This results in 2,336 and 19,472 new and existing observations for model evaluation respectively. We compare our proposed model with two existing baselines based on temporal sequential dependencies:

* **DDP-Exp**(Luo et al., 2019): a graph sequence learning approach to trip generation for time-varying transportation networks, which uses LSTM to capture temporal dependencies and GCN to capture spatial dependencies.
* **MOHER**(Zhou et al., 2021): a spatiotemporal framework to predict the number of trips generated for a newly planned transportation site by aggregating the historical demand of multi-modal transportation sites that are either geographically adjacent or have similar POI distributions. We implement a simplified version considering only the historical demand of BSS stations.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multirow{2}{*}{Models} & \multicolumn{3}{c}{New observations} & \multicolumn{3}{c}{Existing observations} \\
& RMSE & MAE & \(R^{2}\) & RMSE & MAE & \(R^{2}\) \\
\hline
DDP-Exp & 39.913 & 35.804 & 0.325 & 43.472 & 30.920 & 0.621 \\
MOHER & 33.125 & 24.733 & 0.663 & 41.831 & 26.180 & 0.653 \\
Spatial-MGAT & _22.375_ & _13.305_ & _0.783_ & _27.495_ & _18.095_ & _0.848_ \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Performance comparison with DNNs based on sequential dependencies

The results displayed in Table 3 show that the aforementioned DNNs based on temporal sequential dependencies might not work well in real-world BSS expansion scenarios, likely for the following reasons. First, they assume that the demand data used for model training and testing follows a similar distribution, neglecting the fact that the addition of new stations can lead to structural changes in the entire system. Second, with a longer and more realistic time interval (i.e., month) in our experiment setting, they may suffer from data sparsity issues and can easily overfit. Finally, in cases where newly planned stations are located far from existing ones, it is difficult to leverage historical demand from nearby sites for model fitting. This is often the case for real-world system expansion, in which a cluster of new stations in the same neighborhood are planned and deployed at the same time. Compared with these methods, we formulate TG-BSSE as a spatial regression problem by leveraging multi-source urban built environment features, which are more general and reliable for long-term transportation planning applications.
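The windowing scheme above can be sketched as follows; it is an illustrative reading of the formulation (treating the history as a half-open, length-\(T\) window), with `demand` a hypothetical (months x stations) matrix.

```python
import numpy as np

def make_windows(demand: np.ndarray, T: int = 6, K: int = 6):
    """Build (input series, target) pairs: to predict demand at month t, use
    the length-T history ending K months earlier (months t-K-T .. t-K).
    demand: (n_months, n_stations) matrix of monthly station demand."""
    inputs, targets = [], []
    for t in range(T + K, demand.shape[0]):
        inputs.append(demand[t - K - T: t - K])  # history window for month t
        targets.append(demand[t])                # demand to be predicted
    return np.stack(inputs), np.stack(targets)
```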
2305.01626
Basic syntax from speech: Spontaneous concatenation in unsupervised deep neural networks
Computational models of syntax are predominantly text-based. Here we propose that the most basic syntactic operations can be modeled directly from raw speech in a fully unsupervised way. We focus on one of the most ubiquitous and elementary properties of syntax -- concatenation. We introduce spontaneous concatenation: a phenomenon where convolutional neural networks (CNNs) trained on acoustic recordings of individual words start generating outputs with two or even three words concatenated without ever accessing data with multiple words in the input. We replicate this finding in several independently trained models with different hyperparameters and training data. Additionally, networks trained on two words learn to embed words into novel unobserved word combinations. To our knowledge, this is a previously unreported property of CNNs trained in the ciwGAN/fiwGAN setting on raw speech and has implications both for our understanding of how these architectures learn as well as for modeling syntax and its evolution from raw acoustic inputs.
Gašper Beguš, Thomas Lu, Zili Wang
2023-05-02T17:38:21Z
http://arxiv.org/abs/2305.01626v2
# Basic syntax from speech: Spontaneous concatenation in unsupervised deep neural networks

###### Abstract

Computational models of syntax are predominantly text-based. Here we propose that basic syntax can be modeled directly from raw speech in a fully unsupervised way. We focus on one of the most ubiquitous and basic properties of syntax--concatenation. We introduce _spontaneous concatenation_: a phenomenon where convolutional neural networks (CNNs) trained on acoustic recordings of individual words start generating outputs with two or even three words concatenated without ever accessing data with multiple words in the input. Additionally, networks trained on two words learn to embed words into novel unobserved word combinations. To our knowledge, this is a previously unreported property of CNNs trained on raw speech in the Generative Adversarial Network setting and has implications both for our understanding of how these architectures learn as well as for modeling syntax and its evolution from raw acoustic inputs.

## 1 Introduction

Concatenation (or compounding/conjoining elements) is one of the most basic operations in human syntax. Many animal communication systems use simple symbols (call/sign\(\sim\)meaning pairs) that are not concatenated (termed "elementary signals" by Nowak and Komarova 2001). In human syntax, on the other hand, individual elements such as words can combine into "compound signals" (Nowak and Komarova, 2001) with compositional meaning. The evolution of concatenation (Progovac, 2015) as well as the existence of related operations that are presumably uniquely human and domain-specific (such as the proposed _Merge_; Chomsky 2014) have been the focus of debates in linguistics and cognitive science. Models of human syntax are predominantly text-based, but hearing human learners acquire syntax from acoustic inputs. Modeling syntax from raw speech also has engineering applications: speech processing increasingly bypasses text (Lakhotia et al., 2021). Understanding the syntactic capabilities and limitations of spoken language models can inform architectural choices. Here, we model how compound signals or concatenated words can arise spontaneously in deep neural networks trained on raw speech in a fully unsupervised manner. The sounds of human speech are a measurable, physical property, but they also encode abstract linguistic information such as syntactic, phonological, morphological, and semantic properties. For these reasons, we model basic syntactic dependencies from raw acoustic inputs with CNNs. We train CNNs in the Generative Adversarial Network (GAN) setting. CNNs and GANs are uniquely appropriate for modeling linguistic dependencies from raw speech without supervision. These models have been demonstrated to learn disentangled near-categorical representations of linguistically meaningful units at the phonetic, phonological, morphological, and lexical semantic levels from raw acoustic inputs (Begus, 2020, 2021a, 2021b). To test whether CNNs can spontaneously concatenate words, we conduct two sets of experiments. In the two _one-word_ experiments, we train the networks on single-word inputs. Because the networks are trained in the GAN setting, the Generator never accesses the training data directly, but generates innovative outputs. It has been shown that GANs innovate in highly interpretable ways and can produce novel words or sound sequences (Begus, 2021a). Here we test whether these innovations can produce spontaneously concatenated words.
In the second experiment, we train the networks on one-word and two-word inputs (the _two-word_ experiment) and withhold a subset of two-word combinations. We then test whether words can be embedded into novel unobserved combinations in the output. Such a design also mimics the one-word and two-word stages in language acquisition (Berk and Lillo-Martin, 2012).

## 2 Methods

### The model

We train the ciwGAN and modified fiwGAN models [1]. ciwGAN/fiwGAN models are information-theoretic extensions of GANs (based on InfoGAN, WaveGAN and DCGAN; Chen et al. 2016; Donahue et al. 2019; Radford et al. 2015) designed to learn from audio inputs. The ciwGAN/fiwGAN architectures involve three networks (Figure 1): the Generator takes latent codes \(c\) (either one-hot vectors or binary codes) and a random latent space variable \(z\) (\(z\sim\mathcal{U}(-1,1)\)) and through six upconvolutional layers generates 2.048s of audio (32,768 samples). The audio is then fed to the Discriminator, which evaluates the realness of the output via the Wasserstein loss [1]. The unique aspect of the ciwGAN/fiwGAN architecture is a separate Q-network, which is trained to estimate the Generator's hidden code \(c\). During training the Generator learns to generate data such that it increases the Discriminator's error rate and decreases the Q-network's error rate. In other words, the Generator needs to learn to encode unique information into its acoustic outputs, such that the Q-network is able to decode unique information from its generated sounds. The training between the Generator and the Q-network mimics the production-perception loop in speech communication: after training, the Generator learns to generate individual words given a latent code \(c\) and the Q-network learns to classify unobserved words with the same corresponding codes [1]. Since learning is completely unsupervised, the Generator could in principle encode any information about speech into its latent space, but the requirement to be maximally informative causes it to encode linguistically meaningful properties (both lexical and sublexical information; 1). Such a setting not only replicates the production-perception loop, but is also one of the few architectures featuring traces of communicative intent (between the Generator and the Q-network). Unlike in generative models trained on next sequence prediction or data replication, where no communicative intent exists, the training objective between the Generator and the Q-network is to increase the mutual information between the latent space and the data such that the Q-network can retrieve the information (latent code) encoded into the speech signal by the Generator. CiwGAN and fiwGAN have been shown to be highly innovative in linguistically interpretable ways. For example, the Generator produces new words or new sound sequences that it never accesses during training. Crucially, the Generator never directly accesses the data: it learns by generating data from noise such that the Discriminator fails to distinguish real and generated data. In this respect, it mimics learning by imitation in human language (rather than replication, as is the case with variational autoencoders).

Figure 1: The architecture of ciwGAN used in the two-second one-word experiment.
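As an illustration of the Generator input described above, the following sketch constructs the latent code \(c\) (here a one-hot vector) concatenated with uniform noise \(z\); the dimensions and function name are ours, not the released ciwGAN code.

```python
import torch
import torch.nn.functional as F

def generator_latent(batch_size: int, n_classes: int = 5, z_dim: int = 95):
    """Construct the Generator input: a one-hot latent code c identifying a
    lexical item, concatenated with noise z ~ U(-1, 1). The 5 one-hot levels
    match the one-word experiments; z_dim is illustrative."""
    idx = torch.randint(0, n_classes, (batch_size,))
    c = F.one_hot(idx, n_classes).float()          # latent code c
    z = torch.rand(batch_size, z_dim) * 2 - 1      # z ~ U(-1, 1)
    return torch.cat([c, z], dim=1), idx           # idx: target for the Q-network
```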
### Data

The training dataset consists of sliced lexical items from the TIMIT database of spoken English [1], such that each item is a single spoken word. In the first, _one-second one-word_ experiment, we use 5 lexical items: _oily, rag, suit, year_ and _water_. In this experiment, based on a pre-trained model, the 5-layer Generator outputs only 1.024s of audio and the data is never left-padded (only right-padded), which controls for the effect of padding on concatenation. We replicate the results with another one-word experiment trained on _box, greasy, suit, under_, and _water_. Here, each item is randomly padded with silence to a length of 2s to produce 100 distinct data points for each class, for a total of 500 data points used in training (the _two-second one-word_ experiment). In the third experiment (_two-second two-word_), we use 3 lexical items: _greasy, suit_, and _water_. 100 data points each 2s in length are generated in an analogous process to the first experiment, but for each combination of two items (i.e. _greasy, suit_, and _water_ alone, _greasy_ followed by _water_, _water_ followed by _greasy_, and so on). However, we withhold the combination _suit/greasy_, such that the two words do not appear together in the training set in any order, to produce a final training set of 700 data points. For the one-word experiments, we use the ciwGAN model with one-hot encoding and five levels, such that each of the five unique words can be represented with a unique one-hot vector. In the two-word experiment, we use a modified fiwGAN (binary code). The binary code is limited to three bits, but each code can have up to two values of 1 (e.g. [1,0,0] and [1,1,0]). We also train an additional two-word two-second model with the same data but with a 6-level one-hot \(c\) in the ciwGAN architecture.

## 3 Results

To test whether the models can spontaneously concatenate, we train the networks for 8,011 (pre-trained one-second one-word), 8,956 (two-second one-word), 9,166 (two-second two-word fiwGAN), and 18,247 steps (two-second two-word ciwGAN) and analyze the generated data. We use the technique proposed in Begus (2020) to analyze the relationship between linguistically meaningful units and the latent space. According to this technique, setting individual latent space variables to values outside of the training range reveals the underlying linguistic value of each variable.

### One-word model

In the one-second one-word model, the Generator learns to associate each unique one-hot code with a unique lexical item in a fully unsupervised and unlabeled manner. The Generator's input during training is a one-hot vector with values 0 or 1. For example, the network learns to represent _suit_ with \([1,0,0,0,0]\). To test this observation, we set the one-hot vector to values outside the training range (e.g. \([5,0,0,0,0]\)), which overrides lower-level interactions in the latent space. This causes the Generator to output _suit_ at near-categorical levels (9 times out of 10), revealing the underlying value of the code. This further reveals that \([0,1,0,0,0]\) encodes _year_ (8 times out of 10) and \([0,0,1,0,0]\) encodes _water_ (10 times out of 10), since setting latent codes to values greater than 1 results in the model almost categorically outputting the associated word.
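The probing technique just described amounts to generating from hand-set codes; a minimal sketch (assuming a trained `generator` with a flat latent input, as in WaveGAN-style models) is given below. The helper name and dimensions are illustrative.

```python
import torch

def probe(generator, code, n_samples: int = 10, z_dim: int = 95):
    """Generate audio from a hand-set latent code, e.g. values outside the
    {0, 1} training range such as [5, 0, 0, 0, 0] or [0, -2, -2, -2, 0]."""
    c = torch.tensor(code, dtype=torch.float32).repeat(n_samples, 1)
    z = torch.rand(n_samples, z_dim) * 2 - 1
    with torch.no_grad():
        return generator(torch.cat([c, z], dim=1))  # batch of raw waveforms
```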
In addition to lexical learning, we observe a robust pattern: the networks trained on one-word inputs generate two-word outputs when the one-hot values are set to negative values outside of the training range. For example, when the latent code is set to \([0,-2,-2,-2,0]\), the Generator consistently outputs the two-word output _suit year_ (8 times out of 10). For \([-3,-2,-3,-2,2]\), the network consistently outputs _rag year_ (8 times out of 10; Figure 3). These concatenations occur despite the fact that the training data is always left-aligned, the Generator never accesses the data directly, and the Discriminator only sees single words. To show that this outcome is not an idiosyncratic property of one model and that it is indeed the negative values that encode concatenated outputs, we also analyze a separately trained two-second one-word model. The inputs to the Discriminator in this case are also single words only, but they are longer (2s) and randomly padded with silence on the left and right. While high positive values occasionally yield two-word outputs in this model, negative values are consistently associated with two-word outputs. For example, \([-50,-50,0,-50,0]\) (with extreme values) consistently encodes _box greasy_ (9 times out of 10), and \([-50,-50,-50,0,0]\) consistently encodes _greasy under_ (10 times out of 10). Positive values of the same codes produce completely unintelligible outputs (noise). In addition to several two-word concatenated outputs, the network even occasionally generates a three-word concatenated output _box under water_ for the latent code \(c\) with all negative values \([-3,-1,-1,-1,-1]\) (2 times out of 10). Figure 2 illustrates the three-word sequence.

Figure 2: The three-word concatenated output _box under water_. Independently, the second word (_under_) is somewhat difficult to analyze, but given only five training words, it is clearly the closest output to _under_.

### Two-word model

In the two-word experiment, the models get one-word and two-word inputs (thus mimicking the two-word stage). The models are only trained on three words and their combinations, except for the withheld _suit/greasy_ combination. In the ciwGAN two-word model, the Generator consistently outputs the unobserved _greasy suit_ for \([15,0,0,0,0,0]\) (17 times out of 20), which suggests the network learned this unobserved combination as one of the possible sequences and encoded it with a one-hot value. For the code \([-1,4,4]\) (modified fiwGAN), the Generator occasionally outputs the three-word output _suit greasy water_ (1 time out of 20; Fig. 4), which contains the unseen _suit greasy_ pair. It appears that the negative values of the latent code \(c\) again encode unobserved novel combinations. We also observe repeated three-word outputs such as _water water suit_ as a consistent output of \([0,50,-50]\) (20 times out of 20).

### Repetition

In addition to two-word concatenation and the embedding of words into novel combinations, we also observe outputs with repeated words in all our trained models. The training data never includes the same word repeated, yet the models frequently output repeated words. For example, the two-second one-word model consistently outputs _greasy greasy_ for \([0,0,-40,0,0]\) (7 times out of 10; Fig. 4). This is significant because repetition or reduplication is one of the most common processes in human language and language acquisition (Berent et al., 2016; Dolatian and Heinz, 2020). Additionally, full or total reduplication (where the entire word is repeated) is among the most computationally complex morphophonological processes (Dolatian and Heinz, 2020) because it represents unbounded copying at the segmental level.
It has been shown elsewhere that deep convolutional neural networks can learn partial reduplication (where only a part of the word is repeated) from raw speech and extend it to novel unobserved data (Begus, 2021). Our results suggest that total reduplication (or unbounded copying; Dolatian and Heinz 2020) can arise spontaneously in these models.

### Why negative values?

The Generator makes use of the unutilized space in the latent code to encode unobserved but linguistically interpretable outputs. During training, the Generator is trained on only two values of the latent code: 0 and 1. In the one-second one-word model, individual codes represent unique individual words, which suggests that lexical learning emerges in the positive values in these models. The network never accesses two-word inputs, and it never receives negative values in the latent code \(c\) during training. It appears that the network is biased to concatenate and that it uses the unobserved latent code space to encode unobserved concatenated outputs.

## 4 Conclusion

Our results suggest that the Generator network in the ciwGAN architecture not only learns to encode information that corresponds to lexical items in its audio outputs, but also spontaneously concatenates those lexical items into novel unobserved two-word or three-word sequences. The ability of unsupervised deep neural networks trained on raw speech to concatenate words into novel unobserved combinations has far-reaching consequences. It means that we can model basic syntactic properties directly from raw acoustic inputs of spoken language, which opens up the potential to model several other syntactic properties directly from speech with deep convolutional neural networks as well as with other architectures. From the perspective of the evolution of syntax, the results suggest that a deep neural network architecture with no language-specific properties can spontaneously begin generating concatenated signals from simple signals. The step from the one-word stage to the two-word stage is necessary both in the evolution of human language and during language acquisition. Our second experiment mimics the two-word stage. We argue that unsupervised deep learning models not only concatenate single words into multi-word outputs, but are also able to embed words into novel unobserved combinations once the model is trained on multiple-word inputs. Further research into the relationship between the basic syntactic properties that spontaneously emerge in these fully unsupervised models trained on raw speech and the structure of the latent space has the potential to yield insights for the study of syntactic theory, language acquisition, and language evolution. By evaluating these models on syntactic properties of spoken language, we should also get a better understanding of the computational limits of unsupervised CNNs.

### Limitations

This paper models the concatenation of acoustic lexical items. Syntax is substantially more complex than concatenation [1]. Exploration of other syntactic properties as well as of compositionality in these models is left for future work. We also train the network on a relatively small number of lexical items (5) and a small number of tokens (100). The small number of lexical items is representative of the earliest stages of language acquisition, when the number of lexical items is highly limited [1].

## Ethics Statement

Two models are trained for the purpose of this paper, and one model is pretrained. The three models were trained for 16hrs on a single GPU (NVIDIA 1080ti).
We use the TIMIT [1] database for training. The number of parameters is given in Appendix A. We take the standard hyperparameters (from Donahue et al. 2019 and Begus 2021a). Because the outputs are salient and rarely ambiguous, all transcriptions are performed by the authors. Generated audio files and models' checkpoints are available at the anonymous link: [https://osf.io/przuq/?view_only=9d19a26f0bb84a3ea4db8e6844b37985](https://osf.io/przuq/?view_only=9d19a26f0bb84a3ea4db8e6844b37985).
2308.13553
Synthesizing 3D computed tomography from MRI or CBCT using 2.5D deep neural networks
Deep learning techniques, particularly convolutional neural networks (CNNs), have gained traction for synthetic computed tomography (sCT) generation from Magnetic resonance imaging (MRI), Cone-beam computed tomography (CBCT) and PET. In this report, we introduce a method to synthesize CT from MRI or CBCT. Our method is based on multi-slice (2.5D) CNNs. 2.5D CNNs offer distinct advantages over 3D CNNs when dealing with volumetric data. In the experiments, we evaluate the performance of our method for two tasks, MRI-to-sCT and CBCT-to-sCT generation. Target organs for both tasks are brain and pelvis.
Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa
2023-08-23T21:36:41Z
http://arxiv.org/abs/2308.13553v1
# Synthesizing 3D computed tomography from MRI or CBCT using 2.5D deep neural networks

###### Abstract

Deep learning techniques, particularly convolutional neural networks (CNNs), have gained traction for synthetic computed tomography (sCT) generation from Magnetic resonance imaging (MRI), Cone-beam computed tomography (CBCT) and PET. In this report, we introduce a method to synthesize CT from MRI or CBCT. Our method is based on multi-slice (2.5D) CNNs. 2.5D CNNs offer distinct advantages over 3D CNNs when dealing with volumetric data. In the experiments, we evaluate the performance of our method for two tasks, MRI-to-sCT and CBCT-to-sCT generation. Target organs for both tasks are brain and pelvis.

Keywords: Synthetic computed tomography, 2.5D convolutional neural networks.

## 1 Introduction

Radiation therapy (RT) is a critical cancer treatment that often requires computed tomography (CT) for accurate dose calculations. Magnetic resonance imaging (MRI) provides superior soft tissue contrast, but lacks the electron density data of CT for dose calculations. Combining the two modalities presents challenges, including mis-registration errors. MRI-only RT has emerged to address these challenges, reduce ionizing radiation exposure, and improve patient comfort. However, the generation of synthetic CT images from MRI (sCT) remains challenging due to the lack of a direct correlation between nuclear magnetic properties and electron density. Deep learning (DL) techniques, particularly convolutional neural networks (CNNs), have gained traction for sCT generation from MRI, Cone-beam CT (CBCT) and PET [1]. In this report, we introduce a method to synthesize CT from MRI or CBCT. Our method is based on multi-slice (2.5D) CNNs. 2.5D CNNs offer distinct advantages over 3D CNNs when dealing with volumetric data. These benefits stem from a thoughtful compromise between computational efficiency and capturing relevant spatial context. In the experiments, we evaluate the performance of our method for two tasks, MRI-to-sCT and CBCT-to-sCT generation. Target organs for both tasks are brain and pelvis.

## 2 Proposed Method

Our base method is the same for both tasks and both organs. We use encoder-decoder type deep neural networks to convert MRI or CBCT images into synthetic CT (sCT) images. Figure 1 shows an overview of our method. Although the input images are 3D volumes, we use a 2D deep neural network model with multi-slice inputs (2.5D CNNs). 2.5D CNNs offer distinct advantages over 3D CNNs when dealing with volumetric data. These benefits stem from a thoughtful compromise between computational efficiency and capturing relevant spatial context. Reasons why 2.5D CNNs are favored in many cases include reduced computational complexity, memory efficiency, leveraging anisotropic resolution, multi-planar analysis, contextual information, and overcoming class imbalance. In our model, \(N\) consecutive slices in an input volume are processed to produce one slice in an sCT volume. The input slices are along the transverse plane. The consecutive slices are processed as an \(N\)-channel 2D image in our model. In the training phase, \(N\) slices are randomly selected \(M\) times from each volume in the training dataset in each epoch. In the inference phase, each volume is processed in a slice-by-slice way and each slice in the sCT volume is produced. We use the L1 error between predicted sCT slices and ground truth CT slices as the loss function.

Figure 1: Overview of our method.
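A minimal sketch of this 2.5D formulation is given below; the backbone is a stand-in for the U-Net/EfficientNet model of Section 3.2, and the centered window alignment is our assumption, since the report does not state which output slice the \(N\) input slices map to.

```python
import torch
import torch.nn as nn

N = 3  # number of consecutive input slices (selected by hyper-parameter tuning)
backbone = nn.Conv2d(N, 1, kernel_size=3, padding=1)  # stand-in for the U-Net

def predict_slice(volume: torch.Tensor, k: int) -> torch.Tensor:
    """volume: (D, H, W) MRI/CBCT volume; returns the predicted sCT slice k.
    Valid for N // 2 <= k < D - N // 2 (boundary slices would need padding)."""
    window = volume[k - N // 2: k + N // 2 + 1]       # (N, H, W) slice stack
    return backbone(window.unsqueeze(0)).squeeze(0)   # treated as an N-channel image

loss_fn = nn.L1Loss()  # L1 error between predicted sCT and ground-truth CT slices
```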
## 3 Experiments

### Dataset

Data was acquired for radiotherapy treatments in the radiotherapy departments of UMC Utrecht, UMC Groningen, and Radboud Nijmegen [2]. The numbers of data are summarized in Table 1. Each data point includes a source image (MRI for the MRI-to-sCT task and CBCT for the CBCT-to-sCT task), ground truth (CT) and a mask. We divide each dataset into training and validation data. The numbers of training and validation data are 162 and 18 in each dataset, respectively.

\begin{table}
\begin{tabular}{c c c}
\hline
Task & Organ & Number of data \\
\hline
\multirow{2}{*}{MRI-to-sCT} & Brain & 180 \\
& Pelvis & 180 \\
\multirow{2}{*}{CBCT-to-sCT} & Brain & 180 \\
& Pelvis & 180 \\
\hline
\end{tabular}
\end{table}
Table 1: Datasets

### Experimental conditions

We used U-Net [3] as the basic segmentation network and replaced its encoder part with EfficientNet [4]. We conducted hyper-parameter tuning. The hyper-parameters include the encoder size, the number of slices, and the initial learning rate. As the result of hyper-parameter tuning, we selected EfficientNet-B7 as the encoder and 3 as the number of slices. The initial learning rates were selected as 1\(\times\)10\({}^{-3}\), 5\(\times\)10\({}^{-4}\), 1\(\times\)10\({}^{-4}\), and 5\(\times\)10\({}^{-5}\) for task-1 brain, task-1 pelvis, task-2 brain, and task-2 pelvis, respectively. The optimizer was AdamW [5] and the learning rate was decreased at every epoch with cosine annealing. The number of epochs was 100, and we used the model with the lowest loss value on the validation data as the final model. As pre-processing, histogram normalization was performed for MRI volumes. No data augmentations were performed.

### Experimental results

Table 2 shows the summary of the experimental results. We show two metrics, PSNR and Mean Absolute Error (MAE), both measuring the differences between sCT and ground truth CT. As for the tasks, no large differences can be seen between MRI-to-sCT and CBCT-to-sCT.

\begin{table}
\begin{tabular}{c c c c}
\hline
Task & Organ & PSNR (dB) \(\uparrow\) & Mean Absolute Error (HU) \(\downarrow\) \\
\hline
\multirow{2}{*}{MRI-to-sCT} & Brain & 27.06 & 77.93 \\
& Pelvis & 28.51 & 64.26 \\
\multirow{2}{*}{CBCT-to-sCT} & Brain & 27.38 & 81.44 \\
& Pelvis & 28.12 & 68.07 \\
\hline
\end{tabular}
\end{table}
Table 2: Experimental results for validation dataset.

Figures 2, 3, 4 and 5 show examples of experimental results. In each figure, (a) shows an input slice (MRI or CBCT), (b) shows the corresponding slice of sCT, and (c) shows the corresponding slice of the ground truth (CT).

Figure 2: Examples of experimental results in MRI-to-sCT / Brain. (a) MRI (input). (b) sCT (output). (c) CT (ground truth).

Figure 3: Examples of experimental results in MRI-to-sCT / Pelvis. (a) MRI (input). (b) sCT (output). (c) CT (ground truth).

Figure 4: Examples of experimental results in CBCT-to-sCT / Brain. (a) CBCT (input). (b) sCT (output). (c) CT (ground truth).

Figure 5: Examples of experimental results in CBCT-to-sCT / Pelvis. (a) CBCT (input). (b) sCT (output). (c) CT (ground truth).

We also evaluated our method on the SynthRAD2023 challenge site [6]. In the preliminary test task, the algorithm was run on six cases on the grand challenge platform, and the system gives MAE, PSNR and SSIM metrics for each case. Table 3 shows the summary of the preliminary test task.
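For reference, the MRI histogram normalization mentioned in Section 3.2 can be sketched as follows; `equalize_hist` from scikit-image is one concrete choice, since the exact normalization algorithm is not specified in the report.

```python
import numpy as np
from skimage import exposure

def preprocess_mri(volume: np.ndarray) -> np.ndarray:
    """Histogram normalization of an MRI volume; returns values in [0, 1]."""
    return exposure.equalize_hist(volume).astype(np.float32)
```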
## 4 Conclusions

In this report, we introduced a method to synthesize CT from MRI or CBCT. Our method is based on multi-slice (2.5D) CNNs. In the experiments, we evaluated the performance of our method for two tasks, MRI-to-sCT and CBCT-to-sCT generation. Target organs for both tasks are brain and pelvis. From the experimental results, no large differences in performance between MRI-to-sCT and CBCT-to-sCT were observed. As for the organs, the results for the pelvis were slightly better than the results for the brain.
2310.18894
Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity
Current deep-learning models for object recognition are known to be heavily biased toward texture. In contrast, human visual systems are known to be biased toward shape and structure. What could be the design principles in human visual systems that led to this difference? How could we introduce more shape bias into the deep learning models? In this paper, we report that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into the network. We found that enforcing the sparse coding constraint using a non-differentiable Top-K operation can lead to the emergence of structural encoding in neurons in convolutional neural networks, resulting in a smooth decomposition of objects into parts and subparts and endowing the networks with shape bias. We demonstrated this emergence of shape bias and its functional benefits for different network structures with various datasets. For object recognition convolutional neural networks, the shape bias leads to greater robustness against style and pattern change distraction. For the image synthesis generative adversarial networks, the emerged shape bias leads to more coherent and decomposable structures in the synthesized images. Ablation studies suggest that sparse codes tend to encode structures, whereas the more distributed codes tend to favor texture. Our code is hosted at the github repository: \url{https://github.com/Crazy-Jack/nips2023_shape_vs_texture}
Tianqin Li, Ziqi Wen, Yangfan Li, Tai Sing Lee
2023-10-29T04:07:52Z
http://arxiv.org/abs/2310.18894v1
# Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity

###### Abstract

Current deep-learning models for object recognition are known to be heavily biased toward texture. In contrast, human visual systems are known to be biased toward shape and structure. What could be the design principles in human visual systems that led to this difference? How could we introduce more shape bias into the deep learning models? In this paper, we report that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into the network. We found that enforcing the sparse coding constraint using a non-differentiable Top-K operation can lead to the emergence of structural encoding in neurons in convolutional neural networks, resulting in a smooth decomposition of objects into parts and subparts and endowing the networks with shape bias. We demonstrated this emergence of shape bias and its functional benefits for different network structures with various datasets. For object recognition convolutional neural networks, the shape bias leads to greater robustness against style and pattern change distraction. For the image synthesis generative adversarial networks, the emerged shape bias leads to more coherent and decomposable structures in the synthesized images. Ablation studies suggest that sparse codes tend to encode structures, whereas the more distributed codes tend to favor texture. Our code is hosted at the github repository: [https://github.com/Crazy-Jack/nips2023_shape_vs_texture](https://github.com/Crazy-Jack/nips2023_shape_vs_texture)

## 1 Introduction

Sparse and efficient coding is a well-known design principle in the sensory systems of the brain [3; 29]. Recent neurophysiological findings based on calcium imaging found that neurons in the superficial layer of the macaque primary visual cortex (V1) exhibit an even higher degree of lifetime sparsity and population sparsity in their responses than previously expected. Only 4-6 out of roughly 1000 neurons would respond strongly to any given natural image [35]. Conversely, a neuron typically responded strongly to only 0.4% of randomly selected natural scene images. This high degree of response sparsity is commensurate with the observation that many V1 neurons are strongly tuned to more complex local patterns in a global context rather than just oriented bars and gratings [34]. On the other hand, over 90% of these neurons did exhibit statistically significant orientation tuning, though mostly with much weaker responses. This finding is reminiscent of an earlier study that found similarly sparse encoding of multi-modal concepts in the hippocampus [30]. This leads to the hypothesis that neurons can potentially serve both as a super-sparse specialist code with their **strong responses**, encoding specific prototypes and concepts, and as a more distributed code, serving as the classical sparse basis functions for encoding images with much **weaker responses**. The specialist code is related to the idea of prototype code, the usefulness of which has been explored in deep learning for serving as memory priors [22] in image generation, for representing structural visual concepts [36; 37], or for constraining parsimonious networks [24] for object recognition. In the computer vision community, recent studies found that Convolutional Neural Networks (CNNs) trained for object recognition rely heavily on texture information [11].
This texture bias leads to misclassification when objects possess similar textures but different shapes [2]. In contrast, human visual systems exhibit a strong 'shape bias' in that we rely primarily on shape and structure over texture for object recognition and categorization [19]. For instance, a human observer would see a spherical object as a ball, regardless of its texture patterns or material make-up [32]. This poses an interesting question: What is the design feature in human visual systems that leads to the shape bias in perception? In this paper, we explore whether the constraint of the high degree of strong-response sparsity in biological neural networks can induce shape bias in neural networks. Sparsity, particularly in overcomplete representations, is known to encourage the formation of neurons encoding more specific patterns [28]. Here, we hypothesize that these learned specific patterns contain more shape and structure information; thus sparsifying the neuronal activation could induce shape bias in the neuronal representation. To test this hypothesis, we impose a sparsity mechanism by keeping the Top-K absolute responses of neuronal activations in each channel in one or multiple layers of the network, and zeroing out the less significant activations, where K is a sparsity parameter that we can adjust for systematic evaluation. We found that this sparsity mechanism can indeed introduce more shape bias into the network. In fact, simply introducing the Top-K operation during inference in pre-trained CNNs such as AlexNet [18] or VGG16 [31] can already push the frontier of the shape bias benchmark created by [10] (as shown in Figure 1). Additional training of these networks with the Top-K operation in place further enhances the shape bias in these object recognition networks. Furthermore, we found that the Top-K mechanism also improves the shape and structural bias in image synthesis networks. In the few-shot image synthesis task, we show that the Top-K operation can make objects in the synthesized images more distinct and coherent. To understand why the Top-K operation can induce these effects, we analyzed the information encoded in Top-K and non-Top-K responses using the texture synthesis paradigm and found that Top-K responses tend to encode structural parts, whereas non-Top-K responses contribute primarily to texture and color encoding, even in the higher layers of the networks. Our finding suggests that sparse coding is important not just for making neural representation more efficient and saving metabolic energy but also for contributing to the explicit encoding of shape and structure information in neurons for image analysis and synthesis, which might allow the system to analyze and understand 3D scenes in a more structurally oriented, part-based manner [4], making object recognition more robust.

Figure 1: Shape bias of our sparse CNNs versus standard CNNs and SOTA transformer-based networks in comparison to the shape bias of human subjects, as evaluated on the benchmark dataset [10] across 16 classes. The red dotted line shows the frontier of transformer-based networks with the best shape bias. The green dotted line shows that sparse CNNs push the frontier of the shape bias boundary toward humans.

## 2 Related Works

**Shape Bias v.s. Texture Bias** There has been considerable debate over the intrinsic biases of Convolutional Neural Networks (CNNs).
[11] conducted a pivotal study demonstrating that these models tend to rely heavily on texture information for object recognition, leading to misclassifications when objects have similar textures but distinct shapes. In addition, it has also been shown that texture information alone is sufficient to achieve object classification [6]. This texture bias contrasts markedly with human visual perception, which exhibits a strong preference for shape over texture - a phenomenon known as 'shape bias' [19]. Humans tend to categorize and recognize objects primarily based on their shape, a factor that remains consistent across various viewing conditions and despite changes in texture [32]. These studies collectively form the foundation upon which our work builds, as we aim to bridge the gap in shape bias between computer vision systems and human visual systems.

**Improving Shape Bias of Vision Models.** Following the identification of texture bias in CNNs by [11], numerous studies sought to improve models' shape bias for better generalization. Training methods have been developed to make models more shape-biased, improving out-of-distribution generalization. Some approaches, like [11], involved training with stylized images to disconnect texture information from the class label. This approach posed computational challenges and did not scale well. Others, like [14], used human-like data augmentation to mitigate the problem, while [21] proposed shape-guided augmentation, generating different styles on different sides of an image's boundary. However, these techniques all rely on data augmentation. Our focus is on architectural improvements for shape bias, similar to [1], which created a texture-biased model by reducing the CNN model's receptive field size. We propose using sparsity operations to enhance the shape bias of CNNs. Furthermore, [7] proposes scaling the transformer model up to 22 billion parameters and shows near-human shape bias evaluation results. We, on the other hand, do not compare with their network since we focus on CNNs, which require less computation and do not require huge amounts of data to learn. We demonstrate in the supplementary that the same sparsity constraint can also be beneficial to the ViT family, hinting at the generalizability of our findings.

**Robustness in Deep Learning.** Robustness in the deep learning literature typically refers to robustness against the adversarial attack suggested by [33], which showed that minuscule perturbations to images, imperceptible to the human eye, can drastically alter a deep learning model's predictions. Subsequent research [13; 20] corroborated these findings, showing that deep neural networks (DNNs) are vulnerable to both artificially-induced adversarial attacks and naturally occurring, non-adversarial corruptions. However, the robustness we address in this paper concerns robustness against confusing textures that are misaligned with the correct object class, as illustrated by the cue-conflict datasets provided by [11]. Although sparsity has been shown to be effective against adversarial attacks [23], the explicit use of Top-K for shape bias has not been explored.

## 3 Method

### Spatial Top-K Operation in CNN

**Sparse Top-K Layer.** We implement the sparse coding principle by applying a Top-K operation which keeps the K most significant responses in each channel across all spatial locations in a particular layer.
Specifically, for an internal representation tensor \(X\in R^{c\times h\times w}\), the Top-K layer produces \(\text{X}_{\texttt{Top\_K}}:=\texttt{Top\_K}(\text{X},\text{K})\), where \(\text{X}_{\texttt{Top\_K}}\) is defined as:

\[\text{X}_{\texttt{Top\_K}}[i,j,k]:=\begin{cases}\text{X}[i,j,k],&\text{if }\texttt{abs}(\text{X}[i,j,k])\geq\texttt{Rank}(\texttt{abs}(\text{X}[i,:,:]))[K]\\ 0,&\text{otherwise}\end{cases} \tag{1}\]

Equation 1 specifies how each entry of a feature tensor \(\text{X}\in R^{c\times h\times w}\) is transformed inside the Top-K layer. The zero-out operation in Equation 1 implies that the gradient w.r.t. any non-Top-K value, as well as the gradients that chain with it in the previous layers, will become zero. However, our analysis later suggests that the network can still learn and get optimized, leading to improved dynamic spatial Top-K selection.

**Sparse Top-K With Mean Replacement.** To determine the relative importance of the Top-K values versus the Top-K positions in the Top-K operation, we create an alternative scheme, Top_K_Mean_Rpl, in which all the Top-K responses in a channel are replaced by the mean of the Top-K responses in that channel, as defined below:

\[\text{X}_{\texttt{Top\_K\_Mean\_Rpl}}[i,j,k]:=\begin{cases}\frac{1}{K}\sum_{(j^{\prime},k^{\prime})\in\text{Top-K}(i)}\text{X}[i,j^{\prime},k^{\prime}],&\text{if }\texttt{abs}(\text{X}[i,j,k])\geq\texttt{Rank}(\texttt{abs}(\text{X}[i,:,:]))[K]\\ 0,&\text{otherwise}\end{cases}\]

This Top_K_Mean_Rpl operation reduces the communication rate between layers by 1000 times. We study the impact of this operation on performance in an object recognition network (see Section 4.3 for the results) to determine which type of information (values versus positions) is essential for inducing the shape bias.
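As a concrete reference, a minimal PyTorch sketch of the spatial Top-K layer of Equation 1 might look as follows. The module name `SpatialTopK` and the fraction-based parameterization of K are our assumptions; the released code may differ in details such as tie handling.

```python
import torch
import torch.nn as nn

class SpatialTopK(nn.Module):
    """Keep the K spatially largest-magnitude responses per channel, zero the rest."""

    def __init__(self, sparsity: float = 0.2):
        super().__init__()
        self.sparsity = sparsity  # fraction of spatial positions kept per channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = max(1, int(self.sparsity * h * w))
        flat = x.reshape(b, c, h * w)
        # threshold = magnitude of the K-th largest absolute response per channel
        thresh = flat.abs().topk(k, dim=-1).values[..., -1:]
        mask = (flat.abs() >= thresh).to(x.dtype)
        return (flat * mask).reshape(b, c, h, w)

if __name__ == "__main__":
    layer = SpatialTopK(sparsity=0.2)
    y = layer(torch.randn(2, 64, 16, 16))
    # roughly 20% of each channel's 256 positions remain non-zero
    print((y[0, 0] != 0).float().mean())
```

Because the surviving values are produced by a mask multiplication, gradients flow only through the retained positions, consistent with the zero-gradient property of Equation 1 discussed above.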
### Visualizing the Top-K code using Texture Synthesis

We used a texture synthesis approach [9] with ablation of Top-K responses to explore the information contained in the Top-K responses of a particular layer, using the following method. Suppose a program \(F(\cdot)\) denotes a pre-trained VGG16 network with N parameters [31] and \(TS:\mathbb{R}^{h\times w\times 3}\times N\to\mathbb{R}^{h\times w\times 3}\) denotes the texture synthesis program from [9], where an input image \(I\) is iteratively optimized by gradient descent to best match the target image \(T\)'s internal activations when passing through \(F(\cdot)\). We detail the operations inside \(TS\) below. Denote the internal representation at each layer \(i\) of the VGG16 network when passing an input image \(I\) through \(F(\cdot)\) as \(X_{i}(I)\), and suppose there exist \(L\) layers in the VGG16 network. We update the image \(I\) as follows:

\[I\gets I-lr*\left(\tfrac{\partial}{\partial I}\sum_{i}^{L}[Gr(X_{i}(I))-Gr(X_{i}(T))]\right)\]

where \(Gr(\cdot):\mathbb{R}^{h\times w\times c}\to\mathbb{R}^{c\times c}\) denotes the function that computes the Gram matrix of a feature tensor, i.e. \(Gr(X_{i}(I))=X_{i}(I)^{T}X_{i}(I)\). We adopt LBFGS [26] with initial learning rate 1.0 and 100 optimization steps in all our experiments with the texture synthesis program.

Utilizing the above texture synthesis program \(TS(\cdot,\text{VGG16})\), we can obtain the synthesis result \(S_{\text{w/o Top-K}}\) by manipulating the internal representation of VGG16 such that we only use the non-Top-K responses to compute the Gram matrix when forming the Gram matrix optimization objectives. This effectively computes a synthesis that only matches the internal non-Top-K neural responses. For a given target image \(T\), this leads to \(S_{\text{w/o Top-K}}\):

\[S_{\text{w/o Top-K}}=TS(T,\mathtt{ZeroOutInternalTopK}(\text{VGG16}))\]

which shows the information encoded by the non-Top-K responses. Next, we include the Top-K firing neurons when computing the Gram matrix to get \(S_{\text{w/ Top-K}}\):

\[S_{\text{w/ Top-K}}=TS(T,\mathtt{IdentityFunction}(\text{VGG16}))\]

Comparing these two results allows us to assess the information contained in the Top-K responses.

### Visualizing the Top-K neurons via Reconstruction

Similar to Section 3.2, we provide further visualization of the information Top-K neurons encode by iteratively optimizing an image to match the internal Top-K activations directly. Mathematically, we redefine the optimization objective of Section 3.2 as:

\[I\gets I-lr*\left(\tfrac{\partial}{\partial I}\sum_{i}^{L}[X_{i}(I)-\mathtt{Mask}_{i}*X_{i}(T)]\right)\]

where \(\mathtt{Mask}_{i}\) is a controllable mask for each layer \(i\). There are three types of masks used in the experiments: \(\{\text{Top-K\_Mask},\text{ non\_Top-K\_Mask},\text{ Identity\_Mask}\}\). Top-K_Mask selects only the Top-K fired neurons while keeping the rest of the neurons zero, whereas non_Top-K_Mask selects the complement of Top-K_Mask, and Identity_Mask preserves all neurons. By comparing these three settings, one can easily tell the functional difference between Top-K and non-Top-K fired neurons (see results in Figure 3).

### Shape Bias Benchmark

To demonstrate our proposal that the Top-K responses encode the structural and shape information, we silence the non-Top-K responses during inference when using pre-trained CNNs. To test the networks' shape bias, we directly integrate our code into the benchmark provided by [10]. The benchmark contains a cue-conflict test which we use to evaluate the Top-K operation. The benchmark also includes multiple widely adopted models with pre-trained checkpoints and human psychological evaluations on the same cue-conflict testing images.
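A minimal sketch of the ablated Gram-matrix objective behind \(S_{\text{w/o Top-K}}\) and \(S_{\text{w/ Top-K}}\) is given below. The function names `gram`, `zero_out_topk`, and `ts_loss` are hypothetical; extracting the per-layer features \(X_i\) from a real VGG16 via forward hooks, and the LBFGS loop itself, are omitted for brevity.

```python
import torch

def gram(x: torch.Tensor) -> torch.Tensor:
    # Gr(X) = X X^T over spatial positions, giving a c x c channel-correlation matrix
    c, h, w = x.shape
    f = x.reshape(c, h * w)
    return f @ f.t()

def zero_out_topk(x: torch.Tensor, sparsity: float = 0.2) -> torch.Tensor:
    # delete the Top-K responses so only the weaker, non-Top-K activity remains
    c, h, w = x.shape
    k = max(1, int(sparsity * h * w))
    flat = x.reshape(c, h * w)
    thresh = flat.abs().topk(k, dim=-1).values[..., -1:]
    keep = (flat.abs() < thresh).to(x.dtype)
    return (flat * keep).reshape(c, h, w)

def ts_loss(feats_img, feats_target, ablate_topk: bool) -> torch.Tensor:
    # sum of squared Gram differences across layers; with ablate_topk=True this
    # is the objective behind S_{w/o Top-K}, otherwise the one behind S_{w/ Top-K}
    loss = torch.zeros(())
    for xi, xt in zip(feats_img, feats_target):
        if ablate_topk:
            xi, xt = zero_out_topk(xi), zero_out_topk(xt)
        loss = loss + ((gram(xi) - gram(xt)) ** 2).sum()
    return loss
```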
## 4 Results

### Top-K Neurons Encode Structural Information

To test the hypothesis that the shape information is mostly encoded among the Top-K significant responses, whereas the non-Top-K responses encode primarily textures, we used the method described in Section 3.2 and compared the texture images synthesized with and without the Top-K responses for the computation of the Gram matrix. Figure 2 compares the TS output obtained by matching the early layers and the higher layers of the VGG16 network in the two conditions. One can observe that ablation of the Top-K responses eliminated much of the structural information, resulting in more texture-like images. We conclude from this experiment that (1) Top-K responses encode structural part information; (2) non-Top-K responses primarily encode texture information.

Figure 2: Texture Synthesis (TS) using [9]. i. shows the original image, ii. shows the TS result \(S_{\text{w/ Top-K}}\) with both Top-K and non-Top-K activations intact, iii. shows the TS result \(S_{\text{w/o Top-K}}\) with the Top-K activations deleted before performing TS.

To provide further insights into the different information Top-K and non-Top-K neurons encode, we show another qualitative demonstration in Figure 3, where we optimize images that would excite the Top-K neurons alone and the non-Top-K neurons alone, respectively (see the full description in Section 3.3). From Figure 3, it is clear that optimizing images to match the Top-K fired neurons yields high-level scene structures with details abstracted away, while optimization efforts to match the non-Top-K fired neurons produce low-level local textures of the target images. Together, we provide evidence to support our hypothesis that it is the strongly firing neurons in convolutional neural networks that provide structural information, while the textures are encoded among the weakly activated neurons. Next, we demonstrate that this phenomenon results in improved shape bias in both analysis and synthesis tasks.

Figure 3: Visualizing Top-K and non-Top-K neurons through optimizing input images to match their activations.

### Top-K Responses already have Shape Bias without Training

We test the Top-K activated CNNs with different degrees of sparsity on the shape bias benchmark proposed by [10]. This benchmark evaluates the shape bias by using a texture-shape cue-conflict dataset where the texture of an image is replaced with the texture from other classes of images. It defines the shape and texture bias in the following ways:

\[\textbf{shape bias}=\frac{\textbf{\# of correct shape recognitions}}{\textbf{\# of correct recognitions}}\]

\[\textbf{texture bias}=\frac{\textbf{\# of correct texture recognitions}}{\textbf{\# of correct recognitions}}\]

It has been shown in previous work [10; 11] that CNNs perform poorly on shape-based decision tests, whereas human subjects can make successful shape-based classifications in nearly all the evaluated cases. This results in CNN models having relatively low shape bias scores while humans have a shape bias score close to 1. Interestingly, it has been observed that the Vision Transformer (ViT) model family has attained significant improvement in shape bias [10]. Adding the Top-K operation to a simple pretrained network such as AlexNet or VGG16 can already induce a significant increase in shape bias, as shown in Fig. 4. With the sparsity knob K equal to 10% and 20%, the Top-K operation alone appears to achieve as much or more shape bias than the state-of-the-art Vision Transformer models on the cue-conflict dataset, lending further support to the hypothesis that Top-K sparsity can lead to shape bias. We plot the best of the Top-K sparsified AlexNet and VGG16 for each evaluation of 16 object classes in Fig. 5. We can observe that the sparsity constraint improves shape-biased decision-making for most of the object classes, bringing the performance of the pre-trained model closer to human performance. With the proper settings of sparsity, certain classes (e.g. the bottle and the clock category) could attain human-level performance in shape-bias scores. However, we should note that the confidence interval is quite large, indicating that the network performs differently across the different classes. A closer look at shape bias for each class is shown in Figure 5.
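For clarity, the shape-bias and texture-bias metrics above can be computed from cue-conflict predictions as in the following minimal sketch. The data layout and function name are hypothetical; each test image carries one shape label and one conflicting texture label, so a prediction can match at most one of the two cues.

```python
def shape_and_texture_bias(preds, shape_labels, texture_labels):
    """Fraction of 'correct' recognitions that follow the shape vs. the texture cue."""
    shape_hits = sum(p == s for p, s in zip(preds, shape_labels))
    texture_hits = sum(p == t for p, t in zip(preds, texture_labels))
    correct = shape_hits + texture_hits  # predictions matching either cue
    if correct == 0:
        return 0.0, 0.0
    return shape_hits / correct, texture_hits / correct

# e.g. 3 of the 4 cue-matching recognitions follow the shape cue -> shape bias 0.75
print(shape_and_texture_bias([0, 0, 1, 2, 5], [0, 0, 1, 3, 4], [9, 8, 7, 2, 6]))
```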
### Top-K training induces Shape Bias in Recognition Networks

To evaluate whether the shape bias can be enhanced by training with the Top-K operation, we trained ResNet-18 [12] on different subsets of the ImageNet dataset [8]. Each subset contains 10 randomly selected original categories from ImageNet, used for all training and evaluation. Every experiment is run three times to obtain an error bar. During the evaluation, we employ AdaIN style transfer using programs adopted from [17] to transform the evaluation images into a texture-ablated form, as shown in Figure 6. The original texture of the image is replaced by the styles of non-related images using style transfer. This allows us to evaluate how much a trained model is biased toward the original texture instead of the underlying shape.

Figure 4: Overall shape bias of sparse CNNs, CNNs, Transformers and humans.

In this experiment, we train classification networks with two non-overlapping subsets of ImageNet, namely IN-\(S_{1}\) and IN-\(S_{2}\). We select categories that are visually distinctive in their shapes. The details about the datasets can be found in the Supplementary Information. We trained ResNet-18 models over the selected IN-\(S_{1}\) and IN-\(S_{2}\) datasets with standard Stochastic Gradient Descent (SGD, batch size 32) and a cosine annealing learning rate decay scheduling protocol with the learning rate starting from 0.1. The same optimization is applied to ResNet-18 models with a 20% spatial Top-K layer added after the second bottleneck block of the ResNet-18. All models are then evaluated on the stylized version of the IN-\(S_{1}\) and IN-\(S_{2}\) evaluation datasets after training for 50 epochs.

\begin{table} \begin{tabular}{c c c c c} \hline Top-1 Acc. (\%) & IN-\(S_{1}\) (\(\uparrow\)) & Stylized-IN-\(S_{1}\) (\(\uparrow\)) & IN-\(S_{2}\) (\(\uparrow\)) & Stylized-IN-\(S_{2}\) (\(\uparrow\)) \\ \hline ResNet-18 [12] & 87.8 \(\pm\) 0.5 & 49.3 \(\pm\) 1.5 & 81.3 \(\pm\) 1.7 & 52.4 \(\pm\) 2.2 \\ ResNet-18 w. Top-K during training & **89.4** \(\pm\) 0.6 & **55.4** \(\pm\) 0.8 & 83.4 \(\pm\) 0.9 & **59.7** \(\pm\) 0.6 \\ \hline ResNet-18 w. Top\_K\_Mean\_Rpl during training & 84.9 \(\pm\) 0.3 & 56.8 \(\pm\) 1.7 & 75.5 \(\pm\) 2.5 & 53.1 \(\pm\) 1.0 \\ \hline \end{tabular} \end{table} Table 1: Evaluation for models trained on the IN-\(S_{1}\) and IN-\(S_{2}\) datasets, each of which consists of 10 classes with all train/val data from the ImageNet-1k dataset [8].

Figure 5: The classification results on the shape bias benchmark proposed by [10]. This plot shows the shape bias of sparse CNNs, CNNs and humans on different classes in the texture-shape cue-conflict dataset. It also shows the shape bias at different sparsity degrees, e.g. 5% means that only the top 5% of activation values are passed to the next layer. Vertical lines mark the average values.

Figure 6: Evaluating shape bias of the network with stylized ImageNet subsets. Three pairs of images sampled from our evaluation datasets are presented. Specifically, we transfer (a) \(\rightarrow\) (b), (c) \(\rightarrow\) (d) and (e) \(\rightarrow\) (f) by AdaIN [17] and keep the original class labels. During the evaluation, the transferred images are presented instead of the original test images to measure the network's texture bias sensitivity.
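A sketch of this training configuration is given below. torchvision's `resnet18` is a real API; the exact insertion point of the Top-K layer is our reading of "after the second bottleneck block", and `SpatialTopK` repeats the module sketched in the Section 3.1 example for self-containment.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SpatialTopK(nn.Module):
    # same operation as the Section 3.1 sketch, repeated here for self-containment
    def __init__(self, sparsity: float = 0.2):
        super().__init__()
        self.sparsity = sparsity

    def forward(self, x):
        b, c, h, w = x.shape
        k = max(1, int(self.sparsity * h * w))
        flat = x.reshape(b, c, h * w)
        thresh = flat.abs().topk(k, dim=-1).values[..., -1:]
        return (flat * (flat.abs() >= thresh).to(x.dtype)).reshape(b, c, h, w)

def make_topk_resnet18(num_classes: int = 10, sparsity: float = 0.2) -> nn.Module:
    model = resnet18(num_classes=num_classes)
    # append the 20% spatial Top-K layer to the output of the second residual stage;
    # this placement is our interpretation of the text, not the authors' exact code
    model.layer2 = nn.Sequential(model.layer2, SpatialTopK(sparsity))
    return model

model = make_topk_resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # SGD with batch size 32 in the text
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)  # 50 epochs
```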
Table 1 shows that (i) the classification accuracy on the original evaluation dataset does not drop: mean top-1 accuracy of 87.8 (baseline) v.s. 89.4 (w. Top-K) on IN-\(S_{1}\) and 81.3 (baseline) v.s. 83.4 (w. Top-K) on IN-\(S_{2}\), respectively, even when we push the sparsification to K = 20%; (ii) the shape bias improves significantly: mean top-1 accuracy of 55.4 (w. Top-K) v.s. 49.3 (baseline) on Stylized-IN-\(S_{1}\) and 59.7 (w. Top-K) v.s. 52.4 (baseline) on Stylized-IN-\(S_{2}\), respectively. This supports our conjecture that the sparse code could introduce more shape bias, in comparison to the dense representation, during learning.

To further investigate why Top-K might induce shape bias, we evaluate whether the values of the Top-K responses matter by compressing the information in each channel to the mean of the Top-K responses of that channel at the Top-K positions. This reduces the information of each channel to only a binary mask indicating the Top-K responding locations and a single float number that relays the channel's weighted contribution to downstream layers, effectively compressing the communication rate by 3 orders of magnitude (see Section 3.1 for a detailed description of the Top_K_Mean_Rpl). Despite the enormous amount of data compression from replacing the Top-K values with their mean, the network can still maintain a shape bias comparable to the normal ResNet-18 baseline (as indicated by the improved or on-par performance on Stylized-IN-\(S_{1}\)/\(S_{2}\) between ResNet-18 and ResNet-18 w. Top_K_Mean_Rpl in Table 1). This suggests that the spatial map of the Top-K activations is more important than the precise values of the Top-K responses, and that a significant amount of the object shape features is actually encoded in the occupancy map of significant neural activities, i.e. the binary mask of the Top-K.
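A minimal sketch of the Top_K_Mean_Rpl operation analyzed above (defined in Section 3.1) follows. The function name is hypothetical, and taking the mean over the signed Top-K values is our reading of the definition.

```python
import torch

def topk_mean_replacement(x: torch.Tensor, sparsity: float = 0.2) -> torch.Tensor:
    """Top_K_Mean_Rpl: keep only the Top-K occupancy mask plus one value per channel."""
    b, c, h, w = x.shape
    k = max(1, int(sparsity * h * w))
    flat = x.reshape(b, c, h * w)
    thresh = flat.abs().topk(k, dim=-1).values[..., -1:]
    mask = (flat.abs() >= thresh).to(x.dtype)
    # replace every surviving position by the mean of that channel's Top-K values
    mean_val = (flat * mask).sum(dim=-1, keepdim=True) / mask.sum(dim=-1, keepdim=True)
    return (mask * mean_val).reshape(b, c, h, w)
```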
### Towards Shape Biased Few Shot Image Synthesis

Humans are great few-shot learners, i.e. we learn from a few examples. This ability might be related to our cognitive ability to learn a generative model of the world that allows us to reason and imagine. Recurrent feedback in the hierarchical visual system has been hypothesized to implement such a generative model. We investigate whether the Top-K operation also induces shape bias in the synthesis network for the few-shot image synthesis task. We hypothesize that shape bias can benefit few-shot image synthesis by emphasizing structural and shape information. Figure 7(b) shows that the state-of-the-art few-shot synthesis program (FastGAN [25]) suffers severely from texture bias. We found that by introducing the Top-K operation in the fourth layer (the 32 x 32 layer) of FastGAN, significant improvements in the synthesis results can be obtained on datasets selected from ImageNet [8] (100 samples each from four diverse classes, synthesizing each class independently; see the Supplementary for details), as shown in Figure 7. Images from ImageNet possess rich structural complexity and population diversity. To be considered a good synthesis, generated samples would have to achieve strong global shape coherence in order to have good matching scores with the real images. A better quantitative evaluation result would therefore suggest the emergence of a stronger shape or structural bias in the learned representation. Samples of the four classes are shown in Figure 7(a). To assess the image synthesis quality, we randomly sample 3000 latent noise vectors and pass them through the trained generators. The generated images are then compared with the original training images in the Inception-v3 encoder space by Fréchet Inception Distance (FID [15]) and Kernel Inception Distance (KID [5]) scores, documented in Table 2.

Each setting is run 3 times to produce an error bar. First, the synthesis quality measurements for adding the Top-K operation to the FastGAN network show a consistent improvement in terms of FID and KID scores in Table 2. Figure 7(b) shows that the Top-K operation leads to the generation of objects (e.g. the jeep) that are more structurally coherent and distinct, compared to the original FastGAN. Specifically, we observe a 21.1% improvement in FID scores and a 50.8% improvement in KID scores for the Jeep-100 class when K = 5% sparsity was imposed during training (i.e. only 5% of neurons are kept active). Similarly, when 5% sparsity is imposed on the Fish-100 and Train-100 datasets, the FID improves by 17.3% and 12% respectively, and the KID performance is boosted by 48.5% and 33.4%. Lastly, we test the indoor table class, which contains complex objects with many inter-connected parts and subparts. A K = 15% sparsity leads to gains of 9.3% and 22.3% in FID and KID respectively for synthesizing similar indoor tables. Overall, our experiments show that introducing the Top-K operation in a challenging few-shot synthesis task can significantly improve the network's generated results on a diverse set of complicated natural objects.

Figure 7: Few-shot image synthesis datasets and qualitative comparison results between our method and FastGAN [25].

### Parts Learning in Sparse Top-K

Finally, we study the training dynamics of the Top-K layer's internal representation. In Figure 8, we can make the following observations. (1) By applying the sparsity constraint using Top-K, each channel effectively performs a binary spatial segmentation, i.e. the spatial dimension of the image is separated into a Top-K region and a non-Top-K territory. (2) Although there is no explicit constraint forcing the Top-K neurons to group together, the Top-K responses tend to become connected, forming object parts and subparts, as training evolves. We believe the development of this continuous map when training with the Top-K operation might be due to two factors: (1) CNN activations are locally smooth, i.e. two adjacent pixels in layer \(L_{n}\) are linked by the amount of overlap between their corresponding input patches in \(L_{n-1}\); (2) Top-K increases the responsibility of each individual neuron to the loss function. When neurons i and j are selected as Top-K, their responses are likely similar to each other. However, if their corresponding spatial locations in the output have different semantic meanings, they will receive diverging gradients, which will then be amplified by the increased individual responsibility. The diverging gradients would drive the values of neurons i and j apart, resulting in one of them leaving the Top-K group while only the semantically similar ones remain in the Top-K set. This might suggest a principle we call _neurons fire together, optimize together_ during CNN Top-K training, which could lead to the observed emergence of semantic parts and subparts. This localist code could further connect to the recognition-by-components theory [4] and could make it cleaner and easier to achieve shape bias. In functional analysis of the brain, [27] also show that a local smoothness constraint could lead to a topological organization of the neurons, hinting that the hypothesized factors here could have a neuroscientific grounding.
\begin{table} \begin{tabular}{c|c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Jeep-100} & \multicolumn{2}{c}{Fish-100} & \multicolumn{2}{c}{Train-100} & \multicolumn{2}{c}{Table-100} \\ \cline{2-9} & FID \(\downarrow\) & KID* \(\downarrow\) & FID \(\downarrow\) & KID* \(\downarrow\) & FID \(\downarrow\) & KID* \(\downarrow\) & FID \(\downarrow\) & KID* \(\downarrow\) \\ \hline FastGAN [25] & 49.0 \(\pm\) 1.4 & 12.0 \(\pm\) 1.9 & 46.2 \(\pm\) 1.8 & 13.4 \(\pm\) 0.6 & 46.1 \(\pm\) 1.8 & 11.2 \(\pm\) 0.8 & 67.2 \(\pm\) 0.1 & 19.7 \(\pm\) 0.3 \\ FastGAN w. Top-K (ours) & **38.7** \(\pm\) 0.5 & **5.9** \(\pm\) 0.7 & **38.2** \(\pm\) 1.4 & **6.9** \(\pm\) 0.7 & **40.2** \(\pm\) 0.8 & **7.4** \(\pm\) 0.2 & **60.9** \(\pm\) 0.3 & 15.3 \(\pm\) 0.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Few-shot image synthesis results measured in FID [16] and KID [5]. Note that KID* denotes KID scaled by a factor of \(10^{3}\) to demonstrate the difference.

Figure 8: Even though the Top-K operation is not fully differentiable, the network is able to relocate the spatial activation mass smoothly towards connected, meaningful parts, which eventually leads to component learning, as shown in Figure 9.

Figure 9: The synthesis network's internal Top-K layers reveal a semantic decomposition of parts and subparts.

**Top-K Sparsity Hyperparameter.** With the understanding from Section 4.5, we want to reiterate the importance of the sparsity hyperparameter we used. The amount of sparsity directly translates to the "size of the parts". Thus, depending on the image type, the composition of the scene, the network architecture, and the layers to which the Top-K sparsity operation is applied, the results can differ drastically. We refer to the supplementary material for a more detailed ablation study.

## 5 Conclusion

In this study, we discovered that an operation inspired by a well-known neuroscience design motif, sparse coding, can induce shape bias in neural representation. We demonstrated this in object recognition networks and in few-shot image synthesis networks. We found that simply adding the Top-K sparsity operation can induce shape bias in pre-trained convolutional neural networks, and that training CNNs and GANs with the simple Top-K operation can increase the shape bias further toward human performance, which makes object recognition more robust against texture variations and makes image synthesis generate structurally more coherent and distinct objects. Using texture synthesis, we are able to demonstrate that the Top-K responses carry more structural information, while the non-Top-K responses carry more texture information. The observation that a sparse coding operation can induce shape bias in deep learning networks suggests that sparsity might also contribute to shape bias in human visual systems.

## 6 Ethics Statement

This study investigates whether the sparse coding motif in neuroscience can induce shape bias in deep learning networks. The positive results suggest that sparsity might also contribute to shape bias in the human visual systems, thus providing insights into our understanding of the brain. While deep learning can advance science and technology, it comes with inherent risks to society. We acknowledge the importance of ethical study in all works related to deep learning. A better understanding of deep learning and the brain, however, is also crucial for combating the misuse of deep learning by bad actors in this technological arms race.
## 7 Acknowledgement

This work was supported by an NSF grant CISE RI 1816568 awarded to Tai Sing Lee. This work was also partially supported by a graduate student fellowship from the CMU Computer Science Department.
2308.04749
Enhancing Efficient Continual Learning with Dynamic Structure Development of Spiking Neural Networks
Children possess the ability to learn multiple cognitive tasks sequentially, which is a major challenge toward the long-term goal of artificial general intelligence. Existing continual learning frameworks are usually applicable to Deep Neural Networks (DNNs) and lack the exploration on more brain-inspired, energy-efficient Spiking Neural Networks (SNNs). Drawing on continual learning mechanisms during child growth and development, we propose Dynamic Structure Development of Spiking Neural Networks (DSD-SNN) for efficient and adaptive continual learning. When learning a sequence of tasks, the DSD-SNN dynamically assigns and grows new neurons to new tasks and prunes redundant neurons, thereby increasing memory capacity and reducing computational overhead. In addition, the overlapping shared structure helps to quickly leverage all acquired knowledge to new tasks, empowering a single network capable of supporting multiple incremental tasks (without the separate sub-network mask for each task). We validate the effectiveness of the proposed model on multiple class incremental learning and task incremental learning benchmarks. Extensive experiments demonstrated that our model could significantly improve performance, learning speed and memory capacity, and reduce computational overhead. Besides, our DSD-SNN model achieves comparable performance with the DNNs-based methods, and significantly outperforms the state-of-the-art (SOTA) performance for existing SNNs-based continual learning methods.
Bing Han, Feifei Zhao, Yi Zeng, Wenxuan Pan, Guobin Shen
2023-08-09T07:36:40Z
http://arxiv.org/abs/2308.04749v1
# Enhancing Efficient Continual Learning with Dynamic Structure Development of Spiking Neural Networks

###### Abstract

Children possess the ability to learn multiple cognitive tasks sequentially, which is a major challenge toward the long-term goal of artificial general intelligence. Existing continual learning frameworks are usually applicable to Deep Neural Networks (DNNs) and lack the exploration on more brain-inspired, energy-efficient Spiking Neural Networks (SNNs). Drawing on continual learning mechanisms during child growth and development, we propose Dynamic Structure Development of Spiking Neural Networks (DSD-SNN) for efficient and adaptive continual learning. When learning a sequence of tasks, the DSD-SNN dynamically assigns and grows new neurons to new tasks and prunes redundant neurons, thereby increasing memory capacity and reducing computational overhead. In addition, the overlapping shared structure helps to quickly leverage all acquired knowledge to new tasks, empowering a single network capable of supporting multiple incremental tasks (without the separate sub-network mask for each task). We validate the effectiveness of the proposed model on multiple class incremental learning and task incremental learning benchmarks. Extensive experiments demonstrated that our model could significantly improve performance, learning speed and memory capacity, and reduce computational overhead. Besides, our DSD-SNN model achieves comparable performance with the DNNs-based methods, and significantly outperforms the state-of-the-art (SOTA) performance for existing SNNs-based continual learning methods.

## 1 Introduction

Children are able to incrementally learn new tasks to acquire new knowledge; however, this is a major challenge for Deep Neural Networks (DNNs) and Spiking Neural Networks (SNNs). When learning a series of different tasks sequentially, DNNs and SNNs forget the previously acquired knowledge and suffer from catastrophic forgetting [12]. Although some preliminary solutions have recently been proposed for DNNs-based continual learning, there is still a lack of in-depth inspiration from the brain's continual learning mechanisms and of exploration of SNNs-based models.

Existing studies attempt to address the continual learning problem of DNNs under task incremental learning (recognition within the classes of a known task) and class incremental learning (recognition within all learned classes) scenarios. Related works can be roughly divided into three categories: **a) Regularization.** Employing maximum a posteriori estimation to minimize the changes of important weights [13, 14, 15]. These methods require strong model assumptions, such as EWC [14], which supposes that new weights are updated within local regions of the previous task's weights; these assumptions are highly mathematical abstractions with poor biological plausibility. **b) Replay and retrospection.** Replaying a portion of the samples of old tasks while learning the new task [15, 16, 17] is currently considered the superior class incremental learning method. The samples of old tasks are stored in additional memory space or generated by additional generation networks, resulting in extra consumption. **c) Dynamic network structure expansion.** [18, 19] proposed progressive neural networks that extend a new network for each task, causing a linear increase in network scale.
To reduce network consumption, a sub-network of the whole is selected for each task using pruning and growth algorithms [11, 12], evolutionary algorithms [16] or reinforcement learning (RL) algorithms [13, 14]. However, these methods require storing a mask for each sub-network, which to some extent amounts to storing a separate network for each task, rather than a brain-inspired overall network capable of performing multiple sequential tasks simultaneously.

To the best of our knowledge, there is little research on SNNs-based continual learning. Spiking neural networks, as third-generation neural networks [15, 16], simulate the information processing mechanisms of the brain, and thus serve well as an appropriate level of abstraction for integrating inspirations from the brain's multi-scale biological plasticity to achieve child-like continual learning. The existing HMN algorithm [15] uses a DNN to decide the sub-network of the SNN for each task, and is only applicable to two-layer fully connected networks on the N-MNIST dataset. There is still a lack of SNNs-based continual learning methods that incorporate in-depth inspiration from the brain's continual learning mechanisms while achieving comparable performance with DNNs under complex continual learning scenarios.

Structural development mechanisms allow the brain's nervous system to dynamically expand and contract, as well as flexibly allocate and invoke neural circuits for efficient continual learning [11]. Motivated by this, we propose Dynamic Structure Development of Spiking Neural Networks (DSD-SNN) for efficient and adaptive continual learning. DSD-SNN is designed as an SNN architecture that can be dynamically expanded and compressed, empowering a single network to learn multiple incremental tasks simultaneously, overcoming the need to assign a mask to each task that DNNs-based continual learning methods face. We validate the effectiveness of our proposed model on multiple class incremental learning (CIL) and task incremental learning (TIL) benchmarks, achieving comparable or better performance on the MNIST, N-MNIST, and CIFAR-100 datasets. In particular, the proposed DSD-SNN model achieves an accuracy of 77.92% \(\pm\) 0.29% on CIFAR100, using only 37.48% of the network parameters. The main contributions of this paper can be summarized as follows:

* DSD-SNN dynamically grows new neurons to learn newly arrived tasks, while strongly compressing the network to increase memory capacity and reduce computational overhead.
* DSD-SNN maximally reuses knowledge from previously learned tasks to quickly adapt to and infer new tasks, enabling efficient and adaptive continual learning (with no need to identify a separate sub-network mask for each task).
* The experimental results demonstrate the remarkable superiority of the DSD-SNN model in performance, learning speed, memory capacity and computational overhead compared with state-of-the-art (SOTA) SNNs-based continual learning algorithms, as well as its comparable performance with DNNs-based continual learning algorithms.

## 2 Related Work

This paper mainly focuses on dynamic network structure expansion algorithms based on structural plasticity, which can be divided into progressive neural networks (PNN) and sub-network selection algorithms. In fact, the existing network structure expansion algorithms are mostly designed for DNNs-based continual learning, with little exploration of SNNs.
**Progressive neural networks.** [17] first proposed the progressive neural network and applied it to multiple continual reinforcement learning tasks. The PNN expands a new complete network for each new task and fixes the networks of the old tasks. In addition, lateral connections are introduced between the networks to effectively leverage the knowledge already learned. PIL [13] extends the PNN to large-scale convolutional neural networks for image classification tasks. However, PNN algorithms drastically increase network storage and computational consumption during continual learning. In contrast, as development matures and cognition improves, the number of brain synapses decreases by more than 50% [10], forming a highly sparse brain structure well suited to continual learning. PNNs blindly expand the structure, causing catastrophic effects when there are many sequential tasks.

**Sub-network selection algorithms.** A part of the network's nodes is selected to be activated for a given task. PathNet [10] was first proposed to select path nodes (each node containing a set of neurons) for each task using a genetic algorithm. RPS-Net [14] randomly activates multiple input-to-output paths connected by convolutional blocks, and chooses the highest-performing ones as the final path. In addition, RCL [23] employs additional RL networks to learn the number of neurons required for a new task, while CLEAS [1] uses RL to directly determine the activation and death of each neuron. HMN [15] uses a hybrid network learning framework in which an ANN modulation network determines the activation of neurons of an SNN prediction network, but it is only applied to small-scale networks in simple scenarios. A sub-network mask learning process based on a pruning strategy is proposed by [1] and applied to CIL in combination with a replay strategy. The above algorithms select sub-networks for each task separately, failing to maximize the reuse of acquired knowledge to support new task learning. To address this problem, DRE [24] prunes a sparse convolutional feature extractor for each task, and then merges the output of the convolutional extractor with those of previous tasks. CLNP [18] grows new neurons for a new task based on the old network, and DEN [21] expands the network when the already learned network is insufficient for the new task, while reusing the existing neurons. These works require storing an additional sub-network mask for each task, which both increases storage consumption and is inconsistent with the overall developmental learning process of the brain.

Considering the various limitations of the existing works above, the DSD-SNN proposed in this paper, a pioneering algorithm for SNNs-based continual learning, enables a single network to learn multiple sequential tasks simultaneously, while reusing acquired knowledge and significantly increasing memory capacity.

## 3 Method

### Continual Learning Definition

We are expected to sequentially learn \(\Gamma\) tasks, \(\Gamma=\{T_{1},...,T_{N}\}\). Each task \(T_{i}\) takes the form of a classification problem with its own dataset: \(D_{T_{i}}=\{(x_{j},y_{j})\}_{j=1}^{N_{T_{i}}}\), where \(x_{j}\in\chi,y_{j}\in\{1,...,C_{T_{i}}\}\), \(\chi\) is the input image space, and \(N_{T_{i}}\) and \(C_{T_{i}}\) are the number of samples and classes of task \(T_{i}\).
For the task incremental learning scenario, \(T_{i}\) is known during testing, and the setting requires optimizing:

\[\underset{\theta}{max}\:E_{T_{i}\sim\Gamma}[E_{(x_{j},y_{j})\sim T_{i}}[logp_{\theta}(y_{j}|x_{j},T_{i})]] \tag{1}\]

where \(\theta\) denotes the network parameters. When \(T_{i}\) is unknown during testing, the more complex class incremental learning scenario solves the following problem:

\[\underset{\theta}{max}\:E_{T_{i}\sim\Gamma}[E_{(x_{j},y_{j})\sim T_{i}}[logp_{\theta}(y_{j}|x_{j})]] \tag{2}\]

### DSD-SNN Architecture

The design of the DSD-SNN algorithm is inspired by the dynamic allocation, reorganization, growth, and pruning of neurons during efficient continual learning in the brain. As depicted in Fig. 1, the proposed DSD-SNN model includes three modules (random growth, adaptive pruning, and freezing of neurons) to accomplish multi-task incremental learning.

Figure 1: The DSD-SNN model realizes multi-task incremental learning through random growth, adaptive pruning, and freezing of neurons.

**Random growth.** When a new task arrives, the DSD-SNN model first randomly assigns and grows a portion of untrained empty neurons to form a new pathway, and new task-related classification neurons are added to the output layer, as shown in Fig. 1. Newly grown neurons receive the output of all non-empty neurons of the previous layer (both newly grown neurons and neurons already frozen by previous tasks). Therefore, all features learned from previous tasks can be captured and reused by the neural pathways of the new task. The DSD-SNN algorithm can thus take full advantage of the features learned from previous tasks to help the new task converge quickly, while the newly grown neurons can also focus on learning features specific to the new task.

**Adaptive pruning.** During the learning process of the current task, the DSD-SNN algorithm adaptively detects relatively inactive neurons in the current pathway based on synaptic activity and prunes those redundant neurons to save resources. The pruned neurons are re-initialized as empty neurons that can be assigned to play a more important role in future tasks. Pruning only targets those neurons that are newly grown for the current task and does not include neurons that were frozen in previous tasks. Adaptive pruning can substantially expand the memory capacity of the network to learn and memorize more tasks at a fixed scale.

**Freezing neurons.** The contributing neurons that are retained after pruning will be frozen, enabling the DSD-SNN model to learn new tasks without forgetting the old tasks. The frozen neurons can be connected to newly grown neurons to provide acquired knowledge. During the training of a new task \(T_{i}\), all input synapses of the frozen neurons are no longer updated; only the newly added output synapses to the new neurons can be updated. The DSD-SNN model with neuron growth, pruning, and freezing can memorize previous knowledge and reuse the acquired knowledge to learn new tasks for efficient continual learning.

A deep SNN with multiple convolutional and fully connected layers is constructed to implement task incremental learning and class incremental learning, as shown in Fig. 2. During the training process, we sequentially input training samples of each task and update the synapses newly added to the network.
In the testing process, test samples of all learned tasks are fed into our overall multi-task continual learning network, so that a single DSD-SNN model can perform all tasks without the need to identify a separate sub-network mask for each task. To address the more complex class incremental learning, we add a two-layer network as the task classifier. The task classifier receives inputs from the class outputs of the continual learning network and outputs which task the current sample belongs to (as in the red box in Fig. 2). According to the inferred task \(\hat{T_{i}}\) obtained from the task classifier, the DSD-SNN model chooses the maximum output class of task \(\hat{T_{i}}\) in the continual learning network as the predicted class.

Figure 2: The architecture of the DSD-SNN model.

### DSD-SNN Computational Details

So far in this section, we have described how our model efficiently and adaptively accomplishes continual learning. We now introduce the detailed growth and pruning scheme that we use throughout this paper.

#### 3.3.1 Neuronal Growth and Allocation

During brain development, neurons and synapses are first randomly and excessively grown and then reshaped based on external experience [15, 10]. In the DSD-SNN model, the SNN is first initialized to consist of \(N^{l}\) neurons in each layer \(l\). In the beginning, all neurons in the network are unassigned empty neurons \(N_{empty}\). When a new task \(T_{i}\) arrives, we randomly grow \(\rho\%\times N^{l}\) neurons from the empty neurons for each layer, denoted as \(N_{new}\). After training and pruning for task \(T_{i}\), all retained neurons in \(N_{new}\) are frozen and added to \(N_{frozen}\). To better utilize the acquired knowledge, the newly grown neurons \(N_{new}^{l}\) in each layer not only receive the output of the newly grown neurons \(N_{new}^{l-1}\) in the previous layer, but also receive the output of the frozen neurons \(N_{frozen}^{l-1}\) in the previous layer, as follows:

\[\{N_{frozen}^{l-1},N_{new}^{l-1}\}\to N_{new}^{l} \tag{3}\]

Where \(\rightarrow\) represents the input connections. For the frozen neurons \(N_{frozen}^{l-1}\), growth does not add input connections, to avoid interference with the memory of previous tasks. Note that we do not assign task labels to frozen and newly grown neurons in either the training or testing phase of continual learning. That is, the DSD-SNN algorithm uses the entire network, containing all neurons that have learned previous tasks, for prediction and inference. Thus, our model is able to learn multiple sequential tasks simultaneously without storing separate sub-network masks.

#### 3.3.2 Neuronal Pruning and Deactivation

Neuroscience research has demonstrated that after the overgrowth in infancy, the brain network undergoes a long pruning process in adolescence, gradually developing into a delicate and sparse network [12, 13, 14]. Among the relevant factors, input synapses are important in determining the survival of neurons, following the principle of "use it or lose it" [15, 16, 17]. For SNNs, neurons whose input synapse weights are close to 0 find it harder to accumulate membrane potential beyond the spiking threshold, and consequently fire fewer spikes and contribute less to the outputs. Therefore, we use the sum of input synapse weights \(S_{i}^{l}\) to assess the importance of neuron \(i\) in layer \(l\), as in Eq. 4:

\[S_{i}^{l}=\sum_{j=1}^{M_{l-1}}W_{ij} \tag{4}\]
Where \(W_{ij}\) is the synapse weight from presynaptic neuron \(j\) to postsynaptic neuron \(i\), and \(M_{l-1}\) is the number of presynaptic neurons. During the training of new tasks, we monitor the importance of the newly grown neurons \(N_{new}\) and prune redundant neurons whose values of \(S_{i}\) continuously get smaller. Here, we define a pruning function as follows:

\[\phi_{P_{i}^{l}}=\alpha*Norm(S_{i}^{l})-\rho_{p} \tag{5}\]

\[P_{i}^{l}=\gamma P_{i}^{l}+e^{-\frac{epoch}{\eta}}\phi_{P_{i}^{l}} \tag{6}\]

Where \(Norm(S_{i}^{l})\) linearly normalizes \(S_{i}^{l}\) to the range 0 \(\sim\) 1. \(\alpha=2\) and \(\rho_{p}\) control the pruning strength; \(\rho_{p}\) comprises \(\rho_{c}\) and \(\rho_{f}\) for the convolutional and fully connected layers, respectively. \(P_{i}^{l}\) is initialized to 5, \(\gamma=0.99\), and \(\eta\) controls the update rate as in [14]. \(e^{-\frac{epoch}{\eta}}\) decreases exponentially with increasing epoch, which is consistent with the speed of the pruning process in biological neural networks: first fast, then slow, and finally stable [17, 14]. The pruning function is updated at each epoch, and we then prune neurons with \(P_{i}^{l}<0\). We structurally prune channels in the convolutional layers and prune neurons in the fully connected layers, removing their input and output connections.

### SNNs Information Transmission

Different from DNNs, SNNs use spiking neurons with discrete 0/1 outputs, which are able to integrate spatio-temporal information. Specifically, we employ the leaky integrate-and-fire (LIF) neuron model [1] to transmit and memorize information. In the spatial dimension, LIF neurons integrate the output of neurons in the previous layer through input synapses. In the temporal dimension, LIF neurons accumulate membrane potential from previous time steps via an internal decay constant \(\tau\). Incorporating the spatio-temporal information, the LIF neuron membrane potential \(U_{i}^{t,l}\) at time step \(t\) is updated by the following equation:

\[U_{i}^{t,l}=\tau U_{i}^{t-1,l}(1-O_{i}^{t-1,l})+\sum_{j=1}^{M_{l-1}}W_{ij}O_{j}^{t,l-1} \tag{7}\]

When the neuronal membrane potential exceeds the firing threshold \(V_{th}\), the neuron fires a spike and its output \(O_{i}^{t,l}\) is equal to 1; otherwise, the neuron outputs 0. The discrete spiking outputs of LIF neurons conserve energy as in the biological brain, but hinder gradient-based backpropagation. To address this problem, [20] first proposed the surrogate gradient method. In this paper, we use the QGateGrad [15] surrogate gradient method with constant \(\lambda=2\) to approximate the spiking gradient, as follows:

\[\frac{\partial O_{i}^{t,l}}{\partial U_{i}^{t,l}}=\begin{cases}0,&|U_{i}^{t,l}|>\frac{1}{\lambda}\\ -\lambda^{2}|U_{i}^{t,l}|+\lambda,&|U_{i}^{t,l}|\leq\frac{1}{\lambda}\end{cases} \tag{8}\]

Overall, we present the specific procedure of our DSD-SNN algorithm as Algorithm 1.

```
Input: Dataset D_{T_i} for each task T_i; initialized empty network Net;
       constant growth parameter ρ% and pruning parameters ρ_c, ρ_f.
Output: Predicted class in task T_i (TIL) or across all tasks (CIL).
for each sequential task T_i do
    Grow new neurons into Net as in Eq. 3;
    for epoch = 0; epoch < E; epoch++ do
        SNN forward prediction Net(D_{T_i}) as in Eq. 7;
        SNN backpropagation to update the new connections as in Eq. 8;
        Assess the importance of newly grown neurons as in Eq. 4;
        Calculate the neuronal pruning function as in Eq. 5 and Eq. 6;
        Prune redundant neurons with P_i^l < 0;
    end for
    Freeze the retained neurons in Net;
end for
```
**Algorithm 1** The DSD-SNN continual learning procedure.
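To make Eqs. 7 and 8 concrete, the following is a minimal PyTorch sketch of one LIF update with the surrogate gradient. This is our reconstruction, not the BrainCog implementation; the threshold and decay values, and centering the surrogate at the threshold, are assumptions.

```python
import torch

LAMBDA = 2.0   # lambda = 2 as stated in the text
V_TH = 1.0     # the threshold value is an assumption
TAU = 0.5      # the decay constant's value is not given in the text

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, Eq. 8 triangular surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v):  # v = U - V_th, membrane potential relative to threshold
        ctx.save_for_backward(v)
        return (v >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # -lambda^2 |v| + lambda inside |v| <= 1/lambda, zero elsewhere
        g = torch.clamp(-LAMBDA ** 2 * v.abs() + LAMBDA, min=0.0)
        return grad_out * g

def lif_step(u_prev, o_prev, input_current):
    """One LIF update (Eq. 7): decay the previous potential, reset after a spike, add input."""
    u = TAU * u_prev * (1.0 - o_prev) + input_current
    o = SurrogateSpike.apply(u - V_TH)
    return u, o
```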
## 4 Experiments

### Datasets and Models

To validate the effectiveness of our DSD-SNN algorithm, we conduct extensive experiments and analyses on the spatial MNIST [11] and CIFAR100 [21] datasets and the neuromorphic temporal N-MNIST dataset [1], based on the brain-inspired cognitive intelligence engine BrainCog [13]. The specific experimental datasets and models are as follows:

* Permuted MNIST: We permute the MNIST handwritten digit dataset into ten tasks via random permutations of the pixels. Each task contains ten classes, divided into 60,000 training samples and 10,000 test samples. As the SNN model, we use an SNN with two convolutional layers, one fully connected layer, and a multi-headed output layer.
* Permuted N-MNIST: We randomly permute N-MNIST (the neuromorphic capture of MNIST) into ten tasks, and employ the same sample division and the same SNN structure as for MNIST.
* Split CIFAR100: The more complex natural image dataset CIFAR100 is trained in several splits, including 10 steps (10 new classes per step) and 20 steps (5 new classes per step). An SNN consisting of eight convolutional layers, one fully connected layer and a multi-headed output layer is used to generate the predicted class.

For the task classifier, we use networks containing a hidden layer with 100 hidden neurons for MNIST and N-MNIST, and 500 hidden neurons for CIFAR100. To recognize tasks better, we replay 2000 samples for each task, as in [12, 13, 14]. Our code is available at [https://github.com/BrainCog-X/BrainCog/tree/main/examples/Structural_Development/DSD-SNN](https://github.com/BrainCog-X/BrainCog/tree/main/examples/Structural_Development/DSD-SNN).

### Comparisons of Performance

As shown in Fig. 3(a), our DSD-SNN model maintains high accuracy with an increasing number of learned tasks. This demonstrates that the proposed model overcomes catastrophic forgetting on the MNIST, neuromorphic N-MNIST and more complex CIFAR100 datasets, achieving robustness and generalization capability in both TIL and CIL. To validate the effectiveness of our dynamic structure development module, we compare the learning process of DSD-SNN with other DNNs-based continual learning methods transferred to SNNs, as shown in Fig. 3(b). The experimental results indicate that DSD-SNN realizes superior performance in learning and memorizing more incremental tasks, exhibiting a larger memory capacity compared to the DNNs-based continual learning baselines.

The comparison results of the average accuracy against existing continual learning algorithms based on DNNs and SNNs are shown in Table 1 and Table 2. In the TIL scenario, our DSD-SNN achieves an accuracy of 97.30% \(\pm\) 0.09% with a network parameter compression rate of 34.38% for MNIST, which outperforms most DNNs-based algorithms such as EWC [17], GEM [14], and RCL [20]. In particular, our algorithm achieves a performance improvement of 0.70% over the DEN [21] model (which is also based on growth and pruning). For the temporal neuromorphic N-MNIST dataset, our DSD-SNN algorithm is superior to the existing HMN algorithm, which combines an SNN with a DNN [15]. Meanwhile, our DSD-SNN model achieves 92.69% \(\pm\) 0.57% and 96.94% \(\pm\) 0.05% accuracy in CIL scenarios for MNIST and N-MNIST, respectively.
From Table 2, our DSD-SNN outperforms PathNet [16], DEN [21], RCL [20] and HNET [21], which are also structural extension methods, in both TIL and CIL scenarios for 10-step CIFAR100. iCaRL [14] and DER++ [22] achieve a higher TIL accuracy of 84.20% than our 77.92%, but they are inferior to our 60.47% in CIL scenarios (51.40% and 55.30%). Moreover, the DSD-SNN compresses the network to only 37.48% after learning all tasks, further saving energy consumption. For 20-step CIFAR100 with more tasks, our DSD-SNN achieves an even higher TIL accuracy of 81.17%, with results consistent with the 10-step setting. To the best of our knowledge, this is the first time that energy-efficient deep SNNs have been used to solve CIFAR100 continual learning and achieve comparable performance with DNNs.

In summary, the DSD-SNN model significantly outperforms the SNNs-based continual learning model on the N-MNIST dataset. On the MNIST and CIFAR100 datasets, the proposed model achieves comparable performance with DNNs-based models and performs well in both TIL and CIL.

\begin{table} \begin{tabular}{c c c} \hline \hline Method & Dataset & Acc \\ \hline EWC [17] & MNIST & 81.60\% \\ GEM [14] & MNIST & 92.00\% \\ DEN [21] & MNIST & 96.60\% \\ RCL [20] & MNIST & 96.60\% \\ CLNP [21] & MNIST & 98.42 \(\pm\) 0.04\% \\ **Our DSD-SNN** & MNIST & **97.30 \(\pm\) 0.09\%** \\ HMN (SNN+DNN) [15] & N-MNIST & 78.18\% \\ **Our DSD-SNN** & N-MNIST & **97.06 \(\pm\) 0.09\%** \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy of task incremental learning compared to other works on the MNIST and N-MNIST datasets.

Figure 3: The average accuracy with an increasing number of tasks. **(a)** Our DSD-SNN for MNIST, N-MNIST and CIFAR100. **(b)** Comparison of our DSD-SNN with other methods for CIFAR100.

### Effects of Efficient Continual Learning

Fig. 4 depicts the performance of the DSD-SNN model for task incremental learning on multiple datasets. The experimental results demonstrate that our SNNs-based model improves the convergence speed and performance of new tasks during sequential continual learning, possessing forward transfer capability. The newer tasks achieve higher performance from the beginning on the MNIST and CIFAR100 datasets, indicating that previously learned knowledge is fully utilized to help the new tasks. Also, the new tasks converge to higher performance faster, suggesting that the network has a strong memory capacity to continuously learn and remember new tasks. Comparable results are obtained on the N-MNIST dataset.

Figure 4: During the continual learning process of each task, the change of accuracy with epochs.

### Ablation Studies

**Effects of each component.** To verify the effectiveness of the growth and pruning components in our DSD-SNN model, we compare the number of network parameters (Fig. 5(a)) and performance (Fig. 5(b)) of DSD-SNN, DSD-SNN without pruning, and DSD-SNN without reused growth during multi-task continual learning. The experimental results show that the number of parameters in the DSD-SNN model fluctuates upward and finally stabilizes at 37.48% for CIFAR100, achieving superior accuracy in multi-task continual learning. In contrast, the network scale of the model without pruning rises rapidly and quickly fills up the memory capacity, leading to a dramatic drop in performance after learning six tasks.
The above results reveal that the pruning process of DSD-SNN not only reduces the computational overhead but also improves the performance and memory capacity. For the growth module of DSD-SNN, we eliminate the addition of connections from frozen neurons to verify the effectiveness of reusing acquired knowledge in improving learning for new tasks. From Fig. 5(a) and 5(b), DSD-SNN without reused growth suffers from catastrophic forgetting when no sub-network masks are additionally stored. The scale of the non-reused network is very small, and the failure to reuse acquired knowledge significantly degrades the performance of the model on each task. Therefore, we can conclude that reusing and sharing acquired knowledge in our DSD-SNN model achieves excellent forward transfer capability.

**Effects of different parameters.** We analyze the effects of different growth and pruning parameters (the growth scale \(\rho\) and the pruning intensities \(\rho_{c},\rho_{f}\)). For the growth parameter \(\rho\), the results are very close in the range of 5-15% for MNIST, as shown in Fig. 6(a), as well as in the range of 7.5-15% for CIFAR100, as shown in Fig. 6(b). Only for larger values is there performance degradation on later tasks (e.g., the 8th task), because the larger growth scale of the earlier tasks leaves insufficient space to learn new knowledge in the later tasks. Fig. 6(c) and 6(d) describe the effects of the pruning strengths \(\rho_{c},\rho_{f}\) on performance. The larger \(\rho_{c},\rho_{f}\) are, the more convolutional channels and fully connected neurons are pruned. We found that the accuracy is very stable below \(\rho_{c}=0.50,\rho_{f}=1.00\) for MNIST and \(\rho_{c}=0.75,\rho_{f}=1.25\) for CIFAR100, but the accuracy declines at larger \(\rho_{c},\rho_{f}\) due to over-pruning. The DSD-SNN model is more adaptable to the pruning parameters on the CIFAR100 dataset because its SNN has a larger parameter space. These ablation experiments demonstrate that our DSD-SNN is very robust to different growth and pruning parameters across multiple datasets.

## 5 Conclusion

Inspired by brain development mechanisms, we propose a DSD-SNN model based on dynamic growth and pruning to enhance efficient continual learning. Applied to both TIL and CIL scenarios with deep SNNs, the proposed model can fully reuse acquired knowledge to improve the performance and learning speed of new tasks, and combines this with a pruning mechanism to significantly reduce the computational overhead and enhance the memory capacity. Our DSD-SNN model is one of the very few explorations of SNNs-based continual learning. The proposed algorithm surpasses the SOTA performance achieved by SNNs-based continual learning algorithms and achieves comparable performance with DNNs-based continual learning algorithms.
\begin{table} \begin{tabular}{c c c c c} \hline \hline Method & 10steps TIL Acc (\%) & 10steps CIL Acc (\%) & 20steps TIL Acc (\%) & 20steps CIL Acc (\%) \\ \hline EWC [Kirkpatrick _et al._, 2017] & 61.11 \(\pm\) 1.43 & 17.25 \(\pm\) 0.09 & 50.04 \(\pm\) 4.26 & 4.63 \(\pm\) 0.04 \\ MAS [Aljundi _et al._, 2018] & 64.77 \(\pm\) 0.78 & 17.07 \(\pm\) 0.12 & 60.40 \(\pm\) 1.74 & 4.66 \(\pm\) 0.02 \\ PathNet [Fernando _et al._, 2017] & 53.10 & 18.50 & - & - \\ SI [Zenke _et al._, 2017] & 64.81 \(\pm\) 1.00 & 17.26 \(\pm\) 0.11 & 61.10 \(\pm\) 0.82 & 4.63 \(\pm\) 0.04 \\ DEN [Yoon _et al._, 2018] & 58.10 & - & - & - \\ RCL [Xu and Zhu, 2018] & 59.90 & - & - & - \\ iCaRL [Rebuffi _et al._, 2017] & 84.20 \(\pm\) 1.04 & 51.40 \(\pm\) 0.99 & 85.70 \(\pm\) 0.68 & 47.80 \(\pm\) 0.48 \\ HNET [von Oswald _et al._, 2020] & 63.57 \(\pm\) 1.03 & - & 70.48 \(\pm\) 0.25 & - \\ DER++ [Yan _et al._, 2021] & 84.20 \(\pm\) 0.47 & 55.30 \(\pm\) 0.10 & 86.60 \(\pm\) 0.50 & 46.60 \(\pm\) 1.44 \\ FOSTER [Wang _et al._, 2022] & - & 72.90 & - & 70.65 \\ DyTox [Douillard _et al._, 2022] & - & 73.66 \(\pm\) 0.02 & - & 72.27 \(\pm\) 0.18 \\ **Our DSD-SNN** & **77.92 \(\pm\) 0.29** & **60.47 \(\pm\) 0.72** & **81.17 \(\pm\) 0.73** & **57.39 \(\pm\) 1.97** \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy comparisons with DNNs-based algorithms for CIFAR100. Figure 5: Effects of each component. Number of network parameters (**a**) and accuracy (**b**) of our DSD-SNN, non-pruned model and non-reused model for CIFAR100. Figure 6: The effect of pruning and growth parameters on accuracy in multi-task continual learning. ## Acknowledgements This work is supported by the National Key Research and Development Program (Grant No. 2020AAA0107800), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB32070100), and the National Natural Science Foundation of China (Grant No. 62106261). ## Contribution Statement B.H. and F.Z. contributed equally and serve as co-first authors. B.H., F.Z. and Y.Z. designed the study. B.H., F.Z., W.P. and G.S. performed the experiments and the analyses. B.H., F.Z. and Y.Z. wrote the paper.
2308.08173
Expressivity of Graph Neural Networks Through the Lens of Adversarial Robustness
We perform the first adversarial robustness study into Graph Neural Networks (GNNs) that are provably more powerful than traditional Message Passing Neural Networks (MPNNs). In particular, we use adversarial robustness as a tool to uncover a significant gap between their theoretically possible and empirically achieved expressive power. To do so, we focus on the ability of GNNs to count specific subgraph patterns, which is an established measure of expressivity, and extend the concept of adversarial robustness to this task. Based on this, we develop efficient adversarial attacks for subgraph counting and show that more powerful GNNs fail to generalize even to small perturbations to the graph's structure. Expanding on this, we show that such architectures also fail to count substructures on out-of-distribution graphs.
Francesco Campi, Lukas Gosch, Tom Wollschläger, Yan Scholten, Stephan Günnemann
2023-08-16T07:05:41Z
http://arxiv.org/abs/2308.08173v2
# Expressivity of Graph Neural Networks Through the Lens of Adversarial Robustness ###### Abstract We perform the first adversarial robustness study into Graph Neural Networks (GNNs) that are provably more powerful than traditional Message Passing Neural Networks (MPNNs). In particular, we use adversarial robustness as a tool to uncover a significant gap between their theoretically possible and empirically achieved expressive power. To do so, we focus on the ability of GNNs to count specific subgraph patterns, which is an established measure of expressivity, and extend the concept of adversarial robustness to this task. Based on this, we develop efficient adversarial attacks for subgraph counting and show that more powerful GNNs fail to generalize even to small perturbations to the graph's structure. Expanding on this, we show that such architectures also fail to count substructures on out-of-distribution graphs. ## 1 Introduction In recent years, significant efforts have been made to develop Graph Neural Networks (GNNs) for several graph-related tasks, such as molecule property predictions (Gasteiger et al., 2020), social network analysis (Fan et al., 2019), or combinatorial problems (Gasse et al., 2019), to name a few. The most commonly used architectures are based on message passing, which iteratively updates the embedding of each node based on the embeddings of its neighbors (Gilmer et al., 2017). Despite their broad success and wide adoption, different works have pointed out that so-called Message Passing Neural Networks (MPNNs) are at most as powerful as the 1-Weisfeiler-Lehman (WL) algorithm (Morris et al., 2019; Xu et al., 2019) and thus have important limitations in their expressive power (Chen et al., 2020). This encouraged the development of provably more powerful architectures. However, there is no guarantee that the training process also yields models that are as powerful as theoretically guaranteed. Thus, this work investigates if and to what extent the empirically achieved expressivity of such GNNs lags behind their theoretic possibilities by taking a novel look from the perspective of adversarial robustness. In particular, we focus on the task of counting different subgraphs, which is provably impossible for MPNNs (Chen et al., 2020) (except for very limited cases), but important for many downstream tasks (Huang et al., 2023; Liu et al., 2019; Monti et al., 2018). Using our new adversarial framework for subgraph counting, we find that the counting ability of theoretically more powerful GNNs fails to generalize even to small perturbations to the graph's structure (see Figure 1). This result is even more interesting given that subgraph counting is polynomially solvable for fixed subgraph sizes (Shervashidze et al., 2009).1 We expand on these results and show that these architectures also fail to count substructures on out-of-distribution (OOD) graphs. Furthermore, retraining the last MLP layers responsible for the prediction based on the graph embedding does not entirely resolve this issue. Footnote 1: In general, this problem is NP-complete (Ribeiro et al., 2021). **Contributions.** (i) We perform the first study into the adversarial robustness of GNNs provably more powerful than the 1-WL and use it as an effective tool to uncover a significant gap between the theoretically possible and empirically achieved expressivity for substructure counting (see Section 6). Figure 1: GNNs more powerful than 1-WL are not adversarially robust for subgraph-counting tasks.
(ii) We extend the concept of an adversarial example from classification to (integer) regression tasks and develop multiple perturbation spaces interesting for the task of subgraph counting (see Section 4). (iii) We develop efficient and effective adversarial attacks for subgraph counting, operating in these perturbation spaces and creating _sound_ perturbations, i.e., where we know the ground truth (see Section 5). (iv) In Section 6.2 we show that subgraph-counting GNNs also fail to generalize to out-of-distribution graphs, providing additional evidence that these GNNs fail to reach their theoretically possible expressivity. Our code implementation can be found at [https://github.com/francesco-campi/Rob-Subgraphs](https://github.com/francesco-campi/Rob-Subgraphs). ## 2 Background We consider undirected, unattributed graphs \(G\)=\((V,E)\) with nodes \(V\)=\(\{1,\ldots,n\}\) and edges \(E\)\(\subseteq\)\(\{\{i,j\}\mid i,j\in V,i\neq j\}\), represented by adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\). A graph \(G_{S}=(V_{S},E_{S})\) is a _subgraph_ of \(G\) if \(V_{S}\subseteq V\) and \(E_{S}\subseteq E\). We say \(G_{S}\) is an _induced_ subgraph if \(E_{S}\) contains all edges in \(E\) that connect pairs of nodes in \(V_{S}\). An egonet \(\mathrm{ego}_{l}(i)\) is the induced subgraph containing all nodes with a distance of at most \(l\) from root node \(i\). Furthermore, two graphs \(G,G^{\prime}\) are isomorphic (\(\simeq\)) if there exists a bijection \(f:V\to V^{\prime}\) such that \(\{i,j\}\in E\) if and only if \(\{f(i),f(j)\}\in E^{\prime}\). Lastly, the diameter \(\mathrm{diam}(G)\) denotes the length of the largest shortest path in graph \(G\). ### Subgraph-Counting Consider a fixed graph \(H\) which we call a pattern (Figure 2). A classic graph-related problem is the _(induced-) subgraph-counting_ of the pattern \(H\) (Ribeiro et al., 2021), which consists of enumerating the (induced) subgraphs of \(G\) isomorphic to \(H\). The subgraph-count of \(H\) is denoted by \(\mathcal{C}(G,H)\), and by \(\mathcal{C}_{I}(G,H)\) in the induced case. To simplify the notation we will also refer to it as \(\mathcal{C}(G)\) if \(H\) is given in the context. Several algorithms have been developed to solve the task of subgraph-counting. In this work we specifically consider the (exact) algorithm of Shervashidze et al. (2009) (presented in Appendix B) due to its low computational cost. ### Expressivity of Graph Neural Networks The expressivity of machine learning models is about which functions they can and cannot approximate. There are different ways of studying the expressive power of GNNs. In this work we specifically consider their ability to count subgraphs (Chen et al., 2020) because it is strictly related to different real-world tasks such as computational chemistry (Jin et al., 2020) and social network analysis (Jiang et al., 2010). We define the ability to count subgraphs as follows: **Definition 2.1**.: A family of functions \(\mathcal{F}\) can perform _subgraph-counting_ of a target pattern \(H\) on a graph class \(\mathcal{G}\) if for any two graphs \(G_{1},G_{2}\in\mathcal{G}\) with \(\mathcal{C}(G_{1},H)\neq\mathcal{C}(G_{2},H)\) there exists a function \(f\in\mathcal{F}\) such that \(f(G_{1})\neq f(G_{2})\). Surprisingly, MPNNs have considerable limitations in subgraph-counting. In fact, Chen et al. (2020) show that MPNNs are not able to count induced patterns with three or more nodes, leaving out only the ability to count edges.
For example, Figure 3 shows two graphs that, despite having different triangle counts, will always return identical outputs when fed to the same MPNN. A different perspective to measure the expressive power is graph isomorphism. In this regard, Xu et al. (2019); Morris et al. (2019) demonstrated that an MPNN is at most as powerful as the 1-WL isomorphism test at distinguishing pairs of non-isomorphic graphs. Moreover, since the WL algorithms are designed to extract representation vectors from graphs, they could also be used to perform subgraph-counting. In particular, Chen et al. (2020) showed that \(k\)-WL, and equivalently powerful architectures, can perform substructure-counting for patterns with at most \(k\) nodes, creating a connection between the two approaches. ### More Expressive Graph Neural Networks In this work, we analyze two state-of-the-art architectures for the task of subgraph counting: PPGN (Maron et al., 2019) and I\({}^{2}\)-GNN (Huang et al., 2023). PPGN represents the graph structure in a tensor and exploits tensor multiplications to enhance the expressivity. It reaches the same expressive power as 3-WL, which makes it capable of counting patterns of size three. I\({}^{2}\)-GNN, following the approach of subgraph GNNs (Frasca et al., 2022), decomposes the whole graph into different subgraphs and processes them independently with an MPNN. It has been explicitly developed to be expressive enough for counting different substructures and, most importantly for this work, can count arbitrary patterns of size four. Both PPGN and I\({}^{2}\)-GNN are effective architectures for downstream tasks such as molecular property predictions. Figure 3: Pair of indistinguishable graphs for MPNNs with different triangle counts. Figure 2: Examples of graph patterns used for subgraph-counting. ## 3 Related Work Chen et al. (2020) were the first to study the expressivity of GNNs w.r.t. their ability to count substructures. They, and later Tahmasebi et al. (2021), proposed architectures for counting substructures. However, these suffer from high computational complexity. Yu et al. (2023) proposed an architecture purely focusing on subgraph counting. However, subgraph counting alone can be solved by efficient randomized algorithms (Bressan et al., 2021). Thus, in this work, we focus on efficient architectures, which leverage their subgraph counting ability to improve generalization for other downstream tasks. In particular, we focus on PPGN (Maron et al., 2019) and I\({}^{2}\)-GNN (Huang et al., 2023). Both achieve state-of-the-art results for substructure counting while having formal expressivity guarantees. Different works have studied the adversarial robustness of GNNs for graph-level classification (Dai et al., 2018) and node-level classification (Zugner et al., 2018). Regarding the latter, Gosch et al. (2023) exactly define (semantic-preserving) adversarial examples. Moreover, Geisler et al. (2022) use adversarial attacks with sound perturbation models, i.e., where the ground truth change is known, to investigate the generalization of neural combinatorial solvers. Conversely, adversarial robustness for regression tasks has currently received very little attention (Deng et al., 2020). ## 4 Robustness in Subgraph-Counting The field of adversarial robustness addresses the vulnerability of machine learning models to small changes to their inputs (Goodfellow et al., 2015).
In particular, for the subgraph-counting problem we want to analyze whether the error of the models increases when they are tested on perturbed input graphs \(\tilde{G}\) from a set of perturbed graphs \(\mathcal{P}(G)\). To evaluate the performance of a model \(f\) on perturbed graphs \(\tilde{G}\in\mathcal{P}(G)\) we use the following adversarial loss: \[\ell_{adv}(\tilde{G}):=|f(\tilde{G})-\mathcal{C}(\tilde{G},H)|.\] ### Subgraph-Counting Adversarial Examples To empirically evaluate the expressivity of machine learning models for subgraph-counting via adversarial robustness, we have to introduce a notion of adversarial example. In classification tasks adversarial examples are simply perturbations that change the predicted class. In general regression tasks one can define a threshold on \(\ell_{adv}\) for which we call a perturbed graph an adversarial example (Deng et al., 2020). However, this definition is application-dependent and, in our work, we define a specific threshold exploiting the fact that subgraph-counting is an _integer_ regression task. **Definition 4.1**.: Given a model \(f\) and clean graph \(G\), we say that \(\tilde{G}\in\mathcal{P}(G)\) is an _adversarial example_ for \(f\) if: 1. \(\lfloor f(G)+0.5\rfloor=\mathcal{C}(G)\) 2. \(\lfloor f(\tilde{G})+0.5\rfloor\neq\mathcal{C}(\tilde{G})\) 3. \(\frac{\ell_{adv}(\tilde{G})-\ell_{adv}(G)}{\ell_{adv}(G)}>\delta\). The conditions \((i)\) and \((ii)\) guarantee that the model prediction, when rounded to the nearest integer, is correct for \(G\) and wrong for \(\tilde{G}\). Here, having a correct initial prediction is essential to clearly distinguish the performance on the original graph from that on the perturbed graph. In addition, the condition \((iii)\) ensures that a margin exists between the errors on the original data instance and the perturbed one, and the size of the margin depends on the value of \(\delta\). This requisite prevents easily generating adversarial examples from graphs that are almost wrongly predicted, i.e. \(\ell_{adv}(G)\approx 0.5\). ### Perturbation Spaces We define different perturbation spaces for a graph \(G\) as constrained sets of structurally perturbed graphs constructed from \(G\). In particular, we consider different combinations of edge deletions and additions; for example \(E^{\prime}=E\cup\{\{i,j\}\}\) with \(\{i,j\}\notin E\) represents an edge addition. We always consider sound perturbation models, i.e., where we know the ground truth change. These are efficiently implemented as described in Section 5. It is meaningful to limit the number of perturbations in order to control how shifted the distribution of the perturbed subgraph-counts is compared to the distribution of the original ones. Then, we define the _constrained_ perturbation space with maximal budget \(\Delta\) as: \[\mathcal{P}_{\Delta}(G):=\{\tilde{G}\mid\frac{1}{2}\|\mathbf{A}-\mathbf{A}^{\prime}\|_{0}\leq\Delta\}, \tag{1}\] where \(\|\cdot\|_{0}\) represents the number of non-zero elements, i.e. the number of perturbed edges. **Semantic-Preserving Perturbations.** Additionally, we conduct a robustness analysis more closely in line with adversarial examples for classification tasks, by incorporating a further constraint to guarantee the preservation of a specific level of semantic meaning. In particular, we define the _count-preserving_ perturbation space as: \[\mathcal{P}_{\Delta}^{c}(G)\coloneqq\{\tilde{G}\mid\tilde{G}\in\mathcal{P}_{\Delta}(G)\ \wedge\ \mathcal{C}(\tilde{G})=\mathcal{C}(G)\}. \tag{2}\]
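To make the three conditions of Definition 4.1 above concrete, the snippet below is a minimal Python sketch of the adversarial-example test; the callable `f`, the exact-count arguments, and the small guard against a zero denominator are illustrative assumptions on our part, not the authors' implementation.

```python
import math

def adv_loss(pred, count):
    # Adversarial loss: absolute deviation of the prediction from the exact count.
    return abs(pred - count)

def is_adversarial(f, G, G_pert, count, count_pert, delta=1.0):
    """Check conditions (i)-(iii) of Definition 4.1. `f` returns a real-valued
    count prediction; `count`/`count_pert` are the exact subgraph counts of the
    clean and perturbed graphs."""
    pred, pred_pert = f(G), f(G_pert)
    cond_i = math.floor(pred + 0.5) == count              # (i) correct on the clean graph
    cond_ii = math.floor(pred_pert + 0.5) != count_pert   # (ii) wrong on the perturbation
    clean_loss = adv_loss(pred, count)
    # (iii) relative margin; the epsilon guard is our addition for the edge
    # case clean_loss == 0 (the paper implicitly assumes l_adv(G) > 0).
    cond_iii = (adv_loss(pred_pert, count_pert) - clean_loss) / max(clean_loss, 1e-12) > delta
    return cond_i and cond_ii and cond_iii
```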
Additionally, when considering induced subgraphs, keeping the count constant does not guarantee that the subgraphs isomorphic to the pattern remain the same. In fact, perturbations can simultaneously delete a subgraph isomorphic to the pattern and generate a new one (see Figure 4). We will denote the _subgraph-preserving_ perturbation space by \[\mathcal{P}_{\Delta}^{s}(G)\coloneqq\{\tilde{G}\mid\tilde{G}\in\mathcal{P}_{\Delta}(G)\ \wedge\ (G_{S}\subseteq G,G_{S}\simeq H\iff G_{S}\subseteq\tilde{G},G_{S}\simeq H)\}. \tag{3}\] ## 5 Subgraph-Counting Adversarial Attacks For a subgraph-counting model \(f\), the goal of an adversarial attack is to find the perturbed graph \(G^{*}\in\mathcal{P}(G)\) that causes the maximal error increase. This problem can be formulated as an optimization problem: \[G^{*}=\operatorname*{argmax}_{\tilde{G}\in\mathcal{P}(G)}\ell_{adv}(\tilde{G}). \tag{4}\] Attacking subgraph-counting GNNs for studying their empirical expressivity is particularly challenging. In fact, (i) the subgraph-count can vary significantly even for slight structural changes, and (ii) finding \(G^{*}\) of Equation (4) requires solving a discrete optimization problem. ### Sound Perturbations for Subgraph-Counting To tackle the sensitivity of the counts to structural changes, we exploit the exact algorithm to update the ground-truth count after every perturbation. In this way, we generate sound perturbations since the exact ground-truth value is known. In order to prevent this step from becoming computationally prohibitive, we develop an efficient count updating scheme that uses only a small portion of the graph. **Proposition 5.1**.: _Consider a graph \(G\) and a pattern \(H\) with \(\operatorname{diam}(H)=d\). Then, for every edge \(\{i,j\}\) we have that \(\operatorname{ego}_{d}(i)\) and \(\operatorname{ego}_{d}(j)\) contain all the subgraphs \(G_{S}\subset G\) such that \(G_{S}\simeq H\) and \(i,j\in V_{S}\)._ Proof in Appendix A. When an edge \(\{i,j\}\) is perturbed, only the subgraphs containing both the end nodes can be affected and potentially change their isomorphism relation with \(H\). Therefore, according to Proposition 5.1, it is sufficient to verify potential count changes only in \(\operatorname{ego}_{d}(i)\) (or equivalently \(\operatorname{ego}_{d}(j)\)). Specifically, the proposition assumes that \(\{i,j\}\) is contained in the graph, hence we extract the egonet from the graph including \(\{i,j\}\) (original for edge deletion and perturbed for addition). Next, from the nodes of \(\operatorname{ego}_{d}(i)\) we generate the induced subgraphs \(G_{S}\) and \(\tilde{G}_{S}\) from the original and perturbed graphs respectively. Since the possible alterations of the subgraph-count are enclosed in \(G_{S}\) and \(\tilde{G}_{S}\), we have the following count update rule. **Proposition 5.2**.: _Let \(\tilde{G}\) be a perturbation of a single edge of a graph \(G\), then there holds:_ \[\mathcal{C}(\tilde{G})=\mathcal{C}(G)+\mathcal{C}(\tilde{G}_{S})-\mathcal{C}(G_{S}).\] Following Proposition 5.2 we need to run the subgraph-counting algorithm only on the smaller subgraphs \(G_{S}\) and \(\tilde{G}_{S}\), rather than on the whole graph \(\tilde{G}\). Additionally, Proposition 5.1 guarantees that potential changes in the subgraphs isomorphic to the patterns are also constrained in the egonet, thus it can also be used to identify perturbations belonging to the subgraph-preserving perturbation space \(\mathcal{P}_{\Delta}^{s}\).
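To illustrate Propositions 5.1 and 5.2, the following sketch updates the global triangle count (a pattern with \(\operatorname{diam}(H)=1\)) after toggling a single edge, recounting only inside the egonet. NetworkX and all helper names are our own assumptions for this sketch, not the authors' code; the final assertion checks the local update against a full recount.

```python
import networkx as nx

def triangle_count(G):
    # Exact global triangle count (each triangle is reported at its 3 nodes).
    return sum(nx.triangles(G).values()) // 3

def toggle_and_update(G, count, i, j, d=1):
    """Toggle edge {i, j} and update the count via Proposition 5.2,
    recounting only inside the d-hop egonet of i, where d = diam(pattern)."""
    G_pert = G.copy()
    if G_pert.has_edge(i, j):
        G_pert.remove_edge(i, j)
    else:
        G_pert.add_edge(i, j)
    # Proposition 5.1: extract the egonet from the graph that contains {i, j}
    # (original graph for a deletion, perturbed graph for an addition).
    host = G if G.has_edge(i, j) else G_pert
    nodes = nx.ego_graph(host, i, radius=d).nodes()
    G_S, G_S_pert = G.subgraph(nodes), G_pert.subgraph(nodes)
    # Proposition 5.2: C(G~) = C(G) + C(G~_S) - C(G_S).
    return G_pert, count + triangle_count(G_S_pert) - triangle_count(G_S)

G = nx.erdos_renyi_graph(30, 0.2, seed=0)
G1, c1 = toggle_and_update(G, triangle_count(G), 0, 1)
assert c1 == triangle_count(G1)  # the local update matches a full recount
```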
### Construction of Adversarial Examples To create adversarial examples we need to solve the discrete optimization problem in Equation (4). To do so we develop algorithms that generate more powerful perturbations one change at a time; in this way, we keep track of the exact count with the update rule (Proposition 5.2). **Greedy Search.** We develop an efficient and effective greedy search algorithm (Algorithm 1). At each step we select the most effective perturbation of the current perturbed graph \(\tilde{G}\) in \(\mathcal{P}_{1}(\tilde{G})\) (or in \(\mathcal{P}_{1}^{c}(\tilde{G}),\mathcal{P}_{1}^{s}(\tilde{G})\)) until the budget limit is reached. The new subgraph-counts of perturbations in \(\mathcal{P}_{1}(\tilde{G})\) are computed with Proposition 5.2, whereas the preserving perturbation spaces are generated with Algorithm 2.
```
Input: G, Delta, k
G^(0) = {G}
for i = 0 to Delta-1 do
    P^(i) = {}
    for G~ in G^(i) do
        P^(i) = P^(i) ∪ P_1(G~)      {or P_1^c(G~), P_1^s(G~)}
    end for
    G^(i+1) = the k graphs with the greatest values in {l_adv(G~) | G~ in P^(i)}
end for
Return: G* = argmax_{G~ in G^(Delta)} l_adv(G~)
```
**Algorithm 1** Beam search (greedy search for \(k=1\)) **Beam search.** A more advanced algorithm that does not increase the computational complexity is beam search. Concretely, it simultaneously follows \(k\) different paths to explore the perturbation space more extensively (see Algorithm 1). To improve the computational efficiency, the perturbations in \(\mathcal{P}_{1}\) can be randomly selected according to the degrees of the end nodes of the perturbed edge. Concretely, the probability to pick the perturbation where the edge \(\{i,j\}\) has been added (or deleted) is proportional to \(d(i)^{2}+d(j)^{2}\), since intuitively these are the most relevant edges. Figure 4: Examples demonstrating that not all count-preserving perturbations are also subgraph-preserving. Left: a subgraph- and count-preserving perturbation for 4-cycles where the red edge has been deleted. Right: a perturbation that leaves the count of 2-paths unchanged, but deletes the induced substructure \(\{2,3,4\}\) and generates \(\{1,2,3\}\). ## 6 Experiments In Section 6.1, we analyze the empirical expressivity of GNNs using our subgraph-counting adversarial attacks and using generalization as a (proxy) measure. Extending on this, in Section 6.2 we investigate if the same GNNs can count subgraph patterns for out-of-distribution graphs. Here we present the results of the induced subgraph-counting of triangles, 4-cycles and chordal cycles; for other patterns, refer to Appendix C. ### Adversarial Robustness Here, we analyze the empirical expressivity of GNNs using our subgraph-counting adversarial attacks. **Dataset and models.** We generate a synthetic dataset of 5,000 Stochastic-Block-Model graphs with 30 nodes divided into 3 different communities. The probabilities of generating edges connecting nodes within the same community are \([0.2,0.3,0.4]\), while the probability of generating edges between nodes of different communities is 0.1. We randomly split the dataset into training, validation, and test sets with percentages \(30\%,20\%,50\%\). We then train PPGN (Maron et al., 2019) and I\({}^{2}\)-GNN (Huang et al., 2023).
**Experimental Settings.** We train each model 5 times using different initialization seeds to prevent bad weight initializations from influencing the final results. Then, for each of the trained models \(f_{i}\) with seed \(i\), we use our adversarial attacks (see Section 5) to generate adversarial examples from 100 correctly predicted test graphs and average the percentage of successful attacks over all seeds. Furthermore, we investigate if the adversarial graphs for a model \(f_{i}\) transfer to the other models \(f_{j}\) trained with a different initialization seed \(j\neq i\). We inspect all three different perturbation spaces with budgets \(\Delta\) of \(1\%,5\%,10\%\) and \(25\%\) with respect to the average number of edges of the graphs in the dataset and use \(\delta=1\) as margin. In detail, we use beam search with beam width \(k=10\) to explore \(\mathcal{P}^{c}_{\Delta}\) and \(\mathcal{P}^{s}_{\Delta}\), while we rely on greedy search for \(\mathcal{P}_{\Delta}\). **Results.** The plots in Figure 5 show the percentage of perturbations found by the optimization algorithms that represent a successful adversarial example according to Definition 4.1. To condense the results into a numerical value, we report in Table 1 the area under the curve (AUC) of the functions Non-Robust and Non-Robust (Transfer) in Figure 5. The results are reported as the proportion with respect to the area under the unity function \(f(x)=1\), which represents the worst case where all perturbations generate an adversarial example already at \(\Delta=1\%\). Interestingly, the results show that we can find several adversarial examples for both architectures. In particular, PPGN is highly non-robust in the subgraph-counting of patterns with four nodes. However, several adversarial examples can be found also for the triangle count, even though the theoretical expressivity of PPGN claims that it is a family of functions that can count patterns of size three in the sense of Definition 2.1. Similarly, the more expressive model I\({}^{2}\)-GNN is fooled on patterns of size four, in spite of being sufficiently powerful to count them. This indicates that the empirical expressivity achieved does not match the theoretical expressivity since the models are not able to generalize to subgraph-counting tasks that they should in theory be able to solve. Additionally, in Appendix C.2 we investigate some structural properties of the adversarial examples. ### Out-of-Distribution Generalization **Experimental Settings.** Firstly, we train the PPGN and I\({}^{2}\)-GNN architectures on the dataset d\({}_{1}\) and test them on both d\({}_{1}\) and d\({}_{2}\) to investigate the OOD generalization performances of the architectures. Additionally, we train the models directly on d\({}_{2}\) to have a comparison of the best performances achievable on this dataset. The errors are expressed using the mean absolute error (\(\ell_{1}\)) and an extension of it, which is obtained by normalizing by the ground-truth count (\(\ell_{c}\)).
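For reference, one plausible reading of the two error metrics is sketched below in Python; normalizing per graph and guarding against zero ground-truth counts are our assumptions, as the paper does not spell out these details.

```python
import numpy as np

def test_errors(preds, counts):
    """l1: mean absolute error; lc: absolute error normalized by the
    ground-truth count (assumed per graph), then averaged."""
    preds, counts = np.asarray(preds, float), np.asarray(counts, float)
    abs_err = np.abs(preds - counts)
    l1 = abs_err.mean()
    lc = (abs_err / np.maximum(counts, 1.0)).mean()  # guard: counts may be 0
    return l1, lc
```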
**Results.** Table 2 shows the test errors of the aforementioned settings averaged over five different initialization seeds. Here we observe that the models achieve very poor performance on general OOD graphs compared to their ideal performance (OOD and d\({}_{2}\) rows). However, if the models were able to perform subgraph-counting, as theoretically claimed, they should be able to perform this task regardless of the graph distribution. This result matches the findings of Section 6.1 and shows that the models do not learn to detect the patterns but rather overfit to the training distribution. However, this behavior could be intrinsic to the models' architecture. The models are designed to extract a vector representation from each input graph, which is then mapped to the prediction through an MLP. Then, the fact that different graph distributions might generate different graph representations leads us to investigate whether the problem is a poor generalization of the map between the graph embedding and the count. To test this possibility, we retrain on d\({}_{2}\) only the final MLP of the models previously trained on d\({}_{1}\) (row MLP in Table 2). While this adjustment is helpful, the errors are consistently one order of magnitude higher than the ones obtained by training directly on d\({}_{2}\). This indicates that the graph representations do not achieve their theoretic separation power and that the problem does _not_ only lie in the last MLP prediction layers. ## 7 Conclusion We proposed a novel approach to assess the empirical expressivity achieved by subgraph-counting GNNs via adversarial robustness. We show that despite being theoretically capable of counting certain patterns, the models lack generalization as they struggle to correctly predict adversarially perturbed and OOD graphs. Therefore, the training algorithms are not able to find weights corresponding to a maximally expressive solution. Extending our study to other related GNNs such as KP-GNN (Feng et al., 2022) or to include adversarial training (Gosch et al., 2023; Geisler et al., 2022) to steer towards more robust and expressive solutions are interesting directions for future work. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Arch.} & Exp. & \multicolumn{2}{c}{Triangle} & \multicolumn{2}{c}{4-Cycle} & \multicolumn{2}{c}{Chord. C.} \\ \cline{3-8} & Setting & \(\ell_{1}\) & \(\ell_{c}\) & \(\ell_{1}\) & \(\ell_{c}\) & \(\ell_{1}\) & \(\ell_{c}\) \\ \hline \hline \multirow{4}{*}{PPGN} & d\({}_{1}\) & 0.0058 & 7.8e-4 & 0.059 & 0.010 & 0.10 & 0.011 \\ & OOD & 2.98 & 0.041 & 5.40 & 1.17 & 20.0 & 0.25 \\ & d\({}_{2}\) & 0.0091 & 1.7e-4 & 0.040 & 0.0050 & 0.12 & 0.0017 \\ & MLP & 0.059 & 9.8e-4 & 0.29 & 0.043 & 1.083 & 0.014 \\ \hline \multirow{4}{*}{I\({}^{2}\)-GNN} & d\({}_{1}\) & 0.0027 & 2.8e-4 & 0.035 & 0.0062 & 0.020 & 0.0023 \\ & OOD & 3.25 & 0.044 & 2.16 & 0.45 & 6.75 & 0.084 \\ \cline{1-1} & d\({}_{2}\) & 0.032 & 6.2e-4 & 0.028 & 0.0031 & 0.30 & 0.0042 \\ \cline{1-1} & MLP & 0.20 & 0.0031 & 0.19 & 0.025 & 1.56 & 0.020 \\ \hline \hline \end{tabular} \end{table} Table 2: Test errors of the OOD experiments that investigate the generalization abilities of the architectures. Specifically, d\({}_{i}\) denotes models trained and tested on the same dataset d\({}_{i}\), OOD denotes models trained on d\({}_{1}\) and tested on d\({}_{2}\), and in MLP we additionally retrain the final layers on d\({}_{2}\). Figure 5: The plots illustrate in blue the success rate of our subgraph-counting adversarial attacks at finding perturbations that represent adversarial examples according to Definition 4.1 in the constrained and subgraph-preserving perturbation spaces. In orange, we represent how effective the adversarial examples are when transferred to the models trained with a different initialization seed. The values are averages of the results obtained with 5 different initialization seeds, with the corresponding standard errors.
2302.04712
DeepCAM: A Fully CAM-based Inference Accelerator with Variable Hash Lengths for Energy-efficient Deep Neural Networks
With ever increasing depth and width in deep neural networks to achieve state-of-the-art performance, deep learning computation has significantly grown, and dot-products remain dominant in overall computation time. Most prior works are built on conventional dot-product where weighted input summation is used to represent the neuron operation. However, another implementation of dot-product based on the notion of angles and magnitudes in the Euclidean space has attracted limited attention. This paper proposes DeepCAM, an inference accelerator built on two critical innovations to alleviate the computation time bottleneck of convolutional neural networks. The first innovation is an approximate dot-product built on computations in the Euclidean space that can replace addition and multiplication with simple bit-wise operations. The second innovation is a dynamic size content addressable memory-based (CAM-based) accelerator to perform bit-wise operations and accelerate the CNNs with a lower computation time. Our experiments on benchmark image recognition datasets demonstrate that DeepCAM is up to 523x and 3498x faster than Eyeriss and traditional CPUs like Intel Skylake, respectively. Furthermore, the energy consumed by our DeepCAM approach is 2.16x to 109x less compared to Eyeriss.
Duy-Thanh Nguyen, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda
2023-02-09T15:54:42Z
http://arxiv.org/abs/2302.04712v1
DeepCAM: A Fully CAM-based Inference Accelerator with Variable Hash Lengths for Energy-efficient Deep Neural Networks ###### Abstract With ever-increasing depth and width in deep neural networks to achieve state-of-the-art performance, deep learning computation has significantly grown, and dot-products remain dominant in overall computation time. Most prior works are built on the conventional dot-product where weighted input summation is used to represent the neuron operation. However, another implementation of the dot-product based on the notion of angles and magnitudes in the Euclidean space has attracted limited attention. This paper proposes _DeepCAM_, an inference accelerator built on two critical innovations to alleviate the computation time bottleneck of convolutional neural networks. The first innovation is an approximate dot-product built on computations in the Euclidean space that can replace addition and multiplication with simple bit-wise operations. The second innovation is a dynamic size content addressable memory-based (CAM-based) accelerator to perform bit-wise operations and accelerate the CNNs with a lower computation time. Our experiments on benchmark image recognition datasets demonstrate that DeepCAM is up to 523\(\times\) and 3498\(\times\) faster than Eyeriss and traditional CPUs like Intel Skylake, respectively. Furthermore, the energy consumed by our DeepCAM approach is 2.16\(\times\) to 109\(\times\) less compared to Eyeriss. ## I Introduction Deep learning has surpassed humans in various domains, such as image classification, natural language processing, and data generation [16]. However, this phenomenal progress has also led to a significant increase in the parameter size of a deep neural network (DNN) model in terms of its depth (layers) and width (filters). Dot-product computations in DNNs are highly computation-intensive, accounting for more than 90% of the time to process various DNN workloads [12]. There have been various hardware accelerators for DNN inference such as Eyeriss [4], TPU [9], Rapid [25] among others to reduce the dot-product computation time in large-scale DNN deployment. However, conventional von-Neumann inference accelerators incur significantly high memory access energy. Specifically, in such architectures, the on-chip memory (SRAM) and off-chip (DRAM) accesses incur 6\(\times\) and 200\(\times\) higher energy consumption compared to a dot-product operation [4]. Typical systolic array-based accelerators with N\(\times\)N processing-arrays can achieve a computational time of O(N) to carry out dot-product operations. To this end, designing an architecture to further reduce the dot-product computational time to O(1) in traditional von-Neumann architectures with higher energy-efficiency has been a challenge for researchers. Recently, Kai Ni et al. [19] have proposed a sense amplifier for time sensing using a content addressable memory (CAM) based architecture to estimate the hamming distance between a search key and the CAM-data with high parallelism. The work by Kai Ni et al. [19] opens the door for us to achieve O(1) computation time for dot-products with high parallelism. In this regard, we look into a different kind of dot-product implementation, called the geometric dot-product. Typically, all DNN systolic array accelerators are designed to implement algebraic dot-products [22], which essentially involve multiply-and-accumulate (MAC) operations.
We show that for achieving a dot-product computation time of O(1) with a CAM-based architecture, the geometric implementation comes in handy. In the geometric dot-product, operands are treated as vectors with magnitudes and directions. The dot-product of two operands (vectors) can, thus, be computed using their magnitudes and the angle between them. Based on this definition, the angle between two vectors can be estimated using our CAM-based architecture. This crucial concept allows us to achieve significantly higher throughput and better energy-efficiency during DNN inference, compared to the state-of-the-art Eyeriss accelerator [4]. In this paper, we propose DeepCAM, a novel Process-In-Memory (PIM) based inference accelerator architecture using CAMs, to replace standard algebraic dot-product operations with approximate dot-products (based on the geometric implementation) to speed up the DNN computation time and reduce the inference energy. We highlight our key contributions as follows: * We propose an approximate implementation of dot-products with variable hash lengths (based on the geometric implementation) for CNN inference on DeepCAM, without significant loss in classification accuracy. * We propose a dynamic size CAM-based inference accelerator with re-configurable hash lengths for processing dot-products with O(1) time-complexity. * We evaluate our DeepCAM accelerator on various CNN architectures (LeNet5, VGG11, VGG16 and ResNet18), using benchmark datasets (MNIST, CIFAR10 and CIFAR100). We obtain \(\sim 523\times\) lower computation time and \(\sim 109\times\) better energy-efficiency per inference compared to the state-of-the-art Eyeriss [4] accelerator. * We also compare our DeepCAM accelerator against previously proposed analog PIM-based CNN inference accelerators [20, 24]. For VGG11 CNN inferred with CIFAR10, DeepCAM is \(\sim 71.68\times\) and \(\sim 7.27\times\) more energy-efficient than [20] and [24], respectively. The remainder of the paper is organized as follows. Firstly, we briefly discuss the background on CAMs and dot-product operations in section II. Secondly, we describe our proposed DeepCAM architecture in section III. Thirdly, we provide the evaluation methodology and results in section IV. We further discuss the related works and comparison in section V. Finally, we will conclude our work in section VI. ## II Background & Motivation ### _CAM/TCAM - beyond CMOS and non-CMOS technology_ A content-addressable memory (CAM) shown in Fig 1.a facilitates parallel searching of query data against the content stored in the CAM memory. There are two types of CAM: 1) Binary CAM (or CAM) matching values 0 or 1 only [10], and 2) Ternary CAM (or TCAM) matching values 0, 1, or X (don't care). In the basic CMOS CAM cell design, the CAM cell includes a storage node (normally SRAM) and a pull-down CMOS circuit as a peripheral. During the search operation, the match line (ML) is first pre-charged to Vdd. The ML remains charged if the query in the search line (SL) and the content in the memory match, and is discharged otherwise. Due to the parallel search capability, CAMs can achieve O(1) computation time complexity. Besides the search operation, a CAM can be used to calculate the hamming distance [19]. Here, the correlation between the number of bit mismatches and the time to pull down the ML voltage is leveraged. Based on this observation, [19] proposed the clocked self-referenced sense amplifiers as shown in Fig 1.c, converting the pull-down time to a hamming distance.
In the CMOS design, CAM and TCAM require 9-10 and 16 transistors, respectively. Because the CMOS memory cell is usually 2-10\(\times\) larger than the non-volatile memory cell [11], the CMOS implementation of CAM/TCAM will incur significant overhead. However, the implementation of CAM and TCAM in non-volatile memory technology requires two transistors and two non-volatile memory nodes, as shown in Fig 1.b. Thus, non-volatile memory nodes are preferred over CMOS transistors in implementing CAM/TCAM. As reported in [27], using FeFETs reduces the cell size by 7.5\(\times\) with 2.4\(\times\) lower search energy than CMOS. With the premise of both energy saving and hamming-distance estimation in parallel, it is possible to build a fast and energy-efficient deep learning accelerator. Thus, we consider FeFET CAM in this paper and the design details are provided in section III. ### _Dot-product and its approximation with random projection_ As a fundamental operation for convolution and fully connected layers in CNNs, the algebraic dot-product is computed by the MAC operation between input activation and weight vectors. Assuming **x** and **y** to be two vectors with N elements, their algebraic dot-product is defined as follows: \[\textbf{x.y}=\sum_{i=1}^{N}x_{i}y_{i} \tag{1}\] In Euclidean space, vectors are represented with magnitudes and angular components (directions). The magnitude of a vector is its L2 norm (\(\|\cdot\|_{2}\)), and the angular component is defined as the cosine of the smallest angle between two vectors. Hence, we define the geometric dot-product as follows: \[\textbf{x.y}=\|x\|_{2}\|y\|_{2}cos(\theta) \tag{2}\] If **x** and **y** are replaced with input activations and weights for a DNN model, then finding (or estimating) \(cos(\theta)\) for the computation of the geometric dot-product is a tedious task. However, this problem can be solved by the Johnson-Lindenstrauss (J-L) lemma [8]. For better understanding, let us define the mapping of \(\textbf{x}\in R^{n}\) to \(\textbf{Z}\in\{0,1\}^{k}\) as a hashing method that maps the n-dimensional **x** vector to a k-dimensional **Z** vector. Say, the conversion hashing function is a matrix \(\textbf{C}\in R^{n\times k}\) and C follows a normal distribution \(\sim N(0,1)\). Any **x** vector can be converted into hyperspace **Z** by taking the sign bits of the projection product of **x** and the C matrix: \(hash(x)=sign(xC)\). Because C is a random matrix, we call this operation random projection with hash length \(k\). The angle \(\theta\) between two vectors can thus be approximated from the hamming distance (HD) between the two hashed vectors **x** and **y** [6]: \[\theta_{\textbf{x.y}}=\pi Pr(hash(x)\neq hash(y))\approx\frac{\pi}{k}HD(hash(x),hash(y)) \tag{3}\] From eq. 2 & 3, we approximate the geometric dot-product as follows: \[\textbf{x.y}\approx\|x\|_{2}\|y\|_{2}cos(\frac{\pi}{k}HD(hash(x),hash(y))) \tag{4}\] Now, let us consider the following example: If **x** = [0.6012, 0.8383, 0.6859, 0.5712], **y** = [0.9044, 0.5352, 0.8110, 0.9243], the conventional algebraic dot-product will be 2.0765. We find in Fig. 2 that the dot-product approximation using eq. 4 is nearly equal to the result of the algebraic dot-product, and longer hash lengths (k) lead to better approximation. Based on this approximate geometric dot-product formulation, we develop the CAM-based accelerator for the CNNs in this paper.
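The worked example above is easy to reproduce; the following NumPy sketch implements eqs. 3-4 under the stated assumptions (C drawn from N(0,1)); the seed and function names are illustrative choices of ours, not part of the paper's implementation.

```python
import numpy as np

def sign_hash(v, C):
    # Random-projection hash: sign bits of the projection vC (Section II-B).
    return (v @ C) >= 0.0

def approx_dot(x, y, k=1024, seed=0):
    """Approximate geometric dot-product of eq. 4."""
    rng = np.random.default_rng(seed)
    C = rng.standard_normal((x.size, k))   # C ~ N(0, 1), shape (n, k)
    hx, hy = sign_hash(x, C), sign_hash(y, C)
    hd = np.count_nonzero(hx != hy)        # in hardware, computed by the CAM
    theta = np.pi * hd / k                 # eq. 3
    return np.linalg.norm(x) * np.linalg.norm(y) * np.cos(theta)

x = np.array([0.6012, 0.8383, 0.6859, 0.5712])
y = np.array([0.9044, 0.5352, 0.8110, 0.9243])
print(x @ y)             # 2.0765 (algebraic dot-product)
print(approx_dot(x, y))  # close to 2.0765; accuracy improves with larger k
```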
Furthermore, owing to the error-tolerant characteristic of deep CNNs, we will see that our model's performance does not degrade drastically due to the approximation. ## III Overview of DeepCAM architecture In this section, we describe our DeepCAM architecture, which is a fully CAM-based PIM accelerator for CNN inference. The design of DeepCAM comprises three major components: 1) a context generator software, 2) a dynamic size CAM-based accelerator, and 3) a post-processing and transformation unit. These components are shown in Fig. 3. Fig. 1: a) Block diagram of a content addressable memory (CAM) architecture. b) CMOS and FeFET CAM cells. c) Clocked self-referenced sense amplifier (SA) for detecting the matching degrees. ### _Context Generator_ As discussed in section II-B, the approximate dot-product requires the magnitude and hashed binary data for each input activation and weight (see equation 4). The magnitude is a Euclidean norm or L2 norm with 8-bit minifloat representation [7]. The hashed binary data can be generated by multiplying the activation or weight with a random matrix C. As shown in Fig 4, the context generator is a software module that generates the two components, 1) the L2 norms and 2) the hashed binary data, for the input activations and weights. We need to reshape the weight/activation matrices before computing the L2 norm and hashed binary vectors. An example is shown in Fig 4 to describe the process of building a weight context from a kernel of size \(3\times 3\). Note that the contexts for the pre-trained CNN weights and input data can be pre-processed in software and thus cause no impact on the computation time during inference on hardware. However, the intermediate activations generated at the end of one CNN layer need to be transformed into activation contexts before the computation of the subsequent layer. Hence, we propose an online transformation technique (see Post-processing & transformation unit in Fig. 3) for on-the-fly activation context generation, discussed in section III-C. From Fig. 2, we determined that the error in the approximate dot-product operation depends on the hash length (k). We find that each CNN layer requires a certain minimum hash length to maintain the overall classification accuracy (referred to as the optimal hash length). Some layers are sensitive to smaller hash lengths, while others are very robust. One way to maintain the classification accuracy would be to choose the maximum value out of all optimal hash lengths as the fixed hash length across all CNN layers. However, this would lead to hardware over-utilization. As a result, we propose a variable hash length encoding strategy (_i.e._, using a different hash length for each CNN layer) that can help maintain the CNN classification accuracy, as shown in Fig. 5. In order to have variable hash lengths, the size of the CAM module should also be varied. To this end, we propose a dynamic size CAM-based accelerator, described in the next section. ### _Dynamic size CAM-based Accelerator and Dot-product Computations_ As shown in Fig 6, our dynamic size DeepCAM accelerator comprises four chunks; the word size of each chunk is 256 bits. Each chunk is connected to its adjacent chunks by using transmission gates. The maximum word length for the CAM module can be expanded to 1024 bits.
In this design, we use transmission gates (behaving as switches driven by an enable signal \(En\)) since the combination of both NMOS & PMOS transistors prevents signal degradation and forwards all the voltages on the bit-line to the next chunk. The sense amplifier [19] detects the time for the ML to be pulled down to 0 V and reports the number of clock cycles required, from which the hamming distance between the search data and the row CAM data is computed. By enabling/disabling the transmission gates, we can dynamically change the word length (and hence, the hash length) from 256 to 1024 bits in the CAM module. With this flexibility for choosing the optimal hash lengths for each CNN layer during dot-product computations, our CAM can achieve lower access power as well as better energy efficiency. In the DeepCAM accelerator, the CAM module helps compute the hamming distances in parallel for multiple input activations and weight kernels simultaneously. Note, the computation of hamming distances (needed for approximate dot-products) occurs in CAMs with O(1) time-complexity, and this manifests as a significant reduction in computation time per inference compared to Eyeriss [4], as we will see in section IV. However, two further steps are needed to complete the approximate dot-product operations as shown in equation 4: 1) calculating the output of the cosine function, and 2) multiplication of the cosine output with the L2 norms of the operands. Implementing cosine functions on hardware can have a significant overhead, as multiple computation cycles or lookup-tables with large memory sizes are required to calculate the cosine output using the hamming distance results from the CAM [3]. Fig. 2: Plot showing output result comparison between the approximate dot-product and conventional (algebraic) dot-product. Fig. 3: Full architecture overview of DeepCAM. 1) Context generator software for pre-processing the deep learning data. 2) A dynamic size CAM-based accelerator for dot-product operations. 3) Post-processing and transformation module for post dot-product computations and online activation context generation. Fig. 4: An example showing the context generation process. To minimize the hardware costs, we apply the approximate cosine function
Similar to the software context generator, the activation context generator will generate L2 norm and hashed binary data from the input activations. The L2 norm functionality is implemented using a simple adder tree and a digital square-root module. Further, we use a non-volatile memory (NVM) based crossbar-array to encode the random vector C (see section III-A) as synaptic weights and implement the on-chip hash function. Since we only need the sign bits for carrying out projection operation using the crossbar-array, we replace the high-resolution ADCs with simple sense amplifiers that detect the negative results. This transformation module is implemented as shown in Fig. 7. ## IV Evaluations and Results In this section, we will evaluate our DeepCAM accelerator using state-of-the-art pre-trained CNN models (LeNet5, VGG11, VGG16 and ResNet18) with benchmark datasets (MNIST, CIFAR10 and CIFAR100) [1]. The details are summarized in Table. I. Note, our FeFET CAM uses variable hash length encoding strategy to maintain the inference accuracy of the CNN models on hardware close to the software accuracy as shown in Fig. 5. We compare our work against other deep learning hardware accelerators that are widely used for CNN inference. ### _Methodology_ To evaluate our DeepCAM accelerator, we carry out system-level and hardware-level simulations. For the system-level evaluation, we consider two dataflows: 1) weight-stationary, where the CAM module stores weight contexts as CAM data and activation contexts are passed as search data; 2) activation stationary, where the CAM module stores activation contexts as CAM data and weight contexts are passed as search data. The DeepCAM simulation system is implemented in the manner shown in Fig. 3. For the dynamic size CAM, we can have CAM row sizes of 64/128/256/512 to store the fetched weight/activation contexts as CAM data, and CAM column sizes of 256/512/768/1024 to represent the variable context hash lengths. We use an FeFET CAM to evaluate our proposed accelerator. The FeFET CAM search energy and area statistics are extracted from EvaCAM [18] to project the hardware overhead results for our chosen row/column sizes (see Fig 8). For the hardware evaluation using DeepCAM, we implement the hardware description code and use Synopsys Design Compiler and PrimeTime [13, 26] to extract the Fig. 5: Plot showing that variable hash lengths are required to maintain the Top-1 classification accuracy of LeNet5, VGG11, VGG16 and ResNet18 CNN models on DeepCAM. Here, BL refers to the % accuracy of baseline software CNN model and DC refers to the % accuracy of CNN on DeepCAM. Fig. 6: Dynamic size CAM-based accelerator for estimating the hamming distance between activations and weights in parallel before conducting the final approximate dot-product in post-processing module. Fig. 7: The post-processing and transformation module includes- 1) a post-processing sub-module and 2) an online activation context generator sub-module power consumption, area, and timing results. The hardware evaluations are carried out at a clock frequency of 300 MHz using 45 nm CMOS technology node. Further, we simulate the crossbar-array in the Post-processing & transformation module having FeFET devices as synapses using the NeuroSim tool [20]. Both the system and hardware evaluation data are used to estimate the overall computation time and energy savings with our DeepCAM accelerator for various CNN models. 
**Baselines:** We compare our work with the state-of-the-art Eyeriss accelerator based on systolic array architecture [4]. For systolic array evaluations, we modify the SCALE-Sim [23] framework with appropriate network topology and systolic array configuration of Eyeriss [4]. Although, INT16 is used in [4] as the data precision, we choose INT8 representation because INT8 is the state-of-the-art quantization for various CNN workloads [9]. Hence, we implement Eyeriss with a processing-array configuration of 14\(\times\)12 and a datapath with INT8 representation. After running SCALE-Sim on the various CNN models, we extract the computational cycles (indicating overall computation time) and hardware utilization during inference. As a second baseline, we use Intel Skylake CPU with the AVX-512 extension that supports the vector neural network instruction [5]. ### _Hardware Performance with DeepCAM_ We find that the activation-stationary dataflow results in a lower number of computational cycles compared to the weight-stationary dataflow with multiple CNN topologies on our DeepCAM accelerator. To understand this, let us consider the following example. Suppose, we have a single-channeled input of size 32x32 and 6 weight-kernels of size 5x5 for convolution with stride 1. Then, for obtaining the output feature map, we need (28*28-784) input vectors for the 6 kernel-vectors. If weight-stationary mode of mapping is considered for a CAM with 64 rows, whereby only the 6 rows corresponding to the 6 kernels are occupied out of the 64 CAM rows, we have an hardware utilization of \(6/64=9.4\%\). On the other hand with activation-stationary mode of mapping on the 64 CAM rows, the hardware utilization becomes \(100\%\). Hence, activation-stationary mode of dataflow in DeepCAM induces full utilization of the available CAM hardware and thus, facilitates faster convolutions with an overall lower number of computational cycles. Compared to Eyeriss, our DeepCAM (with 64 CAM rows and activation-stationary dataflow) is \(\sim 523.5\times\) efficient in reducing inference computational cycles for the LeNet_MNIST topology and \(\sim 3.3\times\) efficient in case of ResNet18_CIFAR100 topology. The efficiency in reducing computational cycles increases to \(\sim 26.4\times\) for ResNet18_CIFAR100 topology when CAM row size is increased to 512. Compared to Intel Skylake, our DeepCAM (with 64 CAM rows) is \(\sim 235.4\times\) efficient in reducing computational cycles for LeNet_MNIST with weight-stationary dataflow and up to \(\sim 3498\times\) with activation-stationary dataflow. The summary of the above results is presented in Fig 9. ### _Energy Consumption per Inference_ In this section, we only make a comparison between our DeepCAM accelerator and Eyeriss, because traditional CPUs are known to be very energy-hungry architectures. Fig. 10 compares the energy results between our DeepCAM with variable hash lengths to that of Eyeriss. In our comparison, we choose the baseline as CNNs implemented on DeepCAM with homogeneous 256-bit hash lengths across all layers. All results in Fig. 10 are normalized to this baseline. Also, Max DeepCAM refers to a homogeneous 1024-bit hash length implementation across layers. The variable hash length DeepCAM yields 1.78\(\times\) energy reduction compared to Eyeriss in the case of LeNet_MNIST with 512 CAM rows (in weight-stationary mode). However, we can increase the energy reduction up to 109.4\(\times\) by changing the dataflow to activation stationary. 
In the case of ResNet18_CIFAR100, DeepCAM with variable hash length achieves an energy reduction of 2.16\(\times\) compared to Eyeriss.
Fig. 8: Plot of CAM-based hardware overhead results with various row and column sizes.
Fig. 9: Plot of computational cycles and hardware utilization for weight/activation-stationary modes of DeepCAM compared with Eyeriss and a traditional CPU.
Fig. 10: Plot of normalized energy consumption of DeepCAM compared to Eyeriss.
## V Related works and Comparison Utilizing CAM architectures for deep learning applications is a relatively new research direction. Prior works such as [14, 15, 17, 19, 21] have used CAMs as associative memories for fast and energy-efficient search operations across various deep learning workloads. In [19], the classifier layer of a DNN is implemented using FeFET CAMs operating on Locality-sensitive Hashing (LSH). However, compared to other DNN layers, the classifier has a much lower computational overhead. Thus, the application of CAMs to implement the DNN classifier does not speed up the deep learning system and also incurs additional power consumption by the CAM array. Another work [21] uses a Range-encoding (RE) method for data storage to perform few-shot learning tasks. However, the proposed design requires a significant number of CAM accesses to measure the \(L_{\infty}\) and \(L_{1}\) distances and hence is computationally intensive. We know that dot-product operations are the key computational kernels for deep learning. However, developing large-scale CAM-based deep learning accelerators has been challenging because transforming CAMs from being associative memories to efficient dot-product engines has not been well explored. To this end, exploiting the properties of CAMs to estimate the hamming-distance between input activations and weights (using the random projection hashing method) in a DNN and performing energy-efficient approximate geometric dot-products have been the key contributions of our work. We have shown that our DeepCAM accelerator opens up possibilities to carry out highly parallelized dot-product operations on hardware for large-scale deep learning tasks. Now, we compare our PIM-based DeepCAM architecture with two previously proposed PIM-based works [20, 24] for the acceleration of deep learning workloads. Both of these works conduct inference of CNNs on analog compute macros (based on SRAM or NVM devices) by computing algebraic dot-products. Analog dot-product PIM engines have been shown to facilitate compact and energy-efficient implementation of DNNs on hardware with high parallelism [2]. Table II compares DeepCAM against a VGG11_CIFAR10 CNN evaluated on an RRAM device-based PIM engine using the NeuroSim tool [20] and an SRAM-based PIM engine as described in [24], in terms of dynamic energy and computation cycles per inference. We find that our PIM-based solution (DeepCAM with variable hash length) is \(\sim 71.68\times\) more energy-efficient and requires \(\sim 2.16\times\) fewer computation cycles per inference than [20]. In comparison to [24], DeepCAM is \(\sim 7.27\times\) more energy-efficient, but requires slightly higher computational cycles per inference. ## VI Conclusion This paper proposes DeepCAM, a reconfigurable CAM-based inference accelerator built on critical innovations to alleviate the computation time demands of deep learning workloads. We find that DeepCAM can be up to \(523\times\) faster than Eyeriss, a conventional systolic array architecture, while consuming up to \(109\times\) less energy.
All these savings come with negligible loss in output quality in image recognition tasks. ## Acknowledgement This work was supported in part by C-BRIC, a JUMP center sponsored by DARPA and SRC, Google Research Scholar Award, the National Science Foundation (Grant #1947826), TTI (Abu Dhabi), the DARPA AI Exploration (AIE) program, and the DoE MMICC center SEA-CROGS (Award #DE-SC0023198).
2303.03340
Symbolic Synthesis of Neural Networks
Neural networks adapt very well to distributed and continuous representations, but struggle to generalize from small amounts of data. Symbolic systems commonly achieve data efficient generalization by exploiting modularity to benefit from local and discrete features of a representation. These features allow symbolic programs to be improved one module at a time and to experience combinatorial growth in the values they can successfully process. However, it is difficult to design a component that can be used to form symbolic abstractions and which is adequately overparametrized to learn arbitrary high-dimensional transformations. I present Graph-based Symbolically Synthesized Neural Networks (G-SSNNs), a class of neural modules that operate on representations modified with synthesized symbolic programs to include a fixed set of local and discrete features. I demonstrate that the choice of injected features within a G-SSNN module modulates the data efficiency and generalization of baseline neural models, creating predictable patterns of both heightened and curtailed generalization. By training G-SSNNs, we also derive information about desirable semantics of symbolic programs without manual engineering. This information is compact and amenable to abstraction, but can also be flexibly recontextualized for other high-dimensional settings. In future work, I will investigate data efficient generalization and the transferability of learned symbolic representations in more complex G-SSNN designs based on more complex classes of symbolic programs. Experimental code and data are available at https://github.com/shlomenu/symbolically_synthesized_networks .
Eli Whitehouse
2023-03-06T18:13:14Z
http://arxiv.org/abs/2303.03340v2
# Symbolic Synthesis of Neural Networks ###### Abstract Neural networks adapt very well to distributed and continuous representations, but struggle to generalize from small amounts of data. Symbolic systems commonly achieve data efficient generalization by exploiting modularity to benefit from local and discrete features of a representation. These features allow symbolic programs to be improved one module at a time and to experience combinatorial growth in the values they can successfully process. However, it is difficult to design a component that can be used to form symbolic abstractions and which is adequately overparametrized to learn arbitrary high-dimensional transformations. I present Graph-based Symbolically Synthesized Neural Networks (G-SSNNs), a class of neural modules that operate on representations modified with synthesized symbolic programs to include a fixed set of local and discrete features. I demonstrate that the choice of injected features within a G-SSNN module modulates the data efficiency and generalization of baseline neural models, creating predictable patterns of both heightened and curtailed generalization. By training G-SSNNs, we also derive information about desirable semantics of symbolic programs without manual engineering. This information is compact and amenable to abstraction, but can also be flexibly recontextualized for other high-dimensional settings. In future work, I will investigate data efficient generalization and the transferability of learned symbolic representations in more complex G-SSNN designs based on more complex classes of symbolic programs. Experimental code and data are available here. Keywords: neural networks, symbolic programs, graph neural networks, library learning, distributional program search ## 1 Introduction Most conventional modes of human communication naturally occur in a high-dimensional medium such as text, audio, or images. When processing these media, slight errors can easily accrue across many dimensions of the input where features are distributed. With adequate data, neural networks adapt effectively to these patterns by adjusting many interdependent real-valued parameters. In their basic form, however, neural models are data inefficient. In settings where more data cannot be sourced, pretraining has arisen as the most general and data-driven answer to this challenge. Pretraining allows practitioners to repurpose data that is not specifically suited to a task to enrich a representation or initialize parameters of a neural model. While pretraining grants a model exposure to out-of-distribution data which may be irrelevant or inappropriate to the task at hand, the benefits of this exposure can outweigh the costs. In symbolic systems, modularity can allow for greater data efficiency and generalization in the presence of local and discrete features. By examining the particular dimensions of an input which cause systems to fail, developers can trace issues back to specific modules. When a failure in one module is corrected, the system benefits from this across all combinations of values and dimensions, witnessing potentially exponential growth in the number of inputs on which it succeeds. In contrast to pretraining, the use of modular functionalities to process local and discrete features offers a direct solution to the challenge of data efficient generalization.
However, it is difficult to design a unit that is appropriate both for learning arbitrary high-dimensional transformations and for composing symbolic abstractions, as the overparametrization that gives neural networks their flexibility also makes their semantics unstable. This makes it difficult to judge which combinations of primitive operations are useful enough to merit the creation of new abstractions. I present a novel approach based on Graph-based Symbolically Synthesized Neural Networks (G-SSNNs), a form of neural module in which symbolic abstraction and gradient-based learning play complementary roles. In a G-SSNN, the output of a symbolic program is used to determine a set of local and discrete features that are injected into the representations processed by the network. The symbolic program is chosen by an evolutionary algorithm, which is used to develop and evaluate a population of G-SSNNs with varying designs. Through the use of distributional program search and library learning (a mechanism for identifying useful new abstractions), the evolutionary algorithm may influence both the content of injected features and the degree of locality they exhibit. In this work, I apply evolving populations of G-SSNNs to a binary prediction task over high-dimensional data that humans find simple and easy to interpret from handfuls of examples. I show that these populations exhibit a reliable pattern of both heightened and curtailed generalization relative to baseline models after training on small quantities of data. ## 2 G-SSNNs In a G-SSNN, the output of a symbolic program fixes a mechanism that injects a set of local and discrete features into the input. A graph may be used to represent not just these individual features as node and edge representations, but also their relationships to each other. After a symbolic program has generated a graph \(g\in G_{k}\) with symbolic node and edge representations \(v\in\mathbb{N}^{k}\), we may embed arbitrary input vectors into this structure piecewise with a function \(e\colon\mathbb{N}^{k}\times\mathbb{R}^{m}\to\mathbb{R}^{q}\). Each symbolic feature thus provides an instruction as to how a region of the input vector should be transformed. With \(\texttt{graph\_map}_{e}:G_{k}\times\mathbb{R}^{m}\to G^{\prime}_{q}\), a function that applies \(e\) for each symbolic feature \(v\in g\), we compute a graph \(g^{\prime}\in G^{\prime}_{q}\) with transformed, real-valued features which may be processed by a standard GNN \(m_{gnn}:G^{\prime}_{q}\to\mathbb{R}^{n}\). The G-SSNN \(m:\mathbb{R}^{m}\to\mathbb{R}^{n}\) is then a triplet of \(g\), \(e\), and \(m_{gnn}\) used to compute \[m(x)=m_{gnn}(\texttt{graph\_map}_{e}(g,x)).\] Provided that \(e\) is differentiable with respect to \(x\), the G-SSNN \(m\) may be augmented with a readout or global pooling function to create a generic neural module (a minimal code sketch of this computation appears below). There are many definitions of \(G_{k}\) and \(e\) practitioners may adopt depending on the intended role of the injected features. The definitions used in this work are presented in section A.1 of the appendix. ## 3 Evolutionary Framework While the GNN of a G-SSNN can be trained with standard gradient-based optimization techniques, its graph structure is a hyperparameter whose values must be explored with search.
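Before turning to how that search is conducted, the following minimal sketch (our own toy construction, assuming PyTorch; the embedding function and the stand-in GNN are illustrative, not the definitions of section A.1) spells out the forward computation \(m(x)=m_{gnn}(\texttt{graph\_map}_{e}(g,x))\):

```python
import torch

def embed(v, x, W):
    # Toy embedding e: N^k x R^m -> R^q. Each symbolic feature v selects a
    # slice of x and adds a static, feature-dependent offset: a term that
    # behaves like a bias yet stays fixed during gradient-based learning.
    lo = int(v[0]) % (x.numel() - 4)
    return W @ x[lo:lo + 4] + v.float().mean()   # differentiable in x

def graph_map(g, x, W):
    # Apply e to every symbolic node feature in g, keeping the topology.
    return [(edges, embed(v, x, W)) for edges, v in g]

def gssnn_forward(g, x, W, gnn):
    # m(x) = m_gnn(graph_map_e(g, x)); `gnn` stands in for the GINE stack.
    return gnn(graph_map(g, x, W))

W = torch.randn(8, 4, requires_grad=True)
g = [((0, 1), torch.tensor([3, 1])), ((1, 0), torch.tensor([7, 2]))]
x = torch.randn(16)
pool = lambda gp: torch.stack([h for _, h in gp]).mean(0)  # mean "readout"
out = gssnn_forward(g, x, W, pool)
out.sum().backward()              # gradients reach W; g and its v stay fixed
```

The property the sketch preserves is the one that matters for G-SSNNs: the symbolic features \(v\) and the graph topology remain fixed while gradients flow through \(e\) and the GNN parameters.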
To conduct this search, I repurpose the cycle of library learning and distributional program search pioneered by the DreamCoder system (Ellis et al., 2021) within an evolutionary framework, incorporating the improved STITCH library learning tool of Bowers et al. (2023) and heap search algorithm of Matricon et al. (2022). As in DreamCoder, I perform rounds of library learning and distributional program search, after which I evaluate the current crop of programs on the task. Unlike symbolic programs, however, G-SSNNs may exhibit widely-varying degrees of generalization. As such, it is not appropriate to take their performance on a handful of training examples as indicative of goodness of fit. Rather, a heuristic must be applied to select those symbolic programs whose corresponding G-SSNNs are expected to generalize best based on their training performance. G-SSNNs, along with such a heuristic, may be understood as generalizing the evaluation mechanisms used for symbolic programs within the DreamCoder system. In this work, I develop a single population of G-SSNNs for application to a single task. However, in principle, nothing prevents the multitask Bayesian program learning framework of the DreamCoder system from being applied to the development of multiple populations of G-SSNNs. After each evolutionary step, a new population is produced for each task. At the next step of library learning, novel primitives are judged according to which provide the greatest degree of compression of this corpus (i.e. reduction in the number of primitives needed to express a program). The creation of new primitives is therefore guided by the pruning of the population that has occurred at previous stages. By inferring the distribution of unigrams of primitives in the current population, we may guide search towards programs that utilize these primitives at similar frequencies. Through these mechanisms, the choice of heuristic influences the evolution of future G-SSNN designs. Though no heuristic is best in all cases, the class of heuristics that discard models with the Lowest Training Performance (LTP) is simple and agreeable for a wide variety of applications. In this work, I utilize a heuristic I refer to as rank-LTP-50, which directs that the bottom half of G-SSNNs in terms of ranked training performance be discarded at each step of evolution. For additional details on the search space explored with these techniques, see section A.2 of the appendix. ## 4 Related Work Neural Architecture Search (NAS) is the process of using automated methods to explore the discrete space of neural network topologies. In a G-SSNN \(m\), the symbolic program may be considered to influence the topology of the model through its role in structuring the applications of graph convolutions within \(m_{gmn}\), and also through its role in distinguishing dimensions of the input through the embedding function \(e\). However, despite variation in the design of their search space (Jin et al., 2022), search strategies (Real et al., 2020), and model evaluation strategies (Abdelfattah et al., 2021), all approaches I reviewed assume that parameters will be learned with gradient-based optimization before each model is assessed. This differs from G-SSNNs, as the embedding function described in section A.1 includes a term whose function is equivalent to that of a bias parameter, yet remains static throughout gradient-based learning. 
The act of adding features to a representation through a static mechanism seems more analogous to an act of preprocessing, as might be performed with word embeddings (Mikolov et al., 2013) or positional encoding (Vaswani et al., 2017). However, I found no works in which similar acts of preprocessing were designed with the help of distributional program search and library learning. G-SSNNs draw significant inspiration from the DreamCoder system; however, I did not find examples of neurosymbolic systems in which adjacent parametric and nonparametric mechanisms are separately optimized through gradient-based learning and library learning. ## 5 Experiments ### Setup & Hyperparameters To facilitate comparison, I utilize a base model architecture consisting of a transformer with a convolutional stem (Xiao et al., 2021). I use subpixel convolution to perform downsampling from \(128\times 128\) to \(8\times 8\)(Shi et al., 2016), followed by a pointwise convolution and three residual convolutional units. The patches are then flattened and fed to a simplified Vision Transformer (ViT) (Beyer et al., 2022) with 2 transformer blocks. Each block has an MLP dimension of 512 and 4 128-dimensional attention heads. Global average pooling is then applied to produce an output of the target dimension for baseline models, or with the dimensionality of a node/edge representation for experimental models in which the output is fed to a G-SSNN unit. To construct G-SSNNs, I use the Graph Isomorphism Network with Edge features (GINE) of Hu et al. (2020) as implemented in the DGL library (Wang et al., 2019). I use 512-dimensional node and edge representations, which are passed through three GINE layers parametrized by an MLP with identical input and output dimensionalities. Between GINE layers, I apply 30% dropout (Srivastava et al., 2014). To produce the final output, I apply average pooling followed by another MLP which projects its input to the target output. For both baseline and experimental models I use a batch size of 8, train each model for 16 epochs with the Adam optimizer (Kingma and Ba, 2015), and reduce the learning rate by a factor of 1/2 when a new low in the loss has not been experienced in the first 50 batches, or in the most recent 65 batches since the learning rate was last reduced. Across iterations of evolutionary selection, I retain a population of at most 50 and apply the rank-LTP-50 heuristic if the size of the population is greater than or equal to 25. I run distributional program search for a 15 second period. In the course of this run, I retain only the most likely \(50-\)population_size novel programs; however, if programs are discovered that are shorter than, but semantically equivalent to, programs currently in the population, these shorter reformulations are kept and their equivalents are set aside. I do not consider programs that consist of more than 150 primitives from the most updated DSL. Prior to program search, I reweight the parameters of the program distribution such that the log likelihoods of the most and least likely primitives differ by no more than 0.5. I implemented this transformation such that it preserves the relative distance of each primitive's log likelihood from the mean and therefore does not alter the DSL's total probability mass.1 During library learning, I perform up to three rounds of compression, each producing a primitive of arity 2 or less. 
All other parameters of the STITCH compression routine were allowed to take their default values.2 Footnote 1: This and other domain-general aspects of program search are implemented in the antireduce library. Footnote 2: Explanation of these settings may be found in the documentation of the stitch_core Python package. ### Dataset The RAVEN dataset (Zhang et al., 2019) is an artificial dataset consisting of visual puzzles known as Raven's Progressive Matrices (RPMs). Having originated as a psychometric test, RPMs have since been extensively studied as tests of Abstract Visual Reasoning (AVR) in both symbolic and neural AI systems. An RPM is a \(3\times 3\) matrix of panels depicting simple arrangements of polygons. Within each row, panels exhibit one of a handful of consistent patterns with respect to each attribute (size, shape, color, angle, position, and multiplicity) of their polygons. These patterns are easily observed by humans from relatively few examples. In comparison, neural models typically require orders of magnitude more data to match or surpass human performance. Classically, the final panel of the last sequence (bottom right) is omitted and some number of candidate panels are provided, only one of which contains polygons which obey the same patterns exhibited in the first two rows for all attributes. The goal of the solver is to distinguish the correct panel from the pool of candidates. Odouard and Mitchell (2022) have demonstrated that several recent high-performing neural models from the RAVEN literature perform poorly when more exhaustively evaluated on their understanding of the various instantiations of an individual pattern. This inspired the Prediction of Constant Color Pattern (PCCP) task, a binary prediction task in which a model examines a complete and correct RPM and predicts whether it exhibits constancy of polygon color across rows. The two classes in this task (constant and non-constant) were relatively evenly distributed among generated data without additional constraints on pattern instantiation or attribute values.3 Furthermore, color is unrelated to and unaffected by the settings of other attributes, so indiscernible spurious correlation with the presence or absence of other pattern-attribute pairings is unlikely, even with small data. Footnote 3: Some constraints on pattern instantiation and attribute values were necessary to ensure legibility at the desired resolution of \(128\times 128\). Specifically, polygon size was kept large and not allowed to vary, and oblique angles of polygon rotation were disallowed. Neither of these constraints contributes to determining the color constancy of an RPM. The RAVEN-like data used in my experiments were generated with a reimplementation of the original RAVEN data generation code, extended to support creation of RPMs at lower resolutions than were originally accommodated. I use a training dataset of 128 RPMs and a validation set of 1200 RPMs. These sizes make it easy to fit to the training data, but difficult to generalize without appropriate inductive biases. Figure 1: Training and validation set performances of populations of neural networks. Each model of a population is represented in both unsmoothed trend lines of a plot at parallel points with respect to the x-axis. Parallel points are sorted in descending order of training set accuracy. The baseline plot depicts the spread of training and validation set accuracies for 50 instantiations of a baseline model.
The experimental plots depict the spread of training set and validation set accuracies for populations of G-SSNNs at various stages of evolution. At stage \(n\), the population has undergone \(n\) rounds of heuristic selection and program synthesis. At iterations 1-5, the populations were of sizes 2, 7, 44, 50 and 50, respectively. Dotted lines represent unsmoothed data and solid lines represent smoothed data with a window size of 6. I present validation set accuracies in both smooth and unsmoothed form except where the population of models seem too small to meaningfully benefit from smoothing (\(n<15\)). ### Results Figure 1 presents comparative results for populations of baseline and experimental models. Generally, experimental models exhibit a much smaller gap in performance between seen and unseen data than baseline models do. For both baseline and experimental models, validation set accuracy is highest for models whose training set accuracy is in the range of 80-90%. Among baseline models less than 20% fall in this range. After the first iteration, 30% or more of the population of experimental models fall in this range. For both baseline and experimental models, validation set accuracy is lowest for models whose training set accuracy falls below 80%. While only one or two baseline models fall in this range, by the third iteration 20% or more of experimental models fall in this range. The experimental condition thus makes both peak validation set accuracies and degraded validation set accuracies more numerous, providing evidence that the use of G-SSNN modules modulates the data efficiency and generalization of the baseline model. It also appears that among experimental models, there is a clearer relationship between training set accuracy and validation set accuracy, making it easier to select models that exhibit goodness of fit on both seen and unseen data. The distributions of validation set accuracies illustrated in these data suggest that it is actually models with a training set performance of a more intermediate rank that are most likely to exhibit peak generalization. In these models, injected features raise validation set performance without lowering training set performance to a harmful degree. In future work and applications of G-SSNNs, it may therefore be beneficial to explore the space of Intermediate Training Performance (ITP) heuristics for selecting from the population. ## 6 Discussion Across high-dimensional media, there are tasks that require data efficient generalization. In these settings, modular systems may be better at responding to local and discrete features that simplify and accelerate learning. In G-SSNNs, we stimulate modularity by injecting a fixed set of local and discrete features which the model must contextualize for the task at hand. This allows G-SSNN modules to be trained with standard techniques. By training G-SSNNs, we also derive information about desirable semantics of symbolic programs without manual engineering. Unlike the knowledge contained in neural network weights, this information can be abstracted upon just as the functions of handwritten software libraries are today. However, unlike the knowledge contained in purely symbolic systems, it can also be flexibly adapted to high-dimensional settings in ways that cannot be concisely expressed with symbols. 
In advancing this paradigm, I believe that G-SSNNs capture much of what is desirable about the promise of common sense: the ability to quickly adapt and elaborate on compact knowledge in a range of perceptually-grounded circumstances. In future work, I will investigate data efficient generalization and the transferability of learned symbolic representations in more complex G-SSNN designs based on more complex classes of symbolic programs. Such programs could include operations that create more varied node and edge features and access richer external sources of information to inform their values. Based on these features, embedding functions could utilize more complex topological structures, or incorporate transformations from pretrained models. ## 7 Conclusion I have presented G-SSNNs, a class of neural modules whose representations are modified with synthesized symbolic programs to include a fixed set of local and discrete features. I presented the results of applying neural models containing G-SSNN modules to a binary prediction task over Raven's Progressive Matrices (RPMs), psychometric tests of Abstract Visual Reasoning (AVR) on which humans exhibit data efficient generalization. These results demonstrate that the injected features of G-SSNNs modulate the data efficiency and performance of baseline neural models. In addition to their potential to improve data efficiency and generalization on tasks involving high-dimensional transformations, G-SSNNs also allow us to derive information about the desirable semantics of symbolic programs without manual engineering. This information is compact and amenable to abstraction, and can also be flexibly contextualized for other high-dimensional settings in response to data. In future work, I will investigate data efficient generalization and the transferability of learned symbolic representations in more complex G-SSNN designs based on more complex classes of symbolic programs.
2310.16675
Agreeing to Stop: Reliable Latency-Adaptive Decision Making via Ensembles of Spiking Neural Networks
Spiking neural networks (SNNs) are recurrent models that can leverage sparsity in input time series to efficiently carry out tasks such as classification. Additional efficiency gains can be obtained if decisions are taken as early as possible as a function of the complexity of the input time series. The decision on when to stop inference and produce a decision must rely on an estimate of the current accuracy of the decision. Prior work demonstrated the use of conformal prediction (CP) as a principled way to quantify uncertainty and support adaptive-latency decisions in SNNs. In this paper, we propose to enhance the uncertainty quantification capabilities of SNNs by implementing ensemble models for the purpose of improving the reliability of stopping decisions. Intuitively, an ensemble of multiple models can decide when to stop more reliably by selecting times at which most models agree that the current accuracy level is sufficient. The proposed method relies on different forms of information pooling from ensemble models, and offers theoretical reliability guarantees. We specifically show that variational inference-based ensembles with p-variable pooling significantly reduce the average latency of state-of-the-art methods, while maintaining reliability guarantees.
Jiechen Chen, Sangwoo Park, Osvaldo Simeone
2023-10-25T14:40:33Z
http://arxiv.org/abs/2310.16675v2
Agreeing to Stop: Reliable Latency-Adaptive Decision Making via Ensembles of Spiking Neural Networks ###### Abstract Spiking neural networks (SNNs) are recurrent models that can leverage sparsity in input time series to efficiently carry out tasks such as classification. Additional efficiency gains can be obtained if decisions are taken as early as possible as a function of the complexity of the input time series. The decision on when to stop inference and produce a decision must rely on an estimate of the current accuracy of the decision. Prior work demonstrated the use of conformal prediction (CP) as a principled way to quantify uncertainty and support adaptive-latency decisions in SNNs. In this paper, we propose to enhance the uncertainty quantification capabilities of SNNs by implementing ensemble models for the purpose of improving the reliability of stopping decisions. Intuitively, an ensemble of multiple models can decide when to stop more reliably by selecting times at which most models agree that the current accuracy level is sufficient. The proposed method relies on different forms of information pooling from ensemble models, and offers theoretical reliability guarantees. We specifically show that variational inference-based ensembles with p-variable pooling significantly reduce the average latency of state-of-the-art methods, while maintaining reliability guarantees. Spiking neural networks, conformal prediction, delay adaptivity, Bayesian learning. ## I Introduction **Context:** With the advent of large language models, sequence models are currently among the most studied machine learning techniques. Unlike methods based on conventional neural networks, such as transformers, spiking neural networks (SNNs) process time series with the prime objective of optimizing energy efficiency, particularly in the presence of sparse inputs [1, 2, 3]. The energy consumption of an SNN depends on the number of spikes generated internally by the constituent spiking neurons [4], and inference energy can be further reduced if decisions are taken as early as possible as a function of the complexity of the input time series [5]. The decision on when to stop inference and produce a decision must rely on an estimate of the current accuracy of the decision, as stopping too early may cause unacceptable drops in accuracy. The delay-adaptive rule proposed in [5] uses the SNN's output confidence levels to estimate the true accuracy, while reference [6] determined the stopping time via a separate policy network. SNN models, like their conventional neural network counterpart, tend to be poorly calibrated, producing overconfident decisions [7] (see also Fig. 1 in [8]). As a consequence, the schemes in [5, 6] do not offer any reliability guarantee at the stopping time. To address this problem, recent work [8] demonstrated the use of _conformal prediction_ (CP) [9, 10, 11, 12] as a principled way to quantify uncertainty and support adaptive-latency decisions in SNNs. In the SpikeCP method introduced in [8], the SNN produces _set predictions_ consisting of a subset of the set of all possible outputs. For instance, given as an input electroencephalography (EEG) or electrocardiography (ECG) time series, a set predictor determines a set of plausible conditions that a doctor may need to test for. Accordingly, for many applications, set predictors provide actionable information, while also offering an inherent measure of uncertainty in the form of the size of the predicted set [9]. 
SpikeCP leverages the theoretical properties of CP to define reliable stopping rules based on the size of the predicted set. **Motivation:** Predictive uncertainty can be decomposed into _aleatoric uncertainty_, which refers to the inherent randomness of the data-generation mechanism, and _epistemic uncertainty_, which arises due to the limited knowledge that can be extracted from a finite data set [13, 14]. While aleatoric uncertainty is captured by individual machine learning models, like SNNs, epistemic uncertainty is typically accounted for by using _ensembles_ of models. In particular, epistemic uncertainty is quantified by gauging the level of _disagreement_ among the models in the ensembles [13, 14]. By relying on conventional SNN models, SpikeCP does not attempt to quantify _epistemic uncertainty_, focusing only on aleatoric uncertainty quantification. The application of Bayesian learning and model _ensembling_ as a means to quantify epistemic uncertainty in SNNs was investigated in [15, 16, 17], showing improvements in standard calibration metrics. In this paper, we propose to enhance the uncertainty quantification capabilities of SpikeCP by implementing ensemble SNN models for the purpose of improving the reliability of stopping decisions. Intuitively, an ensemble of multiple models can decide when to stop more reliably by selecting times at which most models agree that the current accuracy level is sufficient. The proposed method relies on tailored information pooling strategies across the models in the ensemble that preserve the theoretical guarantees of CP and SpikeCP. Fig. 1: In the proposed system, an ensemble of \(K\) SNN models processes an input, agreeing on when to stop in order to make a classification decision. Each \(k\)th SNN model produces a score \(p_{c}^{k}\) for every candidate class \(c=1,...,C\). The scores are combined to determine in an adaptive way whether to stop inference or to continue processing the input. **Main contributions:** The main contributions of this work are summarized as follows. \(\bullet\) We propose a novel ensemble-based SNN model that can reliably decide when to stop, producing set predictions with coverage guarantees and with an average latency that is significantly lower than the state of the art. \(\bullet\) We compare two ensembling strategies, namely _deep ensembles_ (DE) [18, 19] and Bayesian learning via _variational inference_ (VI) [14, 15], and introduce two methods to efficiently combine the decisions from multiple models, namely _confidence merging_ (CM) and _p-variable merging_ (PM). In both cases, the resulting set predictors satisfy theoretical reliability guarantees. \(\bullet\) Experiments show that VI-based ensembles with PM significantly reduce the average latency of state-of-the-art methods, while maintaining reliability guarantees. **Organization:** The remainder of the paper is organized as follows. Section II presents the problem, and Section III introduces the proposed framework. Section IV describes the experimental setting and results. ## II Problem Definition In this paper, we study adaptive-latency multi-class classification for time series via SNNs [5, 6, 8]. As illustrated in Fig. 1, unlike prior work [5, 6, 8], we propose to enhance the reliability of stopping decisions by explicitly accounting for epistemic uncertainty when deciding whether to stop or to continue processing the input.
The end goal is to produce reliable set predictions with complexity and latency tailored to the difficulty of each example. In this section, we start by defining the problem and performance metrics. ### _Multi-Class Classification with SNNs_ We wish to classify a vector time series \(\mathbf{x}=\mathbf{x}_{1},\mathbf{x}_{2},...\), with \(N\times 1\) time samples \(\mathbf{x}_{t}=[x_{t,1},...,x_{t,N}]\) into \(C\) classes using an SNN model. The entries of input vector \(\mathbf{x}_{t}\) can be arbitrary, although typical SNN implementations assume binary inputs [20]. As shown in Fig. 1, based on the time samples \(\mathbf{x}^{t}=(\mathbf{x}_{1},...,\mathbf{x}_{t})\) observed so far, at any time \(t\), the \(C\) read-out neurons of the SNN produce the \(C\times 1\) binary vector \(\mathbf{y}_{t}=[y_{t,1},...,y_{t,C}]\), with entries equal to 1 representing spikes. Internally, an SNN model can be viewed as a recurrent neural network (RNN) with binary activations. Its operation is defined by a vector \(\mathbf{\theta}\) of synaptic weights, which determines the response of each spiking neuron to incoming spikes. As in most existing art and implementations, we adopt a standard spike response model (SRM) [21] for the spiking neurons. Carrying out decision on the basis of the outputs of the \(C\) read-out neurons is typically achieved by _rate decoding_[22]. In rate decoding, at each time \(t\), the SNN maintains a _spike count vector_\(\mathbf{r}(\mathbf{x}^{t})=[r_{1}(\mathbf{x}^{t}),...,r_{C}(\mathbf{x}^{t})]\) in which each \(c\)th entry \[r_{c}(\mathbf{x}^{t})=\sum_{t^{\prime}=1}^{t}y_{t^{\prime},c} \tag{1}\] counts the number of spikes emitted so far by read-out neuron \(c\). A normalized measure of _confidence_ can then be obtained via the softmax function as [22] \[f_{c}(\mathbf{x}^{t})=e^{r_{c}(\mathbf{x}^{t})}/\sum_{c^{\prime}=1}^{C}e^{r_{c^{\prime }}(\mathbf{x}^{t})}, \tag{2}\] for each class \(c\). Conversely, the _loss_ assigned by the SNN model to label \(c\) for input \(x^{t}\) is given by the _log-loss_ \[s_{c}(\mathbf{x}^{t})=-\log f_{c}(\mathbf{x}^{t}). \tag{3}\] The general goal of this work is to make reliable classification decisions at the earliest possible time \(t\) on the basis of the confidence levels (2), or equivalently of the losses (3), produced by SNN classifiers. ### _Ensemble Inference and Learning for SNNs_ Conventional SNN models consist of a single SNN making decisions on the basis of the confidence levels (2), or (3), at a fixed time \(t=T\). Neuroscience has long explored the connection between networks of spiking neurons and Bayesian reasoning [23], and the recent work [15] has explored the advantages of Bayesian learning and model ensembling in terms of uncertainty quantification for SNN classifiers. In this work, we leverage the enhanced uncertainty quantification capabilities of ensemble models to improve the reliability of adaptive-latency decision making via SNN models. As illustrated in Fig. 1, in the considered setting, _\(K\) pre-trained_ SNN classifiers are used in parallel on an input sequence \(\mathbf{x}_{1},\mathbf{x}_{2},...\). The operation of each \(k\)th SNN classifier is defined by a vector \(\mathbf{\theta}^{k}\) of synaptic weights as explained in the previous subsection. We specifically consider two design methods for the ensembles, namely _deep ensembles_ (DE) [19] and _Bayesian learning_ via _variational inference_ (VI) [14]. 
In DE, the \(K\) models are obtained by running conventional SNN training methods based on surrogate gradient [24] with \(K\) independent weight initializations, with each weight selected in an independent and identically distributed (i.i.d.) manner as Gaussian \(\mathcal{N}(0,\sigma^{2})\) variables for some fixed variance \(\sigma^{2}\). In contrast, in VI, assuming an i.i.d. Gaussian prior distribution \(\mathcal{N}(0,\sigma^{2})\) for the model parameter vector \(\mathbf{\theta}\), one optimizes over a variational posterior distribution \(\mathcal{N}(\mathbf{\mu},\mathbf{\zeta}^{2})\) parameterized by a mean vector \(\mathbf{\mu}\) and a diagonal covariance matrix with diagonal elements given by the vector \(\mathbf{\zeta}^{2}\). The optimization is done by using gradient descent via the reparameterization trick [15]. At inference time, the \(K\) models are generated by sampling the weight vectors \(\mathbf{\theta}^{k}\) from the optimized distribution \(\mathcal{N}(\mathbf{\mu},\mathbf{\zeta}^{2})\). With DE, generating the \(K\) models in the ensemble requires retraining from scratch, while this can be done by simply drawing Gaussian variables in the case of VI. Therefore, with DE, the ensemble should be practically shared across many input test sequences, while for VI it is possible to draw new ensembles more frequently, possibly even for each new input. ### _Set Prediction and Delay-Adaptivity_ As mentioned, we focus on delay-adaptive classifiers in which the time at which a decision is made is a function of the input \(\mathbf{x}\) through the vector \(\mathbf{p}(\mathbf{x}^{t})=[p_{1}(\mathbf{x}^{t}),...,p_{C}(\mathbf{x}^{t})]\) of confidence levels (2) produced by the read-out neurons. Intuitively, when the model confidence is high enough, the classifier can produce a decision. We denote as \(T_{s}(\mathbf{x})\) the time at which a decision is made for input \(\mathbf{x}\). Furthermore, we allow the decision to be in the form of a subset \(\Gamma(\mathbf{x})\subseteq\{1,...,C\}\) of the set of \(C\) labels [9]. As mentioned in Sec. I, set decisions provide actionable information in many applications of interest, such as robotics, medical diagnosis, and language modelling, and they provide a measure of uncertainty via the predicted set's size \(|\Gamma(\mathbf{x})|\) [9]. The performance of the classifier is measured in terms of reliability and latency. A predictive set \(\Gamma(\mathbf{x})\) is said to be _reliable_ if the probability that the correct label \(c\) is included in the set is no smaller than a pre-determined target accuracy \(p_{\rm targ}\), i.e., \[\Pr(c\in\Gamma(\mathbf{x}))\geq p_{\rm targ}, \tag{4}\] where the probability is taken with respect to the distribution of the test example \((\mathbf{x},c)\), as well as of the calibration data to be discussed next. The latency of the set prediction is defined as \(\mathbb{E}[T_{s}(\mathbf{x})]\), where the expectation is taken over the same distribution as for (4). The models are assumed to be pre-trained, and we assume access to a separate _calibration data set_ \[\mathcal{D}^{\rm cal}=\{(\mathbf{x}[i],c[i])\}_{i=1}^{|\mathcal{D}^{\rm cal}|}, \tag{5}\] with \(|\mathcal{D}^{\rm cal}|\) examples \((\mathbf{x}[i],c[i])\) generated i.i.d. from the same distribution followed by the test example \((\mathbf{x},c)\) [8, 9]. As we will discuss in the next section, calibration data is used to optimize the process of deciding when to stop so as to guarantee the reliability requirement (4).
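As a concrete illustration of the decoding quantities defined in this section, the short NumPy sketch below (our illustration under assumed array shapes, not the authors' code) computes the spike counts (1), confidence levels (2), and log-losses (3) from the binary outputs of the \(C\) read-out neurons:

```python
import numpy as np

def rate_decode(y):
    # y: (t, C) binary spike outputs of the C read-out neurons up to time t.
    r = y.sum(axis=0)             # spike counts r_c, eq. (1)
    e = np.exp(r - r.max())       # numerically stable form of eq. (2)
    f = e / e.sum()               # confidence levels f_c
    s = -np.log(f)                # log-losses s_c, eq. (3)
    return r, f, s

rng = np.random.default_rng(1)
y = rng.random((40, 10)) < 0.1    # toy spike train with C = 10 classes
r, f, s = rate_decode(y)
print(f.argmax(), f.max())        # currently most confident class
```

These losses, evaluated on both the test input and the calibration set, are exactly the inputs consumed by the stopping rules of the next section.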
## III Ensemble-based Adaptive Classification via SNNs In this section, we introduce _ensemble-based SpikeCP_, a novel framework for delay-adaptive classification that wraps around any pre-trained ensemble of SNN classifiers, including ensembles obtained via DE and VI. We propose two implementations corresponding to different ways of pooling information across the \(K\) models in the ensemble. ### _SpikeCP_ We first review SpikeCP [8], which applies to a single SNN model, i.e., with \(K=1\). The presentation here, unlike in [8], adopts the language of p-variables (see, e.g., [12, 25]) in order to facilitate the extension to ensemble models. SpikeCP fixes a pre-determined set of _checkpoint times_ \(\mathcal{T}_{s}\subseteq\{1,...,T\}\) at which inference may stop to produce a decision. The information available to determine whether to stop or not consists of the losses \(\{s_{c}(\mathbf{x}^{t})\}_{c=1}^{C}\) in (3) for the current input \(\mathbf{x}^{t}\), as well as the corresponding losses \(s_{c[i]}(\mathbf{x}^{t}[i])\) for the calibration data points indexed by \(i=1,...,|\mathcal{D}^{\rm cal}|\). For each class \(c\), SpikeCP computes the quantity \[p_{c}(\mathbf{x}^{t})=\frac{\sum_{i=1}^{|\mathcal{D}^{\rm cal}|}\mathbb{1}\left(s_{c}(\mathbf{x}^{t})\leq s_{c[i]}(\mathbf{x}^{t}[i])\right)+1}{|\mathcal{D}^{\rm cal}|+1}, \tag{6}\] where \(\mathbb{1}\left(\cdot\right)\) equals 1 if the argument is true and 0 otherwise. The quantity (6) corresponds, approximately, to the fraction of calibration data points whose loss is no smaller than the loss for label \(c\) when assigned to the current test input \(\mathbf{x}^{t}\). The corrections by 1 at the numerator and denominator are required to guarantee the following property, which follows from the standard theory of CP [26, Proposition 1]. **Theorem 1**.: _Let \(\mathcal{D}^{t,\rm cal}=\{(\mathbf{x}^{t}[i],c[i])\}_{i=1}^{|\mathcal{D}^{\rm cal}|}\) be the calibration data set with samples up to time \(t\), and define as \(\mathcal{H}^{t}_{c}\) the hypothesis that the pair \((\mathbf{x}^{t},c)\) and the calibration data \(\mathcal{D}^{t,\rm cal}\) are i.i.d. The quantity (6) is a p-variable for null hypothesis \(\mathcal{H}^{t}_{c}\), i.e., we have the conditional probability_ \[\Pr(p_{c}(\mathbf{x}^{t})\leq\alpha|\mathcal{H}^{t}_{c})\leq\alpha, \tag{7}\] _for all \(\alpha\in(0,1)\), where the probability is taken over the distribution of test and calibration data._ At each checkpoint \(t\in\mathcal{T}_{s}\), SpikeCP constructs a predictive set by including all classes \(c\) with p-variable larger than the threshold \(\alpha\) \[\Gamma(\mathbf{x}^{t})=\{c\in\mathcal{C}:p_{c}(\mathbf{x}^{t})>\alpha\}. \tag{8}\] By (7), the probability that the set (8) does not include the true test label \(c\) is smaller than or equal to \(\alpha\), or equivalently [26, Proposition 1] \[\Pr(c\in\Gamma(\mathbf{x}^{t}))\geq 1-\alpha. \tag{9}\] Accordingly, SpikeCP sets \(\alpha=(1-p_{\rm targ})/|\mathcal{T}_{s}|\) to ensure that condition (9) is satisfied irrespective of which checkpoint is selected. As detailed in [8], this is a form of _Bonferroni correction_ [27, Appendix 2]. SpikeCP stops inference at the first time \(T_{s}(\mathbf{x})\) for which the size of the predicted set is no larger than a target set size \(I_{\rm th}\), so the stopping time is given by \[T_{s}(\mathbf{x})=\min\{t\in\mathcal{T}_{s}:|\Gamma(\mathbf{x}^{t})|\leq I_{\rm th}\}.
\tag{10}\] The threshold \(I_{\rm th}\) is a design choice that is dictated by the desired informativeness of the resulting set predictor. For any threshold \(I_{\rm th}\), by construction, SpikeCP satisfies the reliability property (4) [8, Theorem 1]. ### _Ensemble-based SpikeCP with Confidence Merging_ In the proposed ensemble-SNN architecture in Fig. 1, each SNN classifier parameterized by \(\mathbf{\theta}^{k}\), \(k=1,...,K\), produces a generally different probability \(f^{k}_{c}(\mathbf{x}^{t})\) in (2), or correspondingly a different loss \(s^{k}_{c}(\mathbf{x}^{t})\), for each class \(c\) given an input \(\mathbf{x}^{t}\). In this paper, we study and compare two combining mechanisms. First, in order to produce a confidence level for each possible label \(c\), the confidence levels output by the \(K\) models in the ensemble can be combined using the generalized mean [28] \[f_{c}(\mathbf{x}^{t})=\left(\frac{1}{K}\sum_{k=1}^{K}\left(f^{k}_{c}(\mathbf{x}^{t})\right)^{r}\right)^{1/r} \tag{11}\] for some exponent \(r\in[-\infty,+\infty]\). When \(r=1\), the ensemble probability (11) reduces to standard model averaging. Other values of \(r\) may in practice be advantageous, e.g., to enhance robustness [29, 30], with the maximum operation recovered for \(r=\infty\) and the minimum operation obtained with \(r=-\infty\). The probability (11) is used to calculate the score via (3), which is then directly used in (6) and (8) to determine the set predictor. Note that the same combination in (11) is also applied to the calibration data. By the same arguments as for SpikeCP, this approach guarantees the reliability condition (4) by setting \(\alpha=(1-p_{\text{targ}})/|\mathcal{T}_{s}|\). ### _Ensemble-based SpikeCP with P-Variable Merging_ Given the reliance of the predicted set (8) on p-variables, directly merging the confidence levels may be suboptimal [31]. Accordingly, in this subsection, we explore the idea of directly pooling the p-variables, rather than combining confidence levels. To this end, we first calculate the losses for the calibration set by using the \(k\)th model as \(\{s_{c[i]}^{k}(\mathbf{x}^{t}[i])\}_{i=1}^{|\mathcal{D}^{\text{cal}}|}\) for \(k=1,...,K\). Then, for a test input \(\mathbf{x}^{t}\), we evaluate the p-variable (6) for the \(k\)th model as \[p_{c}^{k}(\mathbf{x}^{t})=\frac{1+\sum_{i=1}^{|\mathcal{D}^{\text{cal}}|}\mathbbm{1}(s_{c}^{k}(\mathbf{x}^{t})\leq s_{c[i]}^{k}(\mathbf{x}^{t}[i]))}{|\mathcal{D}^{\text{cal}}|+1}. \tag{12}\] The p-variables \(\{p_{c}^{k}(\mathbf{x}^{t})\}_{k=1}^{K}\) are then pooled by using any _p-merging_ function \(F(\cdot)\), as defined next. **Definition 1** ([32, 33]).: _A function \(F:[0,1]^{K}\rightarrow[0,\infty)\) is said to be a p-merging function if, when the inputs are p-variables, the output is also a p-variable, i.e., we have the inequality_ \[\Pr(F\big{(}p_{c}^{1}(\mathbf{x}^{t}),...,p_{c}^{K}(\mathbf{x}^{t})\big{)}\leq\alpha^{\prime})\leq\alpha^{\prime},\text{ for all }\alpha^{\prime}\in(0,1), \tag{13}\] _where the probability is taken over the joint distribution of the \(K\) input p-variables._ Using the merged p-variable generated as \[p_{c}(\mathbf{x}^{t})=F\big{(}p_{c}^{1}(\mathbf{x}^{t}),...,p_{c}^{K}(\mathbf{x}^{t})\big{)} \tag{14}\] for any p-merging function \(F(\cdot)\), the predictive set can be constructed by following (8). By the definition of a p-merging function, the resulting set predictor also satisfies the reliability condition (4).
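To show how the pieces of this section fit together, the following sketch (a hedged illustration; the array shapes and the helper names `p_variable` and `spikecp_set` are our assumptions) computes the per-model p-variables (12), merges them as in (14), and forms the predictive set (8):

```python
import numpy as np

def p_variable(s_test, s_cal):
    # Eqs. (6)/(12): p-variable of one candidate class under one model.
    # s_test: scalar loss of the class on the test input; s_cal: losses of
    # the true labels on the calibration examples (same model).
    return (1.0 + np.sum(s_test <= s_cal)) / (len(s_cal) + 1.0)

def spikecp_set(s_test, s_cal, alpha, merge):
    # Eq. (8) applied to the merged p-variables of eq. (14).
    # s_test: (K, C) ensemble test losses; s_cal: (K, n_cal) calibration losses.
    K, C = s_test.shape
    kept = []
    for c in range(C):
        p = np.array([p_variable(s_test[k, c], s_cal[k]) for k in range(K)])
        if merge(p) > alpha:      # class stays in the predicted set
            kept.append(c)
    return kept

# One valid p-merging rule: F(p_1,...,p_K) = K * min_k p_k, an instance of
# the generalized-mean family introduced next (r = -inf, a_r = K).
merge_min = lambda p: len(p) * p.min()

rng = np.random.default_rng(2)
s_test, s_cal = rng.random((4, 10)), rng.random((4, 50))
alpha = (1 - 0.9) / 4             # Bonferroni correction over 4 checkpoints
gamma = spikecp_set(s_test, s_cal, alpha, merge_min)
# Stopping rule (10): stop at the first checkpoint with len(gamma) <= I_th.
```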
In the experiments reported in the next section, we focus on the class of p-merging functions of the form [33] \[F(p^{1},...,p^{K})=a_{r}\bigg{(}\frac{1}{K}\sum_{k=1}^{K}\big{(}p^{k}\big{)}^{r}\bigg{)}^{1/r}, \tag{15}\] where \(a_{r}\) is a constant chosen so as to ensure (13), as specified in [33, Table 1]. For example, setting \(r=-\infty\), and correspondingly \(a_{r}=K\), yields the p-merging function \(F(p^{1},...,p^{K})=K\min(p^{1},...,p^{K})\), while setting \(r=\infty\) with \(a_{\infty}=1\) yields \(F(p^{1},...,p^{K})=\max(p^{1},...,p^{K})\). ## IV Experiments and Conclusions For the numerical evaluations, we consider the standard DVS128 Gesture dataset [34] and the CIFAR-10 dataset. The first dataset represents a video recognition task, and the latter an image classification task. The calibration data set \(\mathcal{D}^{\text{cal}}\) is obtained by randomly sampling \(|\mathcal{D}^{\text{cal}}|=50\) examples from the test set, with the rest used for training, which is done via the surrogate gradient method [24]. For both datasets, we adopt the SNN architecture in [35]. The set of possible checkpoints is \(\mathcal{T}_{s}=\{20,40,60,80\}\), the target set size is set to \(I_{\mathrm{th}}=3\) (\(30\%\) and \(27\%\) of the entire label set for CIFAR-10 and DVS128, respectively), and the target accuracy \(p_{\mathrm{targ}}\) is set to \(0.9\). We compare the performance of ensemble-based SpikeCP using DE or VI equipped with confidence merging (CM) or p-variable merging (PM). For comparisons with other adaptive SNN predictors we refer to [8], which shows that the schemes [5, 6] cannot guarantee the condition (4) defined for point predictors. For DE, we follow the standard random initialization made available by PyTorch, while for VI we set the prior distribution to have variance 0.03. The parameter \(r\) in (11) for CM is set to \(1\), yielding standard model averaging [15], while \(r\) in (15) for PM is set to \(r=45\), with \(a_{r}=K^{1/r}\) following [33, Table 1], based on the numerical minimization of latency on a held-out data set. The results are averaged over \(20\) different realizations of the calibration and test data sets. In Fig. 2, we show the accuracy, given by the probability \(\Pr(c\in\Gamma(\mathbf{x}))\) in (4), and the average decision latency as a function of the ensemble size \(K\). Confirming their theoretical properties, all SpikeCP schemes meet the target accuracy \(p_{\text{targ}}=0.9\). Furthermore, the average latency decreases with the ensemble size \(K\), providing substantial improvements as compared to the original SpikeCP scheme with \(K=1\) [8]. VI methods tend to perform better in terms of latency, showcasing the benefits of VI as a more principled approach for Bayesian learning. Finally, PM generally yields smaller latency values as compared to CM, indicating that merging p-variables offers a more efficient information pooling strategy. Fig. 2: Accuracy \(\Pr(c\in\Gamma(\mathbf{x}))\) and normalized latency \(\mathbb{E}[T_{s}(\mathbf{x})]/T\) as a function of the ensemble size \(K\) for the DVS128 Gesture dataset (top row) and the CIFAR-10 dataset (bottom row).
2310.11654
Subject-specific Deep Neural Networks for Count Data with High-cardinality Categorical Features
There is a growing interest in subject-specific predictions using deep neural networks (DNNs) because real-world data often exhibit correlations, which have typically been overlooked in traditional DNN frameworks. In this paper, we propose a novel hierarchical likelihood learning framework for introducing gamma random effects into the Poisson DNN, so as to improve the prediction performance by capturing both nonlinear effects of input variables and subject-specific cluster effects. The proposed method simultaneously yields maximum likelihood estimators for fixed parameters and best unbiased predictors for random effects by optimizing a single objective function. This approach enables a fast end-to-end algorithm for handling clustered count data, which often involve high-cardinality categorical features. Furthermore, state-of-the-art network architectures can be easily implemented into the proposed h-likelihood framework. As an example, we introduce a multi-head attention layer and a sparsemax function, which allows feature selection in high-dimensional settings. To enhance practical performance and learning efficiency, we present an adjustment procedure for prediction of random parameters and a method-of-moments estimator for pretraining of the variance component. Various experimental studies and real data analyses confirm the advantages of our proposed methods.
Hangbin Lee, Il Do Ha, Changha Hwang, Youngjo Lee
2023-10-18T01:54:48Z
http://arxiv.org/abs/2310.11654v1
# Subject-specific Deep Neural Networks for Count Data with High-cardinality Categorical Features ###### Abstract There is a growing interest in subject-specific predictions using deep neural networks (DNNs) because real-world data often exhibit correlations, which have typically been overlooked in traditional DNN frameworks. In this paper, we propose a novel hierarchical likelihood learning framework for introducing gamma random effects into the Poisson DNN, so as to improve the prediction performance by capturing both nonlinear effects of input variables and subject-specific cluster effects. The proposed method simultaneously yields maximum likelihood estimators for fixed parameters and best unbiased predictors for random effects by optimizing a single objective function. This approach enables a fast end-to-end algorithm for handling clustered count data, which often involve high-cardinality categorical features. Furthermore, state-of-the-art network architectures can be easily implemented into the proposed h-likelihood framework. As an example, we introduce a multi-head attention layer and a sparsemax function, which allows feature selection in high-dimensional settings. To enhance practical performance and learning efficiency, we present an adjustment procedure for prediction of random parameters and a method-of-moments estimator for pretraining of the variance component. Various experimental studies and real data analyses confirm the advantages of our proposed methods. ## 1 Introduction Deep neural networks (DNNs), which have been proposed to capture the nonlinear relationship between input and output variables (LeCun et al., 2015; Goodfellow et al., 2016), provide outstanding marginal predictions for independent outputs. However, in practical applications, it is common to encounter correlated data with high-cardinality categorical features, which can pose challenges for DNNs. While the traditional DNN framework overlooks such correlation, random effect models have emerged in statistics to make subject-specific predictions for correlated data. Lee and Nelder (1996) proposed hierarchical generalized linear models (HGLMs), which allow the incorporation of random effects from an arbitrary conjugate distribution of the generalized linear model (GLM) family. Both DNNs and random effect models have been successful in improving the prediction accuracy of linear models, but in different ways. Recently, there has been a rising interest in combining these two extensions. Simchoni and Rosset (2021, 2023) proposed the linear mixed model neural network for continuous (Gaussian) outputs with Gaussian random effects, which allow explicit expressions for likelihoods. Lee and Lee (2023) introduced the hierarchical likelihood (h-likelihood) approach, as an extension of classical likelihood for Gaussian outputs, which provides an efficient likelihood-based procedure. For non-Gaussian (discrete) outputs, Tran et al. (2020) proposed a Bayesian approach for DNNs with normal random effects using the variational approximation method (Bishop and Nasrabadi, 2006; Blei et al., 2017). Mandel et al. (2023) used a quasi-likelihood approach (Breslow and Clayton, 1993) for DNNs, but the quasi-likelihood method has been criticized for its poor prediction accuracy. Lee and Nelder (2001) proposed the use of the Laplace approximation to obtain approximate maximum likelihood estimators (MLEs). Although Mandel et al.
(2023) also applied Laplace approximation for DNNs, their method ignored many terms in the second derivatives due to computational expense, which could lead to inconsistent estimation (Lee et al., 2017). Therefore, a new approach is desired for non-Gaussian DNNs to obtain the exact MLEs for fixed parameters. Clustered count data are widely encountered in various fields (Roulin and Bersier, 2007; Henderson and Shimakura, 2003; Thall and Vail, 1990; Henry et al., 1998), and often involve high-cardinality categorical features, i.e., categorical variables with a large number of unique levels or categories, such as subject ID or cluster name. However, to the best of our knowledge, there appears to be no available source code for subject-specific Poisson DNN models. In this paper, we introduce the Poisson-gamma DNN for clustered count data and derive the h-likelihood that simultaneously provides MLEs of fixed parameters and best unbiased predictors (BUPs) of random effects. In contrast to the ordinary HGLM and DNN frameworks, we found that local minima can cause poor prediction when the DNN model contains subject-specific random effects. To resolve this issue, we propose an adjustment to the random effect prediction that prevents violation of the identifiability constraint. Additionally, we introduce a method-of-moments estimator for pretraining the variance component. It is worth emphasizing that incorporating state-of-the-art network architectures into the proposed h-likelihood framework is straightforward. As an example, we implement a feature selection method with multi-head attention. In Sections 2 and 3, we present the Poisson-gamma DNN and derive its h-likelihood, respectively. In Section 4, we present the algorithm for online learning, which includes an adjustment of random effect predictors, pretraining of variance components, and a feature selection method using a multi-head attention layer. In Section 5, we provide experimental studies to compare the proposed method with various existing methods. The results of the experimental studies clearly show that the proposed method improves the predictions of existing methods. In Section 6, real data analyses demonstrate that the proposed method has the best prediction accuracy on various clustered count data. Proofs for the theoretical results are given in the Appendix. Source code is included in the Supplementary Materials. ## 2 Model Descriptions ### Poisson DNN Let \(y_{ij}\) denote a count output and \(\mathbf{x}_{ij}\) denote a \(p\)-dimensional vector of input features, where the subscript \((i,j)\) indicates the \(j\)th outcome of the \(i\)th subject (or cluster) for \(i=1,...,n\) and \(j=1,...,q_{i}\). For count outputs, the Poisson DNN (Rodrigo and Tsokos, 2020) gives the marginal predictor, \[\eta_{ij}^{m}=\log\mu_{ij}^{m}=\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{ \beta})=\sum_{k=1}^{p_{L}}g_{k}(\mathbf{x}_{ij};\mathbf{w})\beta_{k}+\beta_{0}, \tag{1}\] where \(\mu_{ij}^{m}=\text{E}(Y_{ij}|\mathbf{x}_{ij})\) is the marginal mean, \(\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta})\) is the neural network predictor, \(\mathbf{\beta}=(\beta_{0},\beta_{1},...,\beta_{p_{L}})^{T}\) is the vector of weights and bias between the last hidden layer and the output layer, \(g_{k}(\mathbf{x}_{ij};\mathbf{w})\) denotes the \(k\)-th node of the last hidden layer, and \(\mathbf{w}\) is the vector of all the weights and biases before the last hidden layer.
Here the inverse of the log function, \(\exp(\cdot)\), becomes the activation function of the output layer. Poisson DNNs allow a highly nonlinear relationship between input and output variables, but only provide the marginal predictions for \(\mu_{ij}^{m}\). Thus, the Poisson DNN can be viewed as an extension of the Poisson GLM with \(\eta_{ij}^{m}=\mathbf{x}_{ij}^{T}\mathbf{\beta}\). ### Poisson-gamma DNN To allow subject-specific prediction in the model (1), we propose the Poisson-gamma DNN, \[\eta_{ij}^{c}=\log\mu_{ij}^{c}=\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta })+\mathbf{z}_{ij}^{T}\mathbf{v}, \tag{2}\] where \(\mu_{ij}^{c}=\text{E}(Y_{ij}|\mathbf{x}_{ij},v_{i})\) is the conditional mean, \(\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta})\) is the marginal predictor of the Poisson DNN (1), \(\mathbf{v}=(v_{1},...,v_{n})^{T}\) is the vector of random effects from the log-gamma distribution, and \(\mathbf{z}_{ij}\) is a vector from the model matrix for random effects, representing the high-cardinality categorical features. The conditional mean \(\mu_{ij}^{c}\) can be formulated as \[\mu_{ij}^{c}=\exp\left\{\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta}) \right\}\cdot u_{i},\] where \(u_{i}=\exp(v_{i})\) is the gamma random effect. Note here that, for any \(\epsilon\in\mathbb{R}\), the model (2) can be expressed as \[\log\mu^{c}_{ij}=\sum_{k=1}^{p_{L}}g_{k}(\mathbf{x}_{ij};\mathbf{w})\beta_{k}+ \beta_{0}+v_{i}=\sum_{k=1}^{p_{L}}g_{k}(\mathbf{x}_{ij};\mathbf{w})\beta_{k}+( \beta_{0}-\epsilon)+(v_{i}+\epsilon),\] or equivalently, for any \(\delta=\exp(\epsilon)>0\), \[\mu^{c}_{ij}=\exp\left\{\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta}) \right\}\cdot u_{i}=\exp\left\{\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta} )-\log\delta\right\}\cdot(\delta u_{i}),\] which leads to an identifiability problem. Thus, it is necessary to place a constraint on either the fixed part \(\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta})\) or the random part \(u_{i}\). Lee and Lee (2023) developed subject-specific DNN models with Gaussian outputs, imposing the constraint \(\text{E}(v_{i})=0\), which is common for normal random effects. For Poisson-gamma DNNs, we use the constraint \(\text{E}(u_{i})=\text{E}(\exp(v_{i}))=1\) for subject-specific prediction of count outputs. The constraint \(\text{E}(u_{i})=1\) has the advantage that the marginal predictions for the multiplicative model can be obtained directly, because \[\mu^{m}_{ij}=\text{E}[\text{E}(Y_{ij}|\mathbf{x}_{ij},u_{i})]=\text{E}\left[ \exp\left\{\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta})\right\}\cdot u_{ i}\right]=\exp\left\{\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta})\right\}.\] Thus, we employ \(v_{i}=\log u_{i}\) in the model (2), where \(u_{i}\sim\text{Gamma}(\lambda^{-1},\lambda^{-1})\) with \(\text{E}(u_{i})=1\) and \(\text{var}(u_{i})=\lambda\). By allowing two separate output nodes, the Poisson-gamma DNN provides both marginal and subject-specific predictions, \[\widehat{\mu}^{m}_{ij}=\exp(\widehat{\eta}^{m}_{ij})=\exp\left\{\text{NN}( \mathbf{x}_{ij};\widehat{\mathbf{w}},\widehat{\mathbf{\beta}})\right\}\quad\text {and}\quad\widehat{\mu}^{c}_{ij}=\exp\left\{\text{NN}(\mathbf{x}_{ij};\widehat {\mathbf{w}},\widehat{\mathbf{\beta}})+\mathbf{z}^{T}_{ij}\widehat{\mathbf{v}} \right\},\] where the hats denote the predicted values.
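For concreteness, a minimal Keras sketch of this two-output architecture is given below, with the random effect \(v_{i}\) realized as a one-dimensional embedding looked up by the subject index; the layer sizes and names are illustrative rather than the exact configuration used in our experiments.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_poisson_gamma_dnn(p, n_subjects, hidden=(10, 10, 10)):
    x_in = layers.Input(shape=(p,), name="x_ij")               # input features
    id_in = layers.Input(shape=(1,), dtype="int32", name="i")  # subject index

    h = x_in
    for units in hidden:                                       # MLP body g(x; w)
        h = layers.Dense(units)(h)
        h = layers.LeakyReLU()(h)
    eta_m = layers.Dense(1, name="eta_m")(h)                   # NN(x; w, beta)

    # One free parameter v_i per subject, trained jointly with (w, beta).
    v = layers.Flatten()(layers.Embedding(n_subjects, 1, name="v")(id_in))

    eta_c = layers.Add(name="eta_c")([eta_m, v])               # eta_m + z^T v
    mu_m = layers.Activation("exponential", name="mu_m")(eta_m)
    mu_c = layers.Activation("exponential", name="mu_c")(eta_c)
    return tf.keras.Model([x_in, id_in], [mu_m, mu_c])
```

Training this network with the negative h-likelihood loss derived in Section 4 then yields the MLEs and BUPs simultaneously.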
Subject-specific prediction can be achieved by multiplying the marginal mean predictor \(\widehat{\mu}^{m}_{ij}\) and the subject-specific predictor of the random effect \(\widehat{u}_{i}=\exp(\widehat{v}_{i})\). Note here that \[\text{var}(Y|\mathbf{x})=\text{E}(\text{var}(Y|\mathbf{x},\mathbf{v}))+\text {var}(\text{E}(Y|\mathbf{x},\mathbf{v}))\geq\text{E}(\text{var}(Y|\mathbf{x},\mathbf{v})),\] where \(\text{var}(\text{E}(Y|\mathbf{x},\mathbf{v}))\) represents the between-subject variance and \(\text{E}(\text{var}(Y|\mathbf{x},\mathbf{v}))\) represents the within-subject variance. To enhance the predictions, the Poisson DNN improves the marginal predictor \(\mathbf{\mu}^{m}=\text{E}(Y|\mathbf{x})=\text{E}\{\text{E}(Y|\mathbf{x},\mathbf{v})\}\) by allowing a highly nonlinear function of \(\mathbf{x}\), whereas the Poisson-gamma DNN further uses the conditional predictor \(\mathbf{\mu}^{c}=\text{E}(Y|\mathbf{x},\mathbf{v})\), eliminating the between-subject variance. Figure 1 illustrates an example of the proposed model architecture, including the feature selection of Section 4. Figure 1: An example of the proposed model architecture. The input features are denoted by \(\mathbf{x}_{ij}\) and the high-cardinality categorical features are denoted by \(\mathbf{z}_{ij}\) in the proposed model (2). ## 3 Construction of h-likelihood For subject-specific prediction via random effects, it is important to define the objective function for obtaining exact MLEs of the fixed parameters \(\mathbf{\theta}=(\mathbf{w},\mathbf{\beta},\lambda)\). In the context of linear mixed models, Henderson et al. (1959) proposed to maximize the joint density with respect to fixed and random parameters. However, it cannot yield MLEs of variance components. There have been various attempts to extend joint maximization schemes with different justifications (Gilmour et al., 1985; Harville and Mee, 1984; Schall, 1991; Breslow and Clayton, 1993; Wolfinger, 1993), but they failed to simultaneously obtain the exact MLEs of all fixed parameters and BUPs of random parameters by optimizing a single objective function. It is worth emphasizing that defining the joint density requires careful consideration because of the Jacobian term associated with the random parameters. For \(\mathbf{\theta}\) and \(\mathbf{u}\), an extended likelihood (Lee et al., 2017) can be defined as \[\ell_{e}(\mathbf{\theta},\mathbf{u})=\sum_{i,j}\log f_{\mathbf{\theta}}(y_{ij}|u_{i})+ \sum_{i}\log f_{\mathbf{\theta}}(u_{i}). \tag{3}\] However, a nonlinear transformation \(v_{i}=v(u_{i})\) of the random effects \(u_{i}\) leads to a different extended likelihood due to the Jacobian terms: \[\ell_{e}(\mathbf{\theta},\mathbf{v}) =\sum_{i,j}\log f_{\mathbf{\theta}}(y_{ij}|v_{i})+\sum_{i}\log f_{\mathbf{\theta}}(v_{i})\] \[=\sum_{i,j}\log f_{\mathbf{\theta}}(y_{ij}|u_{i})+\sum_{i}\log f_{ \mathbf{\theta}}(u_{i})+\sum_{i}\log\left|\frac{du_{i}}{dv_{i}}\right|\neq\ell_{e} (\mathbf{\theta},\mathbf{u}).\] The two extended likelihoods \(\ell_{e}(\mathbf{\theta},\mathbf{u})\) and \(\ell_{e}(\mathbf{\theta},\mathbf{v})\) lead to different estimates, raising the question of how to obtain the true MLEs. In Poisson-gamma HGLMs, Lee and Nelder (1996) proposed the use of \(\ell_{e}(\mathbf{\theta},\mathbf{v})\), which can give MLEs for \(\mathbf{\beta}\) and BUPs for \(\mathbf{u}\) by joint maximization. However, it could not yield the MLE for the variance component \(\lambda\).
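As a concrete illustration, for the log transform \(v_{i}=\log u_{i}\) employed in our model, \(|du_{i}/dv_{i}|=u_{i}\), so that \[\ell_{e}(\mathbf{\theta},\mathbf{v})=\ell_{e}(\mathbf{\theta},\mathbf{u})+\sum_{i=1}^{n}v_{i};\] the two candidate objective functions differ by the sum of the random effects themselves.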
In this paper, we derive the new h-likelihood whose joint maximization simultaneously yields MLEs of all the fixed parameters including the variance component \(\lambda\), BUPs of the random effects \(\mathbf{u}\), and conditional expectations \(\mathbf{\mu}^{c}\). Suppose that \(\mathbf{v}^{*}=(v_{1}^{*},...,v_{n}^{*})^{T}\) is a transformation of \(\mathbf{v}\) such that \[v_{i}^{*}=v_{i}\cdot\exp\{-c_{i}(\mathbf{\theta};\mathbf{y}_{i})\},\] where \(c_{i}(\mathbf{\theta};\mathbf{y}_{i})\) is a function of \(\mathbf{\theta}\) and \(\mathbf{y}_{i}=(y_{i1},...,y_{iq_{i}})^{T}\) for \(i=1,2,...,n\). Then we define the h-likelihood as \[h(\mathbf{\theta},\mathbf{v})\equiv\log f_{\mathbf{\theta}}(\mathbf{v}^{*}|\mathbf{y} )+\log f_{\mathbf{\theta}}(\mathbf{y})=\ell_{e}(\mathbf{\theta},\mathbf{v})+\sum_{i=1 }^{n}c_{i}(\mathbf{\theta};\mathbf{y}_{i}), \tag{4}\] if the joint maximization of \(h(\mathbf{\theta},\mathbf{v})\) leads to MLEs of all the fixed parameters and BUPs of the random parameters. A sufficient condition for \(h(\mathbf{\theta},\mathbf{v})\) to yield exact MLEs of all the fixed parameters in \(\mathbf{\theta}\) is that \(f_{\mathbf{\theta}}(\widetilde{\mathbf{v}}^{*}|\mathbf{y})\) is independent of \(\mathbf{\theta}\), where \(\widetilde{\mathbf{v}}^{*}\) is the mode, \[\widetilde{\mathbf{v}}^{*}=\operatorname*{arg\,max}_{\mathbf{v}^{*}}h(\mathbf{ \theta},\mathbf{v}^{*})=\operatorname*{arg\,max}_{\mathbf{v}^{*}}\log f_{\mathbf{ \theta}}(\mathbf{v}^{*}|\mathbf{y}).\] For the proposed model, the Poisson-gamma DNN, we found that the following function \[c_{i}(\mathbf{\theta};\mathbf{y}_{i})=c_{i}(\lambda;y_{i+})=(y_{i+}+\lambda^{-1}) +\log\Gamma(y_{i+}+\lambda^{-1})-(y_{i+}+\lambda^{-1})\log(y_{i+}+\lambda^{-1})\] satisfies the sufficient condition, \[\log f(\widetilde{\mathbf{v}}^{*}|\mathbf{y})=\sum_{i=1}^{n}\log f_{\mathbf{ \theta}}(\widetilde{v}_{i}^{*}|\mathbf{y})=\sum_{i=1}^{n}\left\{\log f_{\mathbf{ \theta}}(\widetilde{v}_{i}|\mathbf{y})+c_{i}(\mathbf{\theta};\mathbf{y}_{i}) \right\}=0,\] where \(y_{i+}=\sum_{j=1}^{q_{i}}y_{ij}\) is the sum of outputs in \(\mathbf{y}_{i}\) and \(\widetilde{v}_{i}=\widetilde{v}_{i}^{*}\cdot\exp\{c_{i}(\mathbf{\theta};\mathbf{y}_ {i})\}\). Then, the h-likelihood at the mode, \(h(\mathbf{\theta},\widetilde{\mathbf{v}})\), becomes the classical (marginal) log-likelihood, \[\ell(\mathbf{\theta};\mathbf{y})=\log f_{\mathbf{\theta}}(\mathbf{y})=\log\int\exp\left\{ \ell_{e}(\mathbf{\theta},\mathbf{v})\right\}d\mathbf{v}. \tag{5}\] Thus, joint maximization of the h-likelihood (4) provides exact MLEs for the fixed parameters \(\mathbf{\theta}\), including the variance component \(\lambda\). BUPs of \(\mathbf{u}\) and \(\mathbf{\mu}^{c}\) can also be obtained from our h-likelihood, \[\widetilde{\mathbf{u}}=\exp(\widetilde{\mathbf{v}})=\mathrm{E}(\mathbf{u}| \mathbf{y})\quad\text{and}\quad\widetilde{\mathbf{\mu}}^{c}=\exp(\widetilde{ \mathbf{v}})\cdot\exp\left\{\text{NN}(\mathbf{X};\mathbf{w},\mathbf{\beta})\right\}= \mathrm{E}(\mathbf{\mu}^{c}|\mathbf{y}).\] The proof and technical details for the theoretical results are given in Appendix A.1. ## 4 Learning algorithm for Poisson-gamma DNN models In this section, we introduce the h-likelihood learning framework for handling count data with high-cardinality categorical features. We decompose the negative h-likelihood loss for online learning. The entire learning algorithm of the proposed method is briefly described in Algorithm 1.
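As a point of reference for the algorithm below, the BUP \(\widetilde{u}_{i}\) has a familiar closed form in this model: standard gamma-Poisson conjugacy (a sketch, treating the marginal means \(\mu^{m}_{ij}\) as fixed) gives \[u_{i}|\mathbf{y}_{i}\sim\text{Gamma}\left(\lambda^{-1}+y_{i+},\;\lambda^{-1}+\mu^{m}_{i+}\right)\quad\text{and}\quad\widetilde{u}_{i}=\mathrm{E}(u_{i}|\mathbf{y}_{i})=\frac{y_{i+}+\lambda^{-1}}{\mu^{m}_{i+}+\lambda^{-1}},\] where \(\mu^{m}_{i+}=\sum_{j=1}^{q_{i}}\mu^{m}_{ij}\), so the predicted random effect is the observed-to-expected count ratio shrunk toward one, with more shrinkage for smaller \(\lambda\).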
### Loss function for online learning The proposed Poisson-gamma DNN can be trained by optimizing the negative h-likelihood loss, \[\text{Loss}=-h(\mathbf{\theta},\mathbf{v})=-\log f_{\mathbf{\theta}}(\mathbf{y}|\mathbf{v })-\log f_{\mathbf{\theta}}(\mathbf{v})-c(\mathbf{\theta};\mathbf{y}),\] which is a function of the two separate output nodes \(\eta_{ij}^{m}=\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta})\) and \(v_{i}=\mathbf{z}_{ij}^{T}\mathbf{v}\). To apply online stochastic optimization methods, the proposed loss function is expressed as \[\text{Loss}=\sum_{i,j}\left[-y_{ij}\left(\log\mu_{ij}^{m}+v_{i}\right)+e^{v_{i} }\mu_{ij}^{m}-\frac{v_{i}-e^{v_{i}}}{q_{i}\lambda}+a_{i}(\lambda;\mathbf{y}_{i })\right], \tag{6}\] where \(a_{i}(\lambda;\mathbf{y}_{i})=q_{i}^{-1}\left\{\lambda^{-1}\log\lambda+\log \Gamma(\lambda^{-1})-c_{i}(\lambda;y_{i+})\right\}.\) ### Random Effect Adjustment While DNNs often encounter local minima, Dauphin et al. (2014) claimed that in ordinary DNNs, local minima may not necessarily result in poor predictions. In contrast to HGLMs and DNNs, we observed that local minima can lead to poor prediction when the network reflects subject-specific random effects. In Poisson-gamma DNNs, we impose the constraint \(\text{E}(u_{i})=1\) for identifiability, because for any \(\delta>0\), \[\mu_{ij}^{c}=\exp\left\{\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta})\right\} \cdot u_{i}=\left[\exp\left\{\text{NN}(\mathbf{x}_{ij};\mathbf{w},\mathbf{\beta}) -\log\delta\right\}\right]\cdot\left(\delta u_{i}\right).\] However, in practice, Poisson-gamma DNNs often end up in local minima that violate the constraint. To prevent poor prediction due to such local minima, we introduce an adjustment to the predictors of \(u_{i}\), \[\widehat{u}_{i}\leftarrow\frac{\widehat{u}_{i}}{\frac{1}{n}\sum_{i=1}^{n} \widehat{u}_{i}}\quad\text{and}\quad\widehat{\beta}_{0}\leftarrow\widehat{ \beta}_{0}+\log\left(\frac{1}{n}\sum_{i=1}^{n}\widehat{u}_{i}\right) \tag{7}\] to satisfy \(\sum_{i=1}^{n}\widehat{u}_{i}/n=1\). The following theorem shows that the proposed adjustment improves the local h-likelihood prediction. The proof is given in Appendix A.2. **Theorem 1**.: _In Poisson-gamma DNNs, suppose that \(\widehat{\beta}_{0}\) and \(\widehat{u}_{i}\) are estimates of \(\beta_{0}\) and \(u_{i}\) such that \(\sum_{i=1}^{n}\widehat{u}_{i}/n=1+\epsilon\) for some \(\epsilon\in\mathbb{R}\). Let \(\widehat{u}_{i}^{*}\) and \(\widehat{\beta}_{0}^{*}\) be the adjusted estimators in (7). Then,_ \[h(\widehat{\mathbf{\theta}}^{*},\widehat{\mathbf{v}}^{*})\geq h(\widehat{\mathbf{ \theta}},\widehat{\mathbf{v}}),\] _and the equality holds if and only if \(\epsilon=0\), where \(\widehat{\mathbf{\theta}}\) and \(\widehat{\mathbf{\theta}}^{*}\) are vectors of the same fixed parameter estimates but with different \(\widehat{\beta}_{0}\) and \(\widehat{\beta}_{0}^{*}\) for \(\beta_{0}\), respectively._ Theorem 1 shows that the adjustment (7) improves the random effect prediction. In our experience, though limited, this adjustment becomes important especially when the cluster size is large. Figure 2 is the plot of \(\widehat{u}_{i}\) against the true \(u_{i}\) under \((n,q)=(100,100)\) and \(\lambda=1\). Figure 2 (a) shows that the use of fixed effects for subject-specific effects (PF-NN) produces poor prediction of \(u_{i}\).
Figure 2 (b) and (c) show that the use of random effects for subject-specific effects (PG-NN) improves the subject-specific prediction, and the proposed adjustment improves it further. Figure 2: Predicted values of \(u_{i}\) from two replications (marked as o and x for each) when \(u_{i}\) is generated from the Gamma distribution with \(\lambda=1\), \(n=100\), \(q=100\). ### Pretraining variance components We found that the MLE for the variance component \(\lambda=\text{var}(u_{i})\) could be sensitive to the choice of initial value, leading to slow convergence. We propose the use of a method-of-moments estimator (MME) for pretraining \(\lambda\), \[\widehat{\lambda}=\left[\frac{1}{n}\sum_{i=1}^{n}(\widehat{u}_{i}-1)^{2}\right] \left[\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{n\sum_{i}^{n}\widehat{\mu}_{i+}^{-1} (\widehat{u}_{i}-1)^{2}}{\left\{\sum_{i}^{n}(\widehat{u}_{i}-1)^{2}\right\}^{2 }}}\right], \tag{8}\] where \(\widehat{\mu}_{i+}=\sum_{j=1}^{q_{i}}\widehat{\mu}_{ij}^{m}\). Convergence of the MME (8) is shown in Appendix A.3. Figure 3 shows that the proposed pretraining accelerates the convergence in various settings. Figure 3: Learning curve for the variance component \(\lambda\) when (a) \(\lambda=0\), (b) \(\lambda=0.5\), and (c) \(\lambda=1\). In Appendix A.4, we present additional experiments verifying the consistency of \(\widehat{\lambda}\) under the proposed method. ### Feature Selection in High Dimensional Settings Feature selection methods can be easily implemented in the proposed PG-NN. As an example, we implemented feature selection using the multi-head attention layer with the sparsemax function (Martins and Astudillo, 2016; Skrlj et al., 2020; Arik and Pfister, 2021), \[\text{sparsemax}(\mathbf{z})=\operatorname*{arg\,min}_{\mathbf{p}\in\Delta^{K-1}}|| \mathbf{p}-\mathbf{z}||^{2},\] where \(\Delta^{K-1}=\{\mathbf{p}\in\mathbb{R}^{K}:\mathbf{1}^{T}\mathbf{p}=1,\mathbf{ p}\geq 0\}\). As a high-dimensional setting, we generate input features \(x_{kij}\) from \(N(0,1)\) for \(k=1,...,100\), including 10 genuine features \((k\leq 10)\) and 90 irrelevant features \((k>10)\). The output is generated from \(\text{Poi}(\mu_{ij}^{c})\) with the mean model, \[\mu_{ij}^{c}=u_{i}\cdot\exp\left[0.2\left\{1+\cos x_{1ij}+\cdots+\cos x_{6ij}+(x _{7ij}^{2}+1)^{-1}+\cdots+(x_{10ij}^{2}+1)^{-1}\right\}\right],\] where \(u_{i}=e^{v_{i}}\) is generated from \(\text{Gamma}(2,2)\). The number of subjects is \(n=10^{4}\), i.e. the cardinality of the categorical feature is \(10^{4}\). The number of repeated measures (cluster size) is set to \(q=20\), which is smaller than the number of features \(p=100\). We employed a multi-layer perceptron with 20-10-10 nodes and a three-head attention layer for feature selection. Other details are described in Section 5. Figure 4 shows the average attention scores over 50 repetitions. It is evident that all the genuine features are ranked in the top 10. ## 5 Experimental Studies To investigate the performance of the Poisson-gamma DNN, we conducted experimental studies. The five input variables \(\mathbf{x}_{ij}=(x_{1ij},...,x_{5ij})^{T}\) are generated from the AR(1) process with autocorrelation \(\rho=0.5\) for each \(i=1,...,n\) and \(j=1,...,q\). The random effects are generated from either \(u_{i}\sim\text{Gamma}(\lambda^{-1},\lambda^{-1})\) or \(v_{i}\sim\text{N}(0,\lambda)\) where \(v_{i}=\log u_{i}\). When \(\lambda=0\), the conditional mean \(\mu_{ij}^{c}\) is identical to the marginal mean \(\mu_{ij}^{m}\).
The output variable \(y_{ij}\) is generated from \(\text{Poisson}(\mu_{ij}^{c})\) with \[\mu_{ij}^{c}=u_{i}\cdot\exp\left[0.2\left\{1+\cos x_{1ij}+\cos x_{2ij}+\cos x_ {3ij}+(x_{4ij}^{2}+1)^{-1}+(x_{5ij}^{2}+1)^{-1}\right\}\right].\] Results are based on 100 sets of simulated data. The data consist of \(q=10\) observations for \(n=1000\) subjects. For each subject, 6 observations are assigned to the training set, 2 are assigned to the validation set, and the remaining 2 are assigned to the test set. For comparison, we consider the following models. * **P-GLM** Classic Poisson GLM for count outputs using R. * **N-NN** Conventional DNN for continuous outputs. * **P-NN** Poisson DNN for count outputs. * **PN-GLM** Poisson-normal HGLM using the lme4 (Bates et al., 2015) package in R. * **PG-GLM** Poisson-gamma HGLM using the proposed method. * **NF-NN** Conventional DNN with fixed subject-specific effects for continuous outputs. * **NN-NN** DNN with normal random effects for continuous outputs (Lee and Lee, 2023). * **PF-NN** Conventional Poisson DNN with fixed subject-specific effects for count outputs. * **PG-NN** The proposed Poisson-gamma DNN for count outputs. To evaluate the prediction performances, we consider the root mean squared Pearson error (RMSPE) \[\text{RMSPE}=\sqrt{\frac{1}{N}\sum_{i,j}\frac{(y_{ij}-\widehat{\mu}_{ij})^{2}} {V(\widehat{\mu}_{ij})}},\] where \(\text{Var}(y_{ij}|u_{i})=\phi V(\mu_{ij})\) and \(\phi\) is a dispersion parameter of the GLM family. For Gaussian outputs, the RMSPE is identical to the ordinary root mean squared error, since \(\phi=\sigma^{2}\) and \(V(\mu_{ij})=1\). For Poisson outputs, since \(\phi=1\) and \(V(\mu_{ij})=\mu_{ij}\), the RMSPE for the test set is given by \[\text{RMSPE}=\sqrt{\frac{1}{N_{\text{test}}}\sum_{(i,j)\in\text{test}}\frac{(y_{ij}- \widehat{\mu}_{ij})^{2}}{\widehat{\mu}_{ij}}}.\] Figure 4: Average attention scores. First 10 features are genuine and the others are irrelevant. P-GLM, N-NN, and P-NN give marginal predictions \(\widehat{\mu}_{ij}=\widehat{\mu}_{ij}^{m}\), while the others give subject-specific predictions \(\widehat{\mu}_{ij}=\widehat{\mu}_{ij}^{c}\). N-NN, NF-NN, and NN-NN are models for continuous outputs, while the others are models for count outputs. For NF-NN and PF-NN, predictions are made by maximizing the conditional likelihood \(\sum_{i,j}\log f_{\mathbf{\theta}}(y_{ij}|v_{i})\). On the other hand, for PN-GLM, PG-GLM, NN-NN, and PG-NN, subject-specific predictions are made by maximizing the h-likelihood. PN-GLM is the generalized linear mixed model with random effects \(v_{i}\sim N(0,\lambda)\). Current statistical software for PN-GLM and PG-GLM (lme4 and dhglm) provides approximate MLEs using Laplace approximation. The proposed learning algorithm can yield exact MLEs for PG-GLM, using solely the input and output layers while excluding the hidden layers. Among various methods for NN-NN (Tran et al., 2020; Mandel et al., 2023; Simchoni and Rosset, 2021, 2023; Lee and Lee, 2023), we applied the state-of-the-art method proposed by Lee and Lee (2023). All the DNNs and PG-GLMs were implemented in Python using Keras (Chollet et al., 2015) and TensorFlow (Abadi et al., 2015). For all DNNs, we employed a standard multi-layer perceptron (MLP) consisting of 3 hidden layers with 10 neurons and the leaky ReLU activation function. We applied the Adam optimizer with a learning rate of 0.001 and an early stopping process based on the validation loss while training the DNNs. NVIDIA Quadro RTX 6000 GPUs were used for computation.
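As a concrete companion to the training objective (6) and the evaluation metric above, the following is a minimal TensorFlow sketch operating on per-observation tensors; \(\lambda\) is passed explicitly here, whereas in practice it would be a trainable variable kept positive (e.g. via softplus), and all tensor names are illustrative.

```python
import numpy as np
import tensorflow as tf

def neg_hlik_loss(y, eta_m, v, q_i, y_plus, lam):
    """Negative h-likelihood loss (6), summed over a batch.
    y: counts; eta_m: log mu^m_ij; v: the subject's random effect v_i
    repeated per observation; q_i: the subject's cluster size; y_plus:
    the subject's total count y_{i+}. All tensors share the batch shape."""
    inv = 1.0 / lam
    c_i = (y_plus + inv) + tf.math.lgamma(y_plus + inv) \
        - (y_plus + inv) * tf.math.log(y_plus + inv)
    a_i = (inv * tf.math.log(lam) + tf.math.lgamma(inv) - c_i) / q_i
    return tf.reduce_sum(-y * (eta_m + v) + tf.exp(v + eta_m)
                         - (v - tf.exp(v)) / (q_i * lam) + a_i)

def rmspe_poisson(y_true, mu_hat):
    """Test RMSPE for Poisson outputs: sqrt(mean((y - mu)^2 / mu))."""
    y_true, mu_hat = np.asarray(y_true, float), np.asarray(mu_hat, float)
    return float(np.sqrt(np.mean((y_true - mu_hat) ** 2 / mu_hat)))
```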
Table 1 shows the mean and standard deviation of test RMSPEs from the experimental studies. When the true model does not have random effects (G(0) and N(0)), the PG-NN is comparable to the P-NN without random effects, which should perform the best (marked in bold face) in terms of RMSPE. N-NN (P-NN) without random effects is also better than NF-NN and NN-NN (PF-NN and PG-NN) with random effects. When the distribution of random effects is correctly specified (G(0.5) and G(1)), the PG-NN performs the best in terms of RMSPE. Even when the distribution of random effects is misspecified (N(0.5), N(1)), the PG-NN still performs the best. This result is in accordance with the simulation results of McCulloch and Neuhaus (2011): in GLMMs, the prediction accuracy is little affected by violations of the distributional assumption on the random effects; see the similar performances of PN-GLM and PG-GLM. It has been known that handling high-cardinality categorical features as random effects has advantages over handling them as fixed effects (Lee et al., 2017), especially when the cardinality of the categorical feature is close to the sample size, i.e., when the number of observations in each category (cluster size \(q\)) is relatively small. Thus, to emphasize the advantages of PG-NN over PF-NN for high-cardinality categorical features, we consider two additional scenarios for the experimental study, with cluster sizes \(q_{\text{train}}=3\) and \(q_{\text{train}}=1\), where \(\lambda=0.2\) and \(n=1000\). The mean and standard deviation of the RMSPE of PF-NN are 1.269 (0.038) and 1.629 (0.086) for \(q_{\text{train}}=3\) and \(q_{\text{train}}=1\), respectively. Those of PG-NN are 1.124 (0.028) and 1.284 (0.049) for each scenario. Therefore, the proposed method enhances subject-specific predictions as the cardinality of the categorical features becomes high. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{5}{c}{Distribution of random effects (\(\lambda\))} \\ \cline{2-6} Model & G(0) \& N(0) & G(0.5) & G(1) & N(0.5) & N(1) \\ \hline P-GLM & 1.046 (0.029) & 1.501 (0.055) & 1.845 (0.085) & 1.745 (0.113) & 2.818 (0.467) \\ N-NN & 1.013 (0.018) & 1.473 (0.042) & 1.816 (0.074) & 1.713 (0.097) & 1.143 (0.432) \\ P-NN & **1.011 (0.018)** & 1.470 (0.042) & 1.812 (0.066) & 1.711 (0.099) & 1.161 (0.440) \\ PN-GLM & 1.048 (0.029) & 1.112 (0.033) & 1.115 (0.035) & 1.124 (0.030) & 1.152 (0.034) \\ PG-GLM & 1.048 (0.020) & 1.123 (0.027) & 1.106 (0.023) & 1.139 (0.026) & 1.161 (0.028) \\ NF-NN & 1.152 (0.029) & 1.301 (0.584) & 1.136 (0.311) & 1.241 (1.272) & 1.402 (0.298) \\ NN-NN & 1.020 (0.020) & 1.121 (0.026) & 1.209 (0.067) & 1.256 (0.097) & 2.773 (0.384) \\ PF-NN & 1.147 (0.025) & 1.135 (0.029) & 1.128 (0.027) & 1.129 (0.024) & 1.128 (0.027) \\ PG-NN & 1.016 (0.019) & **1.079 (0.024)** & **1.084 (0.023)** & **1.061 (0.022)** & **1.085 (0.026)** \\ \hline \hline \end{tabular} \end{table} Table 1: Mean and standard deviation of test RMSPEs of simulation studies over 100 replications. G(0) implies the absence of random effects, i.e., \(v_{i}=0\) for all \(i\). Bold numbers indicate the minimum.
## 6 Real Data Analysis To investigate the prediction performance on clustered count outputs in practice, we examined the following five real datasets: * **Epilepsy data**(Thall and Vail, 1990) * **CD4 data**(Henry et al., 1998) * **Bolus data**(Henderson and Shimakura, 2003) * **Owls data**(Roulin and Bersier, 2007) * **Fruits data**(Santa et al., 2010) For all the DNNs, a standard MLP with one hidden layer of 10 neurons and a sigmoid activation function was employed. For the longitudinal data (Epilepsy, CD4, Bolus), the last observation for each patient was used as the test set. For the clustered data (Owls, Fruits), an observation was randomly selected as the test set from each cluster. RMSPEs are reported in Table 2, which shows that the subject-specific models for count data (PG-NN and PG-GLM) perform the best. Throughout the datasets, P-GLM performs better than P-NN, implying that a nonlinear model does not improve on the linear model in the absence of subject-specific random effects. Meanwhile, in the presence of subject-specific random effects, PG-NN is always preferred to PG-GLM except on the Fruits data. The results imply that introducing subject-specific random effects in DNNs can help to identify the nonlinear effects of the input variables. Therefore, while DNNs are widely recognized for improving predictions on independent datasets, introducing subject-specific random effects could be necessary for DNNs to improve their predictions on correlated datasets with high-cardinality categorical features. ## 7 Concluding Remarks When the data contain high-cardinality categorical features, introducing random effects into DNNs is advantageous. We develop a subject-specific Poisson-gamma DNN for clustered count data. The h-likelihood enables a fast end-to-end learning algorithm using a single objective function. By introducing subject-specific random effects, DNNs can effectively identify the nonlinear effects of the input variables. Various state-of-the-art network architectures can be easily implemented into the h-likelihood framework, as we demonstrate with the feature selection based on multi-head attention.
2302.13520
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
Bit-flip attacks (BFAs) have attracted substantial attention recently, in which an adversary could tamper with a small number of model parameter bits to break the integrity of DNNs. To mitigate such threats, a batch of defense methods are proposed, focusing on the untargeted scenarios. Unfortunately, they either require extra trustworthy applications or make models more vulnerable to targeted BFAs. Countermeasures against targeted BFAs, stealthier and more purposeful by nature, are far from well established. In this work, we propose Aegis, a novel defense method to mitigate targeted BFAs. The core observation is that existing targeted attacks focus on flipping critical bits in certain important layers. Thus, we design a dynamic-exit mechanism to attach extra internal classifiers (ICs) to hidden layers. This mechanism enables input samples to early-exit from different layers, which effectively upsets the adversary's attack plans. Moreover, the dynamic-exit mechanism randomly selects ICs for predictions during each inference to significantly increase the attack cost for the adaptive attacks where all defense mechanisms are transparent to the adversary. We further propose a robustness training strategy to adapt ICs to the attack scenarios by simulating BFAs during the IC training phase, to increase model robustness. Extensive evaluations over four well-known datasets and two popular DNN structures reveal that Aegis could effectively mitigate different state-of-the-art targeted attacks, reducing attack success rate by 5-10$\times$, significantly outperforming existing defense methods.
Jialai Wang, Ziyuan Zhang, Meiqi Wang, Han Qiu, Tianwei Zhang, Qi Li, Zongpeng Li, Tao Wei, Chao Zhang
2023-02-27T05:15:02Z
http://arxiv.org/abs/2302.13520v1
# Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks ###### Abstract Bit-flip attacks (BFAs) have attracted substantial attention recently, in which an adversary could tamper with a small number of model parameter bits to break the integrity of DNNs. To mitigate such threats, a batch of defense methods are proposed, focusing on the untargeted scenarios. Unfortunately, they either require extra trustworthy applications or make models more vulnerable to targeted BFAs. Countermeasures against targeted BFAs, stealthier and more purposeful by nature, are far from well established. In this work, we propose Aegis, a novel defense method to mitigate targeted BFAs. The core observation is that existing targeted attacks focus on flipping critical bits in certain important layers. Thus, we design a dynamic-exit mechanism to attach extra internal classifiers (ICs) to hidden layers. This mechanism enables input samples to early-exit from different layers, which effectively upsets the adversary's attack plans. Moreover, the dynamic-exit mechanism randomly selects ICs for predictions during each inference to significantly increase the attack cost for the adaptive attacks where all defense mechanisms are transparent to the adversary. We further propose a robustness training strategy to adapt ICs to the attack scenarios by simulating BFAs during the IC training phase, to increase model robustness. Extensive evaluations over four well-known datasets and two popular DNN structures reveal that Aegis could effectively mitigate different state-of-the-art targeted attacks, reducing attack success rate by 5-10\(\times\), significantly outperforming existing defense methods. We open source the code of Aegis1. Footnote 1: [https://github.com/wjl12wjl/Aegis.git](https://github.com/wjl12wjl/Aegis.git) ## 1 Introduction The recent revolutionary development of deep neural network (DNN) models has promoted various security- and safety-sensitive intelligent applications, such as autonomous driving [52], AI on satellites [13], and medical diagnostics [55]. An adversary could manipulate the data used by DNN models, or the model parameters, to launch various attacks. The security and robustness of DNN models have become key factors affecting the deployment of these systems. A significant amount of research effort has been devoted to protecting DNN models from data-oriented attacks, e.g. adversarial attacks [7, 14, 23, 40, 41, 57] that manipulate inference data, or DNN backdoor attacks [2, 11, 33, 35, 37, 39, 53, 68] that manipulate training data. These efforts can secure the model against data-oriented threats, but little attention has been paid to mitigating the emerging parameter-oriented attacks. Recent studies have shown that well-trained DNN models are vulnerable to parameter-oriented attacks, which tamper with model parameters [8, 45, 46, 47, 48, 64]. For instance, flipping a small number of critical bits (i.e. \(0\to 1\) or \(1\to 0\)) of off-the-shelf DNN model parameters can trigger catastrophic changes in the inference process [20, 45], lowering the prediction accuracy or manipulating the inference to arbitrary target labels. These bit-flip attacks (BFAs) have been demonstrated in real-world scenarios. DeepHammer [64] performs BFAs on a PC via rowhammer. Also, BFAs have been performed on multi-tenant FPGA devices in cloud-based machine learning services [48]. Current state-of-the-art BFAs can be classified into untargeted and targeted attacks.
The _untargeted_ BFAs aim to compromise the victim model's accuracy down to the random-guess level [45, 48, 64]. For instance, with optimized critical bit search algorithms, the BFA in [45] needs to flip only 13 out of 93 million bits of an 8-bit quantized ResNet-18 model on ImageNet, to degrade its top-1 accuracy from 69.8% to 0.1%. In comparison, the _targeted_ BFAs are stealthier, misleading target models to target labels on specific samples (i.e., sample-wise attacks) or samples with special triggers (i.e., backdoor attacks) while preserving model accuracy for other samples. For instance, the sample-wise BFA [4] needs to flip fewer than 8 critical bits on average in an 8-bit quantized ResNet-18 model on ImageNet to manipulate the prediction of specific input samples. Existing defenses for data-oriented attacks, e.g., adversarial training, have proven not useful for mitigating BFAs [19]. Instead, a small number of dedicated defenses have been proposed to mitigate BFAs, which can be classified into two categories, namely integrity verification and model enhancement. First, integrity verification-based approaches [25, 36, 38] verify the integrity of model parameters at runtime to detect BFAs. For instance, HashTAG [25] verifies the integrity of model parameters on the fly by extracting and comparing the unique signatures of the original and the runtime DNN models. A low-collision hashing scheme could be used for generating signatures and achieving almost zero false positives. This type of approach could detect any attempts to tamper with the models and thus could defeat both targeted and untargeted BFAs. However, such approaches in general require an additional _trusted_ and _secure_ monitoring process to _continuously_ monitor the target DNN model. This introduces additional performance overhead and resource cost, and is not applicable to commercial off-the-shelf devices and practical scenarios. Second, model enhancement-based approaches [67, 32] focus on improving the robustness of target models directly, making BFAs difficult or impossible to launch. Note that precisely flipping a large number of bits via hardware-level attacks is not practical [64]. An important metric for evaluating the difficulty or cost of BFAs is _the number of bits to flip_ (e.g. DeepHammer [64] sets 24 bits as the maximum number of bits that can be flipped). Thus, this approach aims to significantly increase the number of bits to flip for achieving the same attack goals. One promising approach is to quantize the model parameters to constrain the weights' value range that may potentially be changed by BFAs. For instance, a binarization architecture, BNN, is proposed in [19] to retrain the model from scratch to generate a model with weights only equal to -1 or +1. An enhanced version, RA-BNN [49], further extends the quantization to the activation function outputs with -1 or +1. This approach can significantly increase the difficulty of untargeted attacks (e.g. requiring 40 \(\sim\) 300\(\times\) more bit flips in [19], which is infeasible). However, this solution can only mitigate untargeted attacks, while making the model even more vulnerable to targeted attacks. According to our experiments, TBT attacks [46] can achieve a similar success rate by flipping even fewer bits in a binarization-trained model (from 206 bits in the vanilla model to 50 bits in the corresponding binarization model; see details in Section 5.3). In this work, we propose Aegis2, a novel method to mitigate different targeted BFAs.
Our key observation is that existing targeted BFAs achieve their goals by locating the most critical bits according to the model inference process. Particularly, they flip the critical bits in either the final layer of the target model or the most important layer determined by some optimization methods. Based on this observation, we design our solution using a _dynamic multi-exit_ architecture that trains extra internal classifiers (ICs) for hidden layers [27]. These ICs can distribute the early exits of input samples over different hidden layers of the target model. This can mitigate the existing attacks which flip bits in one specific layer. Furthermore, considering adaptive BFAs where the defense is transparent, adversaries could use the sample exit distribution to locate critical hidden layers and flip their critical bits. Aegis can mitigate such adaptive BFAs by dynamically masking certain exits. Lastly, we design robust training on ICs to simulate the attacks in which the critical bits in each hidden layer are flipped. This can defeat more sophisticated adaptive BFAs that include all exits to flip critical bits in all layers of the model. Footnote 2: Aegis is a powerful shield carried by Athena and Zeus in Greek mythology, which can defeat various attacks from thunders, hammers, etc. Aegis aims to achieve three important goals, i.e. being _non-intrusive_, _platform-independent_, and _utility-preserving_. It can protect the model without modifying any of its parameters. This excludes the defenses that retrain the model from scratch, which is either too costly on a large-scale dataset or infeasible when training datasets are unavailable. We design Aegis at the application level without requiring any additional reliable program or hardware protection, making it generally applicable. For utility preservation, Aegis has negligible impact on the prediction accuracy. We conduct extensive experiments to evaluate Aegis against three state-of-the-art targeted BFAs with different goals, as well as their potential adaptive attack counterparts. We consider two well-known model architectures (ResNet-32 and VGG-16) and four datasets (CIFAR-10, CIFAR-100, STL-10, and Tiny-ImageNet). The results show that we can mitigate different BFAs by significantly increasing the number of bits to flip (e.g. forcing 35\(\times\) more bit flips to achieve an attack success rate similar to [8]) or by reducing their attack success rate to a low level (e.g. keeping the attack success rate lower than 4% with a similar number of bits flipped as in [4]). ## 2 Background ### Targeted Bit-flip Attacks We introduce three state-of-the-art targeted BFAs, i.e., the TBT attack [46], the ProFlip attack [8] and the TA-LBF attack [4]. **TBT**[46] is a targeted BFA that injects backdoors into the target model through flipping bits. The attacker's goal is that the compromised model still operates with normal inference accuracy on benign inputs but makes mistakes on samples with specific triggers. Specifically, when the adversary embeds the trigger into any input, the model is forced to classify this input to a certain target class. Note that this method only flips bits in the final layer of the target model. In the final layer, the adversary first selects \(w_{b}\) critical network neurons which have the most significant impact on the target class, then generates a specific trigger to activate these neurons. Finally, the adversary formalizes an optimization problem to modify critical bits corresponding to these neurons.
**ProFlip**[8] inserts a backdoor into the target model by flipping bits in the network weights, to manipulate the prediction of all inputs attached with the trigger to a certain target class. This method could flip bits in all the layers of the model by selecting salient neurons through forward derivative-based saliency map construction (also known as the Jacobian saliency map attack (JSMA) [43]). Then the adversary uses the gradient descent method to generate triggers, which can stimulate salient neurons to large values. Finally, ProFlip proposes an efficient retrieval algorithm to select the optimal parameter, and determines the critical bits in that parameter to flip. **TA-LBF**[4] does not need a trigger but only misclassifies a specific sample to a target class by flipping the critical bits of the parameters, which makes the attack stealthier than TBT and ProFlip. The adversary formalizes the attack as binary integer programming since the parameters are stored as binary bits (i.e., 0 and 1) in the memory. It further equivalently reformulates this binary integer programming problem as a continuous optimization problem, and uses the alternating direction method of multipliers (ADMM) [63] to solve the optimization problem and determine the critical bits to flip. ### Existing Defense and Analysis Existing defense methods could be categorized into two types. Details are given as follows. **Model enhancement.**Aegis also falls into this defense category. Li et al. [32] adopt a weight reconstruction method, which can diffuse the changed values on several parameters to multiple parameters, thus mitigating the effects brought by untargeted BFAs. Zhan et al. [67] modify the rectified linear unit (ReLU), a commonly used activation function in DNNs, to tolerate the faults incurred by bit-flipping on weights. The above two defense methods have been proven less effective than binarization strategies such as BIN [19] and RA-BNN [49]. This strategy applies binarization-aware training [50] to retrain a binarization model from scratch to mitigate untargeted BFAs. Its point is to constrain the range of parameter values to force attackers to flip more bits in order to achieve the same attack success rate. Specifically, BIN [19] converts a part of the model parameters from high precision, e.g., 32-bit floating-point, to a binary format (\(\{-1,+1\}\)). RA-BNN [49] uses a more aggressive way to further quantize the output of activation functions to \(\{-1,+1\}\) as well. Although these methods can effectively mitigate untargeted attacks, they still have three limitations. First, they require retraining a target model from scratch, which introduces significant computation costs. Second, aggressive precision reduction on models will affect model accuracy. Third, more importantly, they make the model even more vulnerable to targeted attacks such as TBT [46] (see Section 5.3). **Integrity verification.**This approach is orthogonal to model enhancement, protecting models along another dimension. One way [15, 25, 38, 31] to apply integrity verification to defend against BFAs is that the defender extracts a ground-truth signature from the model before deployment. Once the model is deployed, new hashes are extracted during inference to compare with the ground-truth one. This approach can also be realized at the hardware or system level based on techniques such as ECC [10]. However, it has three main practical obstacles.
(1) They are restricted to specific platforms, e.g., [15] requires a new CUDA kernel for integrity protection and [26] requires new processors with targeted row refresh. Also, some techniques such as ECC are not deployed in some embedded devices such as the Nvidia Nano or Jetson AGX Xavier. (2) These methods are not absolutely secure against bit-flip attacks [10]. (3) They only detect whether a model is changed in memory, but do not provide mitigation against specific attacks. **Comparison with existing defense.**Existing model enhancement methods could effectively mitigate untargeted BFAs, but pay no attention to targeted BFAs. Compared with untargeted attacks, targeted attacks are more threatening and stealthier, as the compromised model could still behave normally on clean samples. Thus, we aim to fill this gap. Besides, Aegis is non-intrusive compared with existing methods, as we do not modify the original models or retrain them from scratch. Integrity verification approaches are realized via hardware or system-level solutions. Instead, Aegis aims to give an application-level solution that is generally effective regardless of the underlying hardware circuits, operating systems, or DL libraries. Besides, Aegis is orthogonal to integrity verification techniques such as ECC, so Aegis can provide extra protection at a different level on ECC-enabled systems. We also notice there are defense methods that are specific to DNN backdoor attacks [12, 34, 61, 66]. However, their threat models differ from that of mitigating BFAs. Specifically, they aim to detect or remove an existing backdoor in offline trojan models, whereas BFAs are usually performed on a deployed clean model under attack at runtime. ### Multi-exit DNN models The initial motivation for setting exits for inference at hidden layers is to solve the overthinking issue. Since the growing performance of modern DNN models comes with a significantly increasing number of layers and parameters in most state-of-the-art DNN models, Huang _et al._[22] point out that forcing all samples, especially canonical samples, to pass through all layers of a DNN model brings a waste of energy and time. Moreover, Kaya _et al._[27] find that samples that would be classified correctly by only a few shallow layers can be driven to a wrong prediction if forced through all layers. Many solutions have been proposed to let samples exit the model early to address the above issues [17, 18, 22, 62]. One promising technique is the shallow-deep network (SDN) [27]. The key insight of SDN is that during the inference process for a sample, it is highly possible that some layer in the middle of the network already has high confidence for the prediction, so the sample can exit the model early, without going through all the layers, to significantly reduce the inference time and energy consumption. It is very convenient to convert a vanilla DNN model (e.g. ResNet) into an SDN model. We can select some appropriate convolution layers, and attach an internal classifier (IC) to each of them to form an early exit. When the prediction confidence of the input sample for one label is higher than a threshold at an exit, the inference will stop and output that label. A proper threshold can realize early exit with a tiny accuracy loss. Deploying multi-exit model architectures such as SDN for security purposes is proposed in [21, 69] to mitigate adversarial attacks.
Particularly, an input-adaptive multi-exit DNN structure with a dynamic inference process can hinder adversarial perturbation generation and further increase the difficulty of adaptive adversarial attacks. However, since BFAs aim at manipulating model parameters rather than input samples, simply deploying multi-exit DNN structures cannot achieve the defense requirements, and more sophisticated methods are needed. ## 3 Threat Model and Defense Requirements We consider an adversarial scenario, where the adversary is able to perform BFAs against the victim DNN models. He can precisely flip a number of parameter bits to affect the model prediction results. The exploitability and practicality of such threats have been validated and evaluated in previous works [6, 44, 48, 64]. For instance, attackers verified an untargeted BFA on DNNs on a PC platform via row-hammer [48]. Also, the adversary can co-locate his malicious program on the same machine with the victim DNN model and then use methods like row-hammer [48] to perform BFAs. The feasibility of BFAs is also validated in our paper on a PC platform following previous works [44, 64] (see details in Section 5.7). Following the previous works [44, 46, 4, 64, 8], we assume the adversary has very strong capabilities. He has full knowledge of the victim model, including the DNN architecture, model parameters, etc. We further assume that the adversary knows every detail of any possible defense deployed in the system, such as the mechanism, algorithm, parameters, etc. If a defense solution employs randomization-based techniques, we assume the random numbers generated in real time are perfect, with large entropy, such that the adversary cannot obtain or guess the correct values. It is worth noting that these assumptions represent the strongest adversary, which significantly increases the difficulty of defense designs. **Adversarial goals.** Previous works have demonstrated different goals for BFAs, as summarized below: * **Untargeted attack**. The adversary aims to drastically degrade the overall accuracy of the victim model. A powerful untargeted attack can decrease the model accuracy to nearly random guessing after the exploitation [64]. * **Backdoor targeted attack**. The adversary designs a specific trigger and injects the corresponding backdoor [8] into the DNN via the BFAs. Then for any input sample containing the trigger, the compromised model will mispredict it as the target class. * **Sample-wise targeted attack**. The adversary aims to tamper with the model such that it only mispredicts a specific sample as the target class while having normal predictions for other samples. In this paper, we focus on the last two categories of targeted BFAs for two reasons. (1) As indicated in Section 2.2, a number of past works have explored mitigation approaches against untargeted BFAs. In contrast, how to effectively thwart targeted BFAs is rarely investigated. (2) The backdoor or sample-wise targeted BFAs are much stealthier than the untargeted attack, as the compromised model behaves normally for clean samples. This significantly increases the defense difficulty, and an effective solution is urgently needed. Besides, although untargeted BFAs are not within our scope, our approach can mitigate untargeted BFAs (see Section 6). **Defense requirements.** The purpose of this paper is to design a defense approach to comprehensively protect DNN models from different targeted BFAs and their possible adaptive attacks.
It is worth highlighting that our goal is to increase the attack cost rather than totally preventing BFAs. Theoretically, the adversary can tamper with more parameter bits even if a strong defense is applied. So we aim to significantly increase the number of flipped bits required to achieve the desired adversarial goal, thus making the attack less feasible or practical. Our defense requirements are as follows. * **Non-intrusive**. The defense should be easy to deploy on off-the-shelf DNN models. The defender does not need to modify parameters of the original model, e.g., by retraining a model with binarization [19] from scratch, since this can incur significant computation cost, especially for large-scale DNN models (e.g. ImageNet scale [51]). * **Platform-independent**. Previous works propose hardware or system-level solutions to prevent fault injection attacks, e.g., new CUDA kernels for integrity protection [15], new processors with targeted row refresh [26]. However, these solutions are restricted to some specific platforms. Instead, we hope to have an application-level solution that is generally effective regardless of the underlying hardware circuits, operating systems, and deep learning libraries. * **Utility-preserving**. The defense solution should have a negligible impact on the model inference process. It should preserve the usability of the original model without hugely decreasing its prediction accuracy. ## 4 Methodology ### Design Insight We propose Aegis, a novel approach to mitigate different types of targeted BFAs. Our approach is composed of a Dynamic-Exit SDN (DESDN) mechanism followed by a robust training (ROB) strategy applied only to ICs. We illustrate our design insight via three steps as follows. First, there are BFAs that only flip bits in the final layer, since the parameters of the final layer are directly related to the prediction results. It is straightforward to deduce that a multi-exit mechanism can thwart the basic BFAs that flip bits only in the final layer (as shown in Figure 1). (1) Using a multi-exit DNN structure such as the SDN can interfere with the adversarial perturbations carried by the samples (triggers generated only from the vanilla model). Malicious samples may exit early and stop inference at an arbitrary hidden layer, which generates different predictions compared with inference on the target vanilla model. (2) By forcing most samples to exit early, the flipped bits at the final layer will most likely be ignored during inference, achieving the defense goals. Second, we consider the existing more sophisticated BFAs that do not target only the final layer. For instance, the adversary may use an optimal way to locate the critical bits in the hidden layers (e.g. flipping bits in the \(K_{th}\) layer in Figure 1). Directly using the SDN structure cannot provide protection when the critical bits are flipped in the shallow layers. Moreover, in a white-box scenario, the adversary can observe the exit distribution of samples to locate critical exits for performing attacks in the corresponding critical layers (e.g. 78% of samples will exit in the last five layers of a VGG-based SDN model). Thus, we propose a Dynamic-Exit SDN (DESDN) mechanism that randomizes the exit for each inference. This DESDN can mitigate the case where the adversary flips bits in shallow layers, since the sample exits the model at a random layer, which has a low probability of containing the flipped bits.
Moreover, DESDN can push the exit distribution to a uniform one (see our experiments in Figure 5 (a)) such that there are no critical exits for the adversary to consider. Third, we further consider the most powerful adaptive attack. With the knowledge of all the details of Aegis, the adversary may include all exits to optimize his critical bit search. Although this will increase the attack cost (more bits to flip), it is possible to flip bits particularly targeting Aegis to achieve the attack regardless of where the sample exits. Our insight is to further design a robust training (ROB) strategy to find the critical and vulnerable bits for clean samples' inference process and simulate the influence when they are flipped. Note that we only perform ROB on ICs, without touching the target vanilla model, to guarantee the non-intrusive defense requirement. This can improve the robustness of ICs to mitigate the significant convolutional output change when certain bits are flipped. Therefore, the difficulty of performing bit-flip attacks will be further increased. Below we detail the two core components (DESDN and ROB) of Aegis and analyze its security against various attacks. ### Dynamic-Exit SDN (DESDN) As the first component of Aegis, DESDN consists of two steps: converting a model \(M\) to an SDN model \(\hat{M}\) (offline), and performing a random exit strategy during inference (online). #### 4.2.1 Stage 1: Constructing SDN Model We adopt the technique in prior work [27] to build the SDN model \(\hat{M}\) from \(M\), which shows negligible accuracy degradation during conversion. Specifically, we assume \(M\) consists of \(N\) internal layers \(F_{i}\), \((1\leq i\leq N)\), and ends with the final layer \(F_{final}\). For an inference sample, \(M\) performs the classification as \(M(x)=F_{final}(F_{N}(...F_{1}(x)))\). For simplicity, we denote the output of the \(i\)-th internal layer as \(F_{i}(x)\), and the output of the final layer \(M(x)\) as \(F_{final}(x)\). Then our goal is to train an IC (\(C_{i}\)) for each internal layer \(i\), as shown in Figure 2. Each \(C_{i}\) is attached to layer \(i\) and makes the prediction \(C_{i}(F_{i}(x))\), which is simplified as \(C_{i}(x)\). To restrict the size of the IC, each \(C_{i}\) only contains one convolutional layer and one dense layer. Such a simple structure makes it efficient to learn the parameters while maintaining high classification accuracy. This design is general and can be applied to different models. During construction, the defender freezes the parameters of the original model \(M\) and only trains the ICs. Note that this training process is much more efficient than training a complete model from scratch. For instance, training ICs for a vanilla model is \(3.2\sim 8.2\times\) faster than training this model. Further, there is a trade-off between IC training cost and model accuracy. For instance, if we allow a tiny accuracy drop (e.g. 2% like previous work [19]), the IC training cost will be less than 10% of the original model training and can be negligible. We leave further reduction of the training cost as future work. Figure 1: An example of flipping bits in the \(K_{th}\) or final layer. Figure 2: An example of the model attached by ICs. \(C_{i}\) denotes the \(i\)-th IC attached by us. During inference, each IC could make a prediction. For example, \(C_{i}(x)\) is the prediction of \(C_{i}\). #### 4.2.2 Stage 2: Randomizing Exits During Inference After attaching the trained ICs to \(M\), the constructed SDN model \(\hat{M}\) allows early exit.
A threshold \(\tau\) is then introduced to judge whether the inference should exit at each internal layer \(F_{i}\). Specifically, for a given sample \(x\), when the inference process reaches the \(i\)-th layer, we compute the confidence score of the corresponding IC, \(max(C_{i}(x))\). If this score is larger than \(\tau\), the process will exit from \(C_{i}\) with the corresponding output without going into deeper layers. This deterministic exit mechanism can thwart the basic BFAs, but may still be vulnerable to adaptive attacks. To further secure the inference computation, we design a dynamic exit strategy. In particular, for each query sample \(x\), among all the ICs \(C=\{C_{0},C_{1},...,C_{N},F_{final}\}\), we randomly select a set of \(q\) candidate ICs, denoted as \(\hat{C}\). Then we perform the early exit within these candidate ICs based on their confidence scores: we find the first IC \(C_{i}\) in \(\hat{C}\) whose confidence score \(max(C_{i}(x))\) is larger than the threshold \(\tau\). This layer is then selected as the early exit for this inference sample. If none of the candidate ICs can satisfy the early-exit criteria, we choose the final layer in \(\hat{C}\) as the exit for prediction. There exists a trade-off between model accuracy and security, determined by the hyper-parameter \(q\). Specifically, a smaller \(q\) can make the selected exit more random, with larger entropy. However, it also increases the probability that these \(q\) ICs cannot meet the early-exit criteria, so that the prediction has lower confidence. In Section 5 we will show that we can find an appropriate value of \(q\) that brings high stochasticity to the exit selection with negligible impact on the model accuracy.

### Robust Training on ICs

We then introduce the second core component of Aegis: ROB, which further enhances the defense effectiveness against BFAs. In particular, when the adversary flips bits in the \(i\)-th layer of \(M\), it could still possibly affect any ICs after this layer with a certain probability, as the layer output \(F_{i}(x)\) passes to these ICs and affects their predictions. To reduce such impacts, we propose ROB to improve the robustness of the attached ICs. Note that ROB is performed only on ICs, so the target model itself is not modified. The key insight of ROB is to help ICs adapt to the cases where critical bits are flipped in \(M\). In general, without prior knowledge of the BFAs, we construct a bit-flipped model by considering only the benign samples' inference process to simulate the target compromised model. Then, we craft new samples based on this bit-flipped model for ROB. These training samples simulate the outputs of flipped layers, and the ICs learn from such data to correct their predictions under adversarial scenarios. Figure 3 illustrates the basic idea of this approach, which consists of two steps.

The first step is to construct the flipped model. For defense generality, we assume the defender does not know the exact attack methods to mitigate. In this case, to make the flipped model closer to the real-world victim model, we design a vulnerable-protection algorithm (VPA) to figure out vulnerable bits that could potentially be flipped. Our VPA aims to find bits that are critical to the model decisions. Such bits might significantly affect the prediction results and are vulnerable to being flipped by the adversary. We note that existing attack methods [4, 46, 64, 8, 45] treat gradients w.r.t. bits as a key component to select flipped bits. Indeed, the gradients of the model output w.r.t. bits reflect the importance of bits in model decisions. Inspired by this, the basic idea of our VPA is to select critical bits according to the gradients of the inference loss \(\mathcal{L}_{inf}\) w.r.t. bits, where \(\mathcal{L}_{inf}\) is defined as follows:

\[\mathcal{L}_{inf}=\mathcal{L}_{ce}(F_{final}(x);l), \tag{1}\]

where \(\mathcal{L}_{ce}\) is the cross-entropy loss and \(l\) is the ground-truth label of the input sample \(x\). Below we describe VPA in detail. In particular, given a target model \(M\), we denote all bits in \(M\) as \(B\). We first establish a substitute model which is exactly the same as \(M\). In each iteration of VPA, (1) we follow previous work [45] to calculate the gradients of \(\mathcal{L}_{inf}\) w.r.t. each bit \(b\) (\(b\in B\)), denoted as \(\nabla_{b}\mathcal{L}_{inf}\); (2) we then rank the vulnerability of bits in descending order of the absolute value of their \(\nabla_{b}\mathcal{L}_{inf}\), and select the bits with the top-\(k\) gradients; (3) we treat these bits as vulnerable bits, and flip them. We iterate the above process until the maximum iteration budget \(N_{\text{vpa}}\) is exhausted to obtain the flipped model. Note that in each iteration, VPA must recalculate the gradients of each bit: as the model is dynamically changed by the flipped bits, old gradients could not reflect the importance of bits in the newly changed model decisions. After this step, we obtain the flipped model \(M^{\prime}\), which consists of \(N\) internal layers \(F^{\prime}_{i}\) (\(1\leq i\leq N\)).

The second step is to synthesize training samples for ROB. We freeze the original weights and only train the weights in ICs. Given an input sample \(x\) from the original training set, we denote the output of layer \(i\) of the flipped model \(M^{\prime}\) as \(F^{\prime}_{i}(x)\). \(F^{\prime}_{i}(x)\) then serves as a special training sample for the IC attached to layer \(i\). It can help the IC better adapt to the attack scenarios in advance, so as to improve the robustness of the IC. During training, in addition to \(F^{\prime}_{i}(x)\), we also apply the original training data \(F_{i}(x)\) provided by \(M\), so as to ensure the accuracy of the ICs.

Figure 3: Illustration of ROB. We first construct the flipped model \(M^{\prime}\), and then synthesize training samples for ROB. In particular, we use \(M^{\prime}\) to generate special training data, e.g., \(F^{\prime}_{1}(x)\). Then, we apply these special training data, together with the original training data, to train the ICs. For example, we use \(F_{1}(x)\) and \(F^{\prime}_{1}(x)\) to train \(C_{1}\).
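A compact sketch of the VPA loop described above is given below. Real BFAs rank individual bits of quantized weights; for brevity this sketch ranks whole weights by the gradient magnitude of \(\mathcal{L}_{inf}\) as a proxy for bit-level ranking, and emulates a flip by negating the selected weight (roughly a sign-bit flip). The function names and these simplifications are our own assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def vpa_flip(model, x, y, k=1, n_vpa=5):
    """Sketch of the vulnerable-protection algorithm (VPA): in each of the
    N_vpa rounds, recompute gradients of L_inf w.r.t. the substitute model,
    pick the top-k most sensitive entries, and emulate flipping them.
    Gradients are recomputed every round because the flips change the model."""
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(n_vpa):
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()            # L_inf of Eq. (1)
        grads = torch.cat([p.grad.abs().flatten() for p in params])
        top = grads.topk(k).indices                        # most vulnerable entries
        with torch.no_grad():
            flat = torch.cat([p.data.flatten() for p in params])
            flat[top] = -flat[top]                         # emulated bit flip
            offset = 0                                     # scatter values back
            for p in params:
                n = p.numel()
                p.data.copy_(flat[offset:offset + n].view_as(p))
                offset += n
    return model  # the flipped model M', whose layer outputs F'_i(x) feed ROB
```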
### Security Analysis

After introducing the design of Aegis, we give a comprehensive qualitative analysis of the resilience of this methodology against different types of BFAs. The first two are basic attacks from existing works, while the last one depicts a possible adaptive strategy targeting our new defense mechanism.

**Attack I**: the adversary only flips the bits in the final layer of the model. This strategy is adopted in TBT [46] and TA-LBF [4]. The basic SDN model can defeat these two attacks. Formally, the TBT attack defines a target class \(t\) and a specific trigger \(\Delta\) for activating the backdoor in the victim model. Given a clean input sample \(x\) with the ground-truth label \(l\), the attacker aims to make the compromised model mispredict \(x+\Delta\) as \(t\). Identification of the critical bits can be modeled as an optimization problem, which aims to minimize the following loss function:

\[\mathcal{L}_{\text{TBT}}=\mathcal{L}_{\text{ce}}(F_{final}(x);l)+\mathcal{L}_{\text{ce}}(F_{final}(x+\Delta);t), \tag{2}\]

where \(\mathcal{L}_{\text{ce}}\) is the cross-entropy loss function. We observe that this loss function only considers the final layer.
As a result, the basic SDN model is able to thwart this attack, as most inference samples will exit earlier, before the final layer \(F_{final}\), and will not be affected by the flipped bits. Similarly, TA-LBF is a sample-wise attack, which aims to cause the misclassification of a specific sample \(x\) from its ground-truth label \(l\) to the target label \(t\). This can also be formulated as an optimization problem:

\[\mathcal{L}_{\text{TA-LBF}}=\mathcal{L}_{1}(x;l;t;F_{final})+\lambda\mathcal{L}_{2}(x;l;t;F_{final}), \tag{3}\]

where \(\lambda\) is a hyperparameter, and \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are two specific loss functions that ensure the attack effectiveness and stealthiness, respectively. We observe that this loss function is also only related to the final layer \(F_{final}\). With an SDN, the sample \(x\) has a high chance of exiting the model earlier than the final layer, and will not be affected by the flipped bits in \(F_{final}\).

**Attack II**: the adversary flips bits in arbitrary layers based on a more sophisticated search method, as exploited in ProFlip [8]. Specifically, the adversary selects a trigger \(\Delta\). Given a clean input sample \(x\) and the target class \(t\), the adversary aims to search for critical bits in all the layers to inject the backdoor. This process is also modeled as an optimization problem with the following loss function:

\[\mathcal{L}_{\text{ProFlip}}=\mathcal{L}_{\text{ce}}(F_{final}(x+\Delta);t)-\mathcal{N}(x+\Delta), \tag{4}\]

where \(\mathcal{N}\) denotes the salient neurons, determined by the target label through conducting the Jacobian Saliency Map Attack (JSMA) [43]. In particular, the adversary calculates the gradients of the model inference \(F_{final}(x)\) w.r.t. each neuron output, and then selects the neurons with top gradient values as \(\mathcal{N}\). We observe that the joint impact of all the searched bits can only affect the output of \(F_{final}\). Samples that exit earlier from the ICs might still give the correct prediction results.

**Attack III**: based on the above analysis, we conclude that the basic BFAs that flip bits in certain layers cannot break the basic SDN. So we further assume a stronger adversary, who knows our Aegis mechanism and aims to design an adaptive strategy to break it. He may design a loss function \(\mathcal{L}^{*}\) that considers all the ICs to optimize. However, breaking our defense is still difficult, as explained below:

(1) For adaptive attacks based on Attack I (i.e., adaptive TBT and TA-LBF), considering all the ICs could identify the vulnerable bits that can affect each exit. However, this could significantly increase the attack cost (i.e., the number of bits to flip). In particular, to design an adaptive TBT, we consider the following loss function, which includes the ICs as well:

\[\mathcal{L}^{*}_{\text{TBT}}=\mathcal{L}_{\text{TBT}}+\sum_{i=1}^{N}\mathcal{L}_{\text{ce}}(C_{i}(x);l)+\mathcal{L}_{\text{ce}}(C_{i}(x+\Delta);t). \tag{5}\]

In this case, the adversary flips bits in the final layer of each IC as well as the final layer of \(M\). Recall that in the TBT algorithm in Section 2.1, the adversary sets a fixed number of candidate parameters in the final layer as \(w_{b}\). For each candidate parameter, the adversary figures out several bits to flip. This implies that the number of flipped bits is positively correlated with \(w_{b}\). In adaptive scenarios, since each IC needs to be attacked, the adversary needs to modify \(w_{b}\) parameters in the final layer of each IC. This results in a scale of \(n\times w_{b}\) parameters to compromise, which is a huge cost for the adversary, especially when \(n\) is large. The adaptive TA-LBF can be designed in a similar way by including the ICs in the loss function as follows:

\[\mathcal{L}^{*}_{TA-LBF}=\mathcal{L}_{TA-LBF}+\sum_{i=1}^{N}\mathcal{L}_{1}(x;l;t;C_{i})+\mathcal{L}_{2}(x;l;t;C_{i}). \tag{6}\]

Increasing the number of candidate layers also results in a larger number of bits to flip, since the adversary needs to ensure all the exits are affected.

(2) For adaptive attacks based on Attack II (i.e., adaptive ProFlip), although the adversary considers all the ICs, he might still fail to attack each exit. Samples exiting from the unattacked ICs are not affected, and the attacks are mitigated. In particular, for the adaptive ProFlip attack, the adversary optimizes the following loss function:

\[\mathcal{L}^{*}_{ProFlip}=\mathcal{L}_{ce}(F_{final}(x);t)+\sum_{i=1}^{N}\mathcal{L}_{ce}(C_{i}(x);t)-\widetilde{\mathcal{N}}(x+\Delta). \tag{7}\]

Different from \(\mathcal{N}\), the adversary considers all ICs as well as the final layer, i.e., \(\sum_{i=1}^{N}C_{i}(x)+F_{final}(x)\). In particular, the adversary adopts JSMA to calculate the gradients of \(\sum_{i=1}^{N}C_{i}(x)+F_{final}(x)\) w.r.t. each neuron output, and then selects the neurons with top gradients, denoted as \(\widetilde{\mathcal{N}}\). The adversary selects the optimal parameter to modify by optimizing Eq. 7, and the optimal parameter might be located in any layer. Here, the determination of the optimal parameter is highly dependent on \(x\), given \(t\), \(\Delta\), and the target model. For example, on CIFAR-100 and VGG16, we repeat the ProFlip attack 100 times, and randomly select 256 different input samples \(x\) each time. For each independent attack, we observe that the optimal parameter can be determined in different layers of \(\hat{M}\). Assume the optimal parameter is located in the \(i\)-th layer. In this case, all ICs attached before the \(i\)-th layer are not destroyed by the adversary, so the samples exiting from these ICs are not affected. Moreover, for the ICs attached after the \(i\)-th layer, our ROB could reduce the impacts brought by the flipped bits. Another case is that the optimal parameter is located in an IC. Then only that IC is attacked by the adversary, and all other ICs are not affected.
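For concreteness, the adaptive TBT objective of Eq. (5) could be assembled as in the sketch below. The callables `final_logits` and `ic_logits` are hypothetical stand-ins we introduce for illustration (they return logits for \(F_{final}\) and each \(C_{i}\)); the bit-search step that minimizes this loss is omitted.

```python
import torch.nn.functional as F

def adaptive_tbt_loss(final_logits, ic_logits, x, delta, y, t):
    """Eq. (5) sketch: keep clean samples x correct at every exit while
    steering triggered samples x + delta to the target class t at every
    exit, so that no early exit escapes the backdoor."""
    loss = F.cross_entropy(final_logits(x), y) \
         + F.cross_entropy(final_logits(x + delta), t)   # L_TBT of Eq. (2)
    for ic in ic_logits:                                 # sum over all ICs
        loss = loss + F.cross_entropy(ic(x), y) \
                    + F.cross_entropy(ic(x + delta), t)
    return loss
```

As the analysis above notes, minimizing such an objective forces the adversary to touch parameters in every IC's final layer, which is what inflates the attack cost to roughly \(n\times w_{b}\) parameters.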
Each image also has the same size as CIFAR-10, but belongs to one of 100 classes. * STL-10 [9]: This dataset contains 5,000 training images and 8,000 test images. We note that it also contains 100,000 unlabeled images. Each image has a size of \(96\times 96\times 3\) and belongs to one of 10 classes. * Tiny-ImageNet [1]: This dataset is a simplified version of ImageNet consisting of color images with a size of \(64\times 64\times 3\) belonging to 200 classes. Each class has 500 training images and 50 testing images. Inspired by [3], we strictly separate the training and testing data without any overlap. In particular, (1) CIFAR-10: we follow [7, 19, 42, 60] to select 50,000/10,000 images for training/testing. (2) CIFAR-100: we follow [5, 28] to select 50,000/10,000 images for training/testing. (3) STL-10: we follow [58, 59] to select 5,000/8,000 images for training/testing. (4) Tiny-ImageNet: we follow [59, 60] to select 100,000/10,000 images for training/testing.

**Hyperparameters.** As mentioned in Section 4.2.2, \(\tau\) and \(q\) affect the early-exit distribution and model accuracy. For generalization on unseen data and to avoid selecting biased hyperparameters, we tune the hyperparameters on training data guided by two goals: (1) making early exits uniformly distributed to prevent the attacker from targeting only a few popular exits; (2) maintaining high ACC on benign samples. Table 1 lists the values of these hyperparameters. Note that we tune these hyperparameters without considering any specific attacks. They are general and fixed to mitigate all attacks in our consideration. We also evaluate the sensitivity of these hyperparameters and find the mitigation results are stable for hyperparameters that meet the two goals (see Appendix A). We also conduct ROB for the ICs. The only randomness of Aegis comes from the random selection of ICs during inference. We repeated each experiment 10 times, and the ASR variance is below 2%, which does not affect our conclusions.

\begin{table} \begin{tabular}{c c c c} \hline \hline **Dataset** & **Model** & \(\tau\) & \(q\) \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet32 & 0.95 & 3 \\ \cline{2-4} & VGG16 & 0.95 & 3 \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet32 & 0.90 & 5 \\ \cline{2-4} & VGG16 & 0.80 & 3 \\ \hline \multirow{2}{*}{STL-10} & ResNet32 & 0.95 & 5 \\ \cline{2-4} & VGG16 & 0.95 & 3 \\ \hline \multirow{2}{*}{Tiny-ImageNet} & ResNet32 & 0.90 & 3 \\ \cline{2-4} & VGG16 & 0.95 & 4 \\ \hline \hline \end{tabular} \end{table} Table 1: Values of \(\tau\) and \(q\) on our datasets and models.

**Baselines.** We compare Aegis with the state-of-the-art defense methods BIN [19] and RA-BNN [49]. We also compare Aegis with the basic SDN [27] to demonstrate the effectiveness of the DESDN and ROB mechanisms. For a fair comparison, we slightly modify SDN to make sure Aegis uses the same structure and hyperparameters as SDN, except for the DESDN and ROB mechanisms. Besides, we compare Aegis with the baseline models (BASE) with no defense.

### Model Utility Evaluation

A qualified defense method should preserve the utility, i.e., incur only a tiny model accuracy (ACC) drop. Table 2 compares the impacts of different methods on ACC. We observe that Aegis degrades ACC only slightly, by less than 2%, while BIN and RA-BNN cause much larger ACC degradation, roughly \(2\sim 12\%\) and \(2\sim 7\%\), respectively. This is because BIN and RA-BNN adopt an aggressive binarization of weights (each parameter occupies only 1 bit) or activation function outputs, which harms the model ACC.
We also notice that SDN has comparable ACC with Aegis, which validates that the DESDN mechanisms in Aegis do not affect the model utility. In summary, Aegis preserves the model utility.

### Mitigating Targeted Attacks

We evaluate the defense effectiveness of Aegis against the state-of-the-art targeted attacks, including two backdoor targeted BFAs (TBT [46] and ProFlip [8]) and one sample-wise targeted BFA (TA-LBF [4]). We reproduce these BFAs with their open-source code and the recommended parameters (Appendix B) and list the visual results (e.g., triggers and samples) in Appendix C. The possible adaptive attacks based on these BFAs are evaluated in Section 5.4.

**Metrics.** When applying an attack to all defense methods, we compare the attack success rate (ASR) of each defense method. For fair comparisons, we make sure a given attack pays the same attack cost on each defense method, i.e., flipping the same number of bits. We denote the number of flipped bits as \(N_{b}\) and consider different \(N_{b}\) in two steps. First, we restrict the bit-flipping limit (\(N_{b}\)) to 50 for all attacks. Such a limit is set as 24 in the state-of-the-art Rowhammer attack on cloud platforms [24]: it tests a batch of high-quality dual inline memory modules (DIMMs) and reveals that flipping 24 bits needs a significantly long time (several hours). We also confirm the feasibility of performing BFAs on physical systems (in Section 5.7) to illustrate that flipping 50 bits is very difficult for attackers. Furthermore, we relax the restriction on \(N_{b}\) (up to 500) and evaluate the ASR for comprehensiveness. In the following, we first evaluate all defense methods under TBT and TA-LBF, as they both flip bits in the final layer of the target model. We then evaluate all defense methods under ProFlip, which can flip bits in any layer of the target model.

**Mitigating TBT.** Table 3 shows the defense results against TBT. Aegis can significantly decrease the ASR and outperforms the other methods. In most cases, Aegis decreases the ASR to less than 20%, which is significantly lower than the others. In contrast, we note that BIN and RA-BNN even perform worse than BASE in some cases. For example, on CIFAR-10 and VGG16, the ASR for BASE is 71.1%, which is lower than that of BIN (90.4%) and RA-BNN (82.9%). This means the defenses designed for untargeted BFAs might make models even more vulnerable to targeted BFAs.

**Mitigating TA-LBF.** Table 4 shows the ASR on all datasets and models against TA-LBF. Overall, Aegis effectively mitigates TA-LBF and outperforms the other methods. In most cases, Aegis limits the ASR below 10.0%, which is much smaller than the others. For example, on STL-10 and ResNet32, Aegis limits the ASR to 9.6%, while the ASR for BASE, BIN, RA-BNN, and SDN is 100.0%, 100.0%, 100.0%, and 47.7%, respectively. The highest ASR for Aegis is 20.1%, on Tiny-ImageNet and ResNet32, which is still much smaller than the ASR in the other cases.

**Summary for the evaluation of TBT and TA-LBF.** We summarize the defense evaluation against TBT and TA-LBF as follows. These two attacks only flip bits in the final layer, which cannot affect the ICs. Aegis preserves the model ACC well and lets many input samples exit early from ICs, thus effectively mitigating TBT and TA-LBF.
We notice that SDN also performs better than BASE, BIN, and RA-BNN in most cases. However, SDN uses a static multi-exit mechanism, making it less effective than Aegis for adaptive attacks (Section 5.4).

Table 2: Model ACC influence evaluation.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**ASR (\%)**} \\ \cline{3-7} & & **BASE** & **BIN** & **RA-BNN** & **SDN** & **Aegis** \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet32 & 70.7 & 94.8 & 74.5 & **16.3** & 19.9 \\ \cline{2-7} & VGG16 & 71.1 & 90.4 & 82.9 & 42.6 & **36.0** \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet32 & 95.8 & 99.8 & 25.5 & 20.5 & **10.8** \\ \cline{2-7} & VGG16 & 65.9 & 58.4 & 47.4 & 53.8 & **10.6** \\ \hline \multirow{2}{*}{STL-10} & ResNet32 & 100.0 & 72.5 & 29.4 & 47.1 & **13.0** \\ \cline{2-7} & VGG16 & 64.1 & 99.7 & 88.0 & **9.0** & 10.5 \\ \hline \multirow{2}{*}{Tiny-ImageNet} & ResNet32 & 100.0 & 63.3 & 31.4 & 65.8 & **27.9** \\ \cline{2-7} & VGG16 & 69.7 & 72.3 & 40.2 & 48.9 & **10.1** \\ \hline \end{tabular} \end{table} Table 3: Evaluation results of ASR against TBT.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**ASR (\%)**} \\ \cline{3-7} & & **BASE** & **BIN** & **RA-BNN** & **SDN** & **Aegis** \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet32 & 100.0 & 100.0 & 100.0 & **3.5** & 6.3 \\ \cline{2-7} & VGG16 & 57.6 & 100.0 & 100.0 & 1.1 & **0.3** \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet32 & 100.0 & 100.0 & 100.0 & 38.0 & **16.4** \\ \cline{2-7} & VGG16 & 56.4 & 100.0 & 100.0 & 19.4 & **4.4** \\ \hline \multirow{2}{*}{STL-10} & ResNet32 & 100.0 & 100.0 & 100.0 & 47.7 & **9.6** \\ \cline{2-7} & VGG16 & 81.4 & 99.7 & 98.7 & **0.3** & 2.0 \\ \hline \multirow{2}{*}{Tiny-ImageNet} & ResNet32 & 100.0 & 100.0 & 100.0 & 71.1 & **20.1** \\ \cline{2-7} & VGG16 & 51.8 & 98.1 & 90.7 & 27.2 & **17.3** \\ \hline \end{tabular} \end{table} Table 4: Evaluation results of ASR against TA-LBF.

**Mitigating ProFlip.** Different from TBT and TA-LBF, ProFlip does not restrict the layer but uses an optimization method to flip critical bits in arbitrary layers. Table 5 shows the results, in which Aegis limits the ASR below 30% in most cases. In contrast, the ASR for BASE, BIN, and RA-BNN is higher than 70% in most cases. For example, on the CIFAR-10 dataset with VGG16, Aegis limits the ASR to 28.9%, whereas the adversary achieves ASRs of 88.2%, 78.6%, and 84.6% against BASE, BIN, and RA-BNN. Aegis ensures that the flipped bits cannot effectively affect samples that exit earlier.
This is also why SDN has comparable defense performance in some cases.

**Evaluation with more bits flipped.** Here we do not restrict \(N_{b}\) to 50 but consider larger \(N_{b}\) to compare Aegis with the other methods. We take CIFAR-100 and VGG16 as an example and evaluate all defense methods as \(N_{b}\) increases up to 500. Evaluation results are shown in Figure 4. Compared with existing defense methods, we observe that the ASR for Aegis remains at a low level for all attacks. For TBT (Figure 4 (a)) and TA-LBF (Figure 4 (b)), even if the attacker flips as many as 500 bits, the ASR is still less than 13%, which demonstrates the defense effectiveness of our method. For ProFlip (Figure 4 (c)), the ASR increases slowly as more bits are flipped, and Aegis clearly outperforms all baselines. Even if the attacker flips as many as 500 bits, Aegis restricts the ASR of ProFlip to 58.3%, while the ASR in the other cases already reaches 100% with significantly fewer bits flipped. Note that when more bits are flipped, Aegis clearly outperforms SDN against all three attacks. In particular, for ProFlip, the ASR of SDN reaches almost 100% when about 200\(\sim\)300 bits are flipped. Thus, we claim that simply deploying a multi-exit DNN structure on a target vanilla model cannot sufficiently defeat these targeted BFAs, even without considering their adaptive versions. Aegis includes two components, and an ablation study evaluating their respective defense effectiveness is given in Section 5.6.

### Mitigating Adaptive Attacks

Beyond basic adversaries, any effective defense should also be capable of withstanding adaptive attackers who are aware of the existence and mechanism of the defense. We consider a sophisticated adversary who knows the details of our defense mechanism and aims to design an adaptive strategy to break it. We consider crafting such advanced attacks from the three state-of-the-art attack methods (i.e., TBT, TA-LBF, and ProFlip). As analyzed in Section 4.4, the adversary knows all details of Aegis and tries to attack all ICs to increase the ASR regardless of where the input samples exit. He can design new loss functions dedicated to Aegis by adding optimization terms for all ICs. Thus, the adaptive TBT and TA-LBF attacks will flip bits in the final layer of all ICs in addition to the final layer of the model. All other attack settings and models are the same as in the previous sections. In the following, we evaluate the effectiveness of Aegis against these adaptive attack scenarios. We again first set \(N_{b}\) to 50 for all attacks. Then, we relax the restriction on \(N_{b}\) and evaluate the ASR when the adversary can flip more bits.

**Baselines.** Since BIN is proven to make the victim model even more vulnerable to targeted BFAs, we do not consider it as a baseline defense and mainly compare with SDN. Both Aegis and SDN have ICs attached to the hidden layers for an early-exit mechanism, enabling the adversary to design a similar dedicated adaptive attack against them. We also report the ASR of BASE to reflect the defense effectiveness.

**Adaptive TBT.** Table 6 shows the ASR of each defense method. We observe that SDN can be defeated by the adaptive TBT attack. By including all ICs of the SDN model in the adaptive TBT attack, the ASR of SDN is even higher than that of BASE in many cases. In contrast, our Aegis restricts the ASR below 40% in most cases.
This is because the adaptive TBT attack flips bits of critical parameters in the ICs and the model's final layer to manipulate inference on all exits of the SDN model. However, Aegis still effectively mitigates this strategy due to the random exit mechanism.

**Adaptive TA-LBF.** Table 7 reports the defense results against adaptive TA-LBF, indicating the effectiveness of Aegis. Similarly, SDN becomes even more vulnerable than BASE under the adaptive TA-LBF attack. Compared with the basic SDN, Aegis still effectively mitigates the adaptive TA-LBF attack due to the DESDN.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{5}{c|}{**ASR (\%)**} \\ \cline{3-7} & & **BASE** & **BIN** & **RA-BNN** & **SDN** & **Aegis** \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet32 & 96.9 & 99.4 & 90.6 & 47.3 & **19.8** \\ \cline{2-7} & VGG16 & 88.2 & 78.6 & 84.6 & 70.5 & **28.9** \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet32 & 89.8 & 100.0 & 82.9 & 58.3 & **19.2** \\ \cline{2-7} & VGG16 & 80.0 & 80.4 & 70.5 & 64.9 & **20.3** \\ \hline \multirow{2}{*}{STL-10} & ResNet32 & 77.4 & 52.4 & 91.2 & 58.1 & **33.9** \\ \cline{2-7} & VGG16 & 87.2 & 96.0 & 90.3 & 19.9 & **18.7** \\ \hline \multirow{2}{*}{Tiny-ImageNet} & ResNet32 & 99.1 & 82.5 & 80.4 & 75.0 & **20.1** \\ \cline{2-7} & VGG16 & 88.2 & 44.1 & 39.2 & 26.8 & **15.6** \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluation results of ASR against ProFlip.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**ASR (\%)**} \\ \cline{3-5} & & **BASE** & **SDN** & **Aegis** \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet32 & 70.7 & 37.2 & **31.1** \\ \cline{2-5} & VGG16 & 71.1 & 86.5 & **58.1** \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet32 & 95.8 & 79.3 & **49.7** \\ \cline{2-5} & VGG16 & 65.9 & 85.9 & **44.8** \\ \hline \multirow{2}{*}{STL-10} & ResNet32 & 100.0 & 35.0 & **31.8** \\ \cline{2-5} & VGG16 & 64.1 & 93.0 & **27.0** \\ \hline \multirow{2}{*}{Tiny-ImageNet} & ResNet32 & 100.0 & 96.3 & **28.2** \\ \cline{2-5} & VGG16 & 69.7 & 63.4 & **54.4** \\ \hline \hline \end{tabular} \end{table} Table 6: Evaluation results of ASR against adaptive TBT.

**Summary of evaluating adaptive TBT and TA-LBF.** The reason why Aegis outperforms SDN against adaptive TBT and TA-LBF is the DESDN scheme. Since SDN uses a static multi-exit mechanism, input samples have a stable exit pattern, i.e., most samples exit from a few fixed ICs. We give an example on CIFAR-100 and VGG16, showing the proportion of samples exiting from each exit in Figure 5. In Figure 5 (a), we observe that SDN lets more than 75% of samples exit from three exits, i.e., \(IC_{9}\), \(IC_{13}\), and \(IC_{15}\). Once the adversary modifies TBT and TA-LBF adaptively by including the loss for all ICs, its optimization process will exploit this exit pattern and locate the adaptive critical bits in these ICs to perform BFAs. Aegis adopts the DESDN mechanism to make input samples exit from all ICs uniformly. Even if the adversary adaptively attacks all ICs, the uniformly distributed exits will increase the attack cost (more bits to flip), thus enhancing security. Therefore, under the same \(N_{b}\), Aegis significantly outperforms SDN. We further consider whether the exit distributions of these defenses can be affected by the adaptive attack.
Taking TA-LBF as an example, we denote the compromised SDN and Aegis as SDN-Flipped and Aegis-Flipped, respectively. Figures 5 (b) and (c) show that the original and flipped models of each method have similar exit distributions. These results reveal that the distribution of Aegis is still uniform under the attack, giving the adversary no chance to exploit the exit pattern to locate the critical parameters of critical ICs. In contrast, the flipped SDN model exhibits the same exit distribution (i.e., vulnerability) as the original one.

**Adaptive ProFlip.** Table 8 shows the defense results for the adaptive ProFlip attack. Compared with SDN and BASE, the ASR of Aegis is always significantly lower on all datasets and models. Taking CIFAR-100 and ResNet32 as an example, the ASR of Aegis is 25.8%, while the ASR of SDN and BASE is 69.1% and 89.8%, respectively. The main reason is that, with the optimization function determining the critical parameters, the flipped bits are more likely to be concentrated in one layer. Thus, the adversary cannot effectively affect the ICs before the modified layer, making Aegis resilient against this attack. This also explains why SDN has relatively good defense results in some cases. However, SDN is much inferior to Aegis as it only uses a static early-exit mechanism. Such a static mechanism makes the number of affected ICs in SDN greater than that in Aegis. Besides DESDN, our ROB contributes to the defense, which will be evaluated in Section 5.6.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**ASR (\%)**} \\ \cline{3-5} & & **BASE** & **SDN** & **Aegis** \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet32 & 100.0 & 99.1 & **60.8** \\ \cline{2-5} & VGG16 & 70.2 & 89.3 & **50.3** \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet32 & 100.0 & 100.0 & **26.4** \\ \cline{2-5} & VGG16 & 56.4 & 78.2 & **44.8** \\ \hline \multirow{2}{*}{STL-10} & ResNet32 & 100.0 & 100.0 & **10.2** \\ \cline{2-5} & VGG16 & 81.4 & 89.9 & **26.8** \\ \hline \multirow{2}{*}{Tiny-ImageNet} & ResNet32 & 100.0 & 100.0 & **16.2** \\ \cline{2-5} & VGG16 & 51.8 & 90.4 & **15.0** \\ \hline \end{tabular} \end{table} Table 7: Evaluation results of ASR against adaptive TA-LBF.

Figure 4: Comparison between Aegis and other defense methods on CIFAR-100 with VGG16 when more bits are flipped. Aegis performs better than the other methods: the ASR for Aegis is much lower than the others under different numbers of flipped bits.

Figure 5: The proportion of samples exiting from different ICs or the final layer (15 denotes the final layer) on CIFAR-100 and VGG16. Samples exit more uniformly in Aegis than in SDN, even under BFA attacks (Aegis-Flipped and SDN-Flipped).

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**ASR (\%)**} \\ \cline{3-5} & & **BASE** & **SDN** & **Aegis** \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet32 & 96.9 & 74.2 & **38.4** \\ \cline{2-5} & VGG16 & 88.2 & 79.1 & **43.6** \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet32 & 89.8 & 69.1 & **25.8** \\ \cline{2-5} & VGG16 & 80.0 & 92.4 & **33.7** \\ \hline \multirow{2}{*}{STL-10} & ResNet32 & 77.4 & 57.8 & **41.3** \\ \cline{2-5} & VGG16 & 87.2 & 87.5 & **34.5** \\ \hline \multirow{2}{*}{Tiny-ImageNet} & ResNet32 & 99.1 & 64.4 & **36.1** \\ \cline{2-5} & VGG16 & 88.2 & 73.1 & **40.8** \\ \hline \end{tabular} \end{table} Table 8: Evaluation results of ASR against adaptive ProFlip.
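The exit uniformity shown in Figure 5 can be quantified directly. The small helper below (our own illustrative code, not from the paper) computes the entropy of an empirical exit distribution; a value near \(\log_{2}\) of the number of exits indicates the near-uniform behavior DESDN aims for, while a low value flags the concentrated pattern (e.g., SDN's \(IC_{9}\), \(IC_{13}\), and \(IC_{15}\)) that adaptive attacks exploit.

```python
import math
from collections import Counter

def exit_entropy(exit_indices, num_exits):
    """Entropy (in bits) of the empirical exit distribution collected over a
    test set, e.g., exit_indices = [3, 7, 15, 9, ...]."""
    counts = Counter(exit_indices)
    total = len(exit_indices)
    probs = [counts.get(i, 0) / total for i in range(num_exits)]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform 16-exit model scores log2(16) = 4 bits; a model that sends most
# samples through three fixed exits scores well below that.
```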
**Targeting shallow hidden layers.** We further consider another adaptive attack, which focuses on shallow hidden layers to flip bits (denoted as **Shallow**). Note that only ProFlip can be extended to such an adaptive attack, since TBT and TA-LBF must modify parameters connected to the target class, which are located in the last dense layer. In particular, we modify ProFlip to choose critical bits among the first three hidden layers. Table 9 shows the results of Shallow. We observe that attacking shallow layers is not effective in bypassing Aegis. Indeed, flipping bits in shallow layers cannot guarantee successful targeted attacks on the outputs of the following hidden layers, while Aegis lets samples randomly exit from arbitrary layers to limit the ASR. Besides, targeting shallow layers may significantly decrease clean ACC (\(6.2-27.8\%\)), while the original ProFlip degrades it by only \(2\%\). This makes such adaptive attacks easy to detect.

**Evaluation with more bits flipped.** We further evaluate the defense methods with up to \(500\) bits flipped for comprehensiveness. We conduct experiments on CIFAR-100 and VGG16 and adopt the adaptive TBT, TA-LBF, and ProFlip attacks. Results are shown in Figure 6. Compared with existing defense methods, under the same value of \(N_{b}\), we observe that the ASR of Aegis is always the lowest against all attacks.

### Evaluation of Model Size

Aegis can increase the model size during deployment. We evaluate the model size increase for all datasets and model structures we use and find models ranging from \(6.1\)MB to \(97.6\)MB. We find that the size increase depends on the datasets and model structures. In particular, Aegis introduces a tiny increase for VGG16. For example, on CIFAR-10, the size of VGG16 is \(58.3\)MB and \(65.6\)MB for BASE and Aegis, respectively. In contrast, ResNet is a small model, so the increase is relatively larger. For instance, also on CIFAR-10, the size of ResNet32 is \(1.9\)MB and \(6.1\)MB for BASE and Aegis. We emphasize that such a size increase is not a bottleneck for model deployment in practice. For common embedded devices (e.g., Nvidia Jetson Nano), the memory capacity is usually at the GB level, which is far more than sufficient to support Aegis. Besides, Aegis significantly improves the inference efficiency with almost no accuracy drop, since most samples can exit early from the network. Such benefits, at the cost of an acceptable size increase, are very attractive for inference applications at the edge. To validate the above points, we deploy Aegis on two widely used real-world edge devices: (1) Nvidia Jetson Nano with \(4\)GB memory and \(16\)GB storage; (2) Raspberry Pi \(4\) with \(4\)GB memory and \(32\)GB storage. The size increase is entirely affordable for these two devices. We further evaluate the inference acceleration brought by Aegis. We observe that the average inference time is \(46.1-59.4\%\) of the original model, which is a significant improvement.

### Ablation Study on ROB

We verify the effectiveness of ROB by comparing Aegis with the setting without ROB, i.e., just DESDN. Note that ROB aims to help ICs adapt to the adversarial scenario where bits in the layers attached by ICs are flipped. Therefore, we choose ProFlip to evaluate the effectiveness of ROB, as it can flip bits in the layers attached by ICs. Table 10 shows that ROB effectively improves the defense results. For the basic attacks, the ASR of Aegis is \(3\%-13\%\) lower than that of DESDN.
For the adaptive attacks, Aegis also performs better than DESDN, reducing the ASR by \(4\%-15\%\). Overall, we show that ROB effectively contributes to mitigating targeted BFAs.

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{3}{*}{**Dataset**} & \multirow{3}{*}{**Model**} & \multicolumn{4}{c}{**ASR (\%)**} \\ \cline{3-6} & & \multicolumn{2}{c|}{**Basic ProFlip**} & \multicolumn{2}{c}{**Adaptive ProFlip**} \\ \cline{3-6} & & **DESDN** & **Aegis** & **DESDN** & **Aegis** \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet32 & 24.1 & **19.8** & 45.1 & **38.4** \\ \cline{2-6} & VGG16 & 33.7 & **28.9** & 49.4 & **43.6** \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet32 & 29.8 & **19.2** & 39.2 & **25.8** \\ \cline{2-6} & VGG16 & 28.7 & **20.3** & 51.4 & **33.7** \\ \hline \multirow{2}{*}{STL-10} & ResNet32 & 36.2 & **33.9** & 45.0 & **41.3** \\ \cline{2-6} & VGG16 & 22.9 & **18.7** & 39.6 & **34.5** \\ \hline \multirow{2}{*}{Tiny-ImageNet} & ResNet32 & 33.4 & **20.1** & 45.4 & **36.1** \\ \cline{2-6} & VGG16 & 22.9 & **15.6** & 50.2 & **40.8** \\ \hline \end{tabular} \end{table} Table 10: Impact of ROB on basic and adaptive ProFlip.

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**ASR (\%)**} & \multirow{2}{*}{**\(\Delta\) ACC (\%)**} \\ \cline{3-5} & & **BASE** & **Aegis** & **Shallow** & \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet32 & 96.9 & 38.4 & 40.5 & **-18.7** \\ \cline{2-6} & VGG16 & 88.2 & 43.6 & 49.5 & **-21.5** \\ \hline \multirow{2}{*}{CIFAR-100} & ResNet32 & 98.9 & 25.8 & 22.8 & **-6.2** \\ \cline{2-6} & VGG16 & 80.0 & 33.7 & 37.0 & **-8.8** \\ \hline \multirow{2}{*}{STL-10} & ResNet32 & 77.4 & 41.3 & 48.4 & **-23.1** \\ \cline{2-6} & VGG16 & 87.2 & 34.5 & 51.7 & **-8.9** \\ \hline \multirow{2}{*}{Tiny-ImageNet} & ResNet32 & 99.1 & 36.1 & 30.3 & **-27.8** \\ \cline{2-6} & VGG16 & 88.2 & 40.8 & 51.3 & **-10.4** \\ \hline \end{tabular} \end{table} Table 9: Evaluation results of attacking shallow layers.

### Attack Feasibility Analysis

We validate the feasibility of BFAs (more specifically, the TBT attack), using ResNet32 and Tiny-ImageNet as an example. The main idea is to adopt the DeepSteal [44] technique, which provides a memory massaging mechanism to realize the Rowhammer [29] attack and is able to flip multiple bits within a \(4\)KB page. This satisfies TBT's requirement, which targets one row of weights in the model's final layer and needs to flip multiple bits within a \(4\)KB page. This mechanism massages a page multiple times using memory swapping (the feature of swapping physical pages from DRAM to the disk swap space under memory pressure and then swapping them back when needed by the processor). Below we describe the details.

_Step 1: evicting victim pages._ This step aims to evict the victim's pages from the main memory to swap space such that they can be relocated by the OS when they are accessed by the victim next time. To accomplish this, adversaries first allocate a large chunk of memory using mmap with the MAP_POPULATE flag. This triggers the OS to evict other data (including the victim's pages) from the main memory to the swap space. Thus, we can occupy most of the physical memory space, with the victim pages stored in swap space.

_Step 2: releasing pages._ This step aims to systematically release the occupied pages to enforce the desired relocation of the victim's pages. In detail, adversaries create a list of potential pages for the victim to occupy as aggressors during the attack.
At each round, adversaries choose a predetermined number of pages from the list and release the selected pages by calling munmap.

_Step 3: deterministic relocation._ This step aims to place victim pages in predetermined locations to create an appropriate memory layout for Rowhammer. It also ensures that the victim page locations are known to the adversaries, so that they can correlate flipped bits with the exact data in the victim domain. Adversaries follow DeepSteal [44] to exploit the per-core page-frame cache structure to manipulate the operating system's page allocation, which allows them to control where the victim pages are relocated.

After the above steps, adversaries mount Rowhammer [29] to flip bits in the victim pages placed in the appropriate locations. Since TBT targets one row of weights in a model's final layer, which requires flipping multiple bits within a 4KB page, adversaries iterate the aforementioned steps until all target bits are flipped. Note that after each iteration, the weight page (with bit flips) will be swapped to the disk under memory pressure. When this page is needed again, it is swapped back. As a consequence, bit flips will occur with this operation. When the weight page is swapped back, it has a high probability of being placed into a new location in memory where a different bit can be flipped. Then adversaries perform Rowhammer again on this page. Adversaries iterate the entire process until all the required bits are flipped.

**Evaluation results.** Using the above technique, adversaries are able to flip 10 bits in the target model to achieve TBT. The indices of the flipped bits can be found in Appendix E. For base models, flipping these bits can achieve an ASR of 77.8%. With our Aegis, the TBT attack can only reach an ASR of 2.0%. This confirms the effectiveness of Aegis. We further measure the attack cost. Flipping one bit takes tens or even hundreds of seconds. Therefore, flipping more bits requires a much higher attack cost. DeepHammer [64] assumes that the maximum number of bits the adversary is allowed to flip is 24. To highlight the effectiveness of our defense, we consider a more powerful adversary who is allowed to flip \(N_{b}=50\) bits (taking several hours) in Section 5.3. Our Aegis is still effective against such an attack. Furthermore, we assume an unrealistically strong attack (\(N_{b}=500\)), and Aegis is still able to defeat it. Figures 4 and 6 show the evaluation results for non-adaptive and adaptive attacks under this attack cost.

## 6 Discussion and Future Work

**Difference between ROB and adversarial training.** Both adversarial training [41] and our ROB aim to improve model robustness by modifying model parameters, but they are significantly different. Adversarial training aims to specifically defeat adversarial attacks by using perturbed samples generated from a clean model. Various previous works [19, 45] have proved that directly using adversarial training cannot mitigate existing BFAs. In contrast, ROB improves robustness by considering a compromised model. By considering the effects brought by flipping critical bits of a target model, ROB can particularly increase the resistance against BFAs.

**Floating-point DNN models.** To the best of our knowledge, all state-of-the-art BFAs [4, 46, 8] only focus on attacking quantized models. Therefore, we follow these existing works and evaluate our defense methods on quantized models in this paper.
In fact, since Aegis operates in a non-intrusive fashion, protecting floating-point DNN models is also feasible. We leave this experimentation as future work.

**Potential defense effects against untargeted BFAs.** As indicated in Section 3, mitigating untargeted BFAs is beyond the scope of this paper. However, we still evaluate Aegis against a state-of-the-art untargeted BFA [45] for comprehensiveness. As shown in Appendix D, the results show that Aegis can also mitigate untargeted BFAs effectively by significantly increasing the attack cost (i.e., the number of flipped bits). We consider this as future work.

Figure 6: On CIFAR-100 and VGG16, we compare Aegis with other defense methods under different values of \(N_{b}\). Even if the adversary significantly increases \(N_{b}\), Aegis still outperforms the others in the adaptive scenarios.

## 7 Conclusion

We propose Aegis, a novel mitigation methodology against targeted bit-flip attacks. With the novel design of DESDN, we randomly select ICs for inference, enabling input samples to exit early from them and effectively obfuscating the adversary. We further propose ROB to improve IC robustness. We conduct extensive experiments with four mainstream datasets and two DNN structures to show that Aegis can mitigate various state-of-the-art targeted attacks as well as their adaptive versions, and significantly outperforms existing defense methods.

## Acknowledgments

This work is supported by the National Key R&D Program of China (2022YFB3105202), National Natural Science Foundation of China (62106127, 62132011, 61972224), NSFOCUS (2022671026), Singapore Ministry of Education (MOE) AcRF Tier 2 MOE-T2EP20121-0006, and Ant Group through CCF-Ant Innovative Research Program No. RF2021002.
2305.17939
Fourier Analysis on Robustness of Graph Convolutional Neural Networks for Skeleton-based Action Recognition
Using Fourier analysis, we explore the robustness and vulnerability of graph convolutional neural networks (GCNs) for skeleton-based action recognition. We adopt a joint Fourier transform (JFT), a combination of the graph Fourier transform (GFT) and the discrete Fourier transform (DFT), to examine the robustness of adversarially-trained GCNs against adversarial attacks and common corruptions. Experimental results with the NTU RGB+D dataset reveal that adversarial training does not introduce a robustness trade-off between adversarial attacks and low-frequency perturbations, which typically occurs during image classification based on convolutional neural networks. This finding indicates that adversarial training is a practical approach to enhancing robustness against adversarial attacks and common corruptions in skeleton-based action recognition. Furthermore, we find that the Fourier approach cannot explain vulnerability against skeletal part occlusion corruption, which highlights its limitations. These findings extend our understanding of the robustness of GCNs, potentially guiding the development of more robust learning methods for skeleton-based action recognition.
Nariki Tanaka, Hiroshi Kera, Kazuhiko Kawamoto
2023-05-29T08:04:04Z
http://arxiv.org/abs/2305.17939v2
Fourier Analysis on Robustness of Graph Convolutional Neural Networks for Skeleton-based Action Recognition ###### Abstract Using Fourier analysis, we explore the robustness and vulnerability of graph convolutional neural networks (GCNs) for skeleton-based action recognition. We adopt a joint Fourier transform (JFT), a combination of the graph Fourier transform (GFT) and the discrete Fourier transform (DFT), to examine the robustness of adversarially-trained GCNs against adversarial attacks and common corruptions. Experimental results with the NTU RGB+D dataset reveal that adversarial training does not introduce a robustness trade-off between adversarial attacks and low-frequency perturbations, which typically occurs during image classification based on convolutional neural networks. This finding indicates that adversarial training is a practical approach to enhancing robustness against adversarial attacks and common corruptions in skeleton-based action recognition. Furthermore, we find that the Fourier approach cannot explain vulnerability against skeletal part occlusion corruption, which highlights its limitations. These findings extend our understanding of the robustness of GCNs, potentially guiding the development of more robust learning methods for skeleton-based action recognition. Skeleton-based action recognition Graph convolutional neural network Adversarial robustness Fourier analysis ## 1 Introduction In skeleton-based action recognition, graph convolutional neural networks (GCNs) exhibit remarkable performance due to their ability to represent skeletal motion inputs using topological graphs [1, 2, 3, 4, 5, 6]. However, recent studies have revealed that GCNs are vulnerable to adversarial attacks [7, 8, 9, 10] and common corruptions, such as Gaussian noise. This finding emphasizes the need to ensure robustness in real-world applications. To address these issues, other recent studies have proposed methods to improve the robustness of GCNs [11, 12, 13, 14, 15]. These vulnerabilities also imply that GCNs learn different features from humans, highlighting the need for a deeper understanding of their properties to develop robust models. Recent studies in image classification have uncovered interesting properties of convolutional neural networks (CNNs) using Fourier analysis [16, 17, 18, 19, 20, 21]. For example, Yin et al. [16] discovered that adversarial perturbations for standard-trained CNNs are concentrated in the high-frequency domain, whereas those for adversarially-trained CNNs are concentrated in the low-frequency domain. They also revealed that adversarial training encourages CNNs to capture low-frequency features of images, resulting in a trade-off between robustness to low-frequency and high-frequency perturbations. More specifically, adversarial training can enhance robustness to high-frequency corruptions such as Gaussian noise while degrading robustness to low-frequency corruptions such as fog corruption. Saikia et al. [22] leveraged the trade-off to propose a method for improving robustness by combining separate models that were respectively robust to low- and high-frequency perturbations. Furthermore, Zhuang et al. [21] observed that CNNs robust against common corruptions tend to rely more on low-frequency features than standard-trained models. These studies demonstrated the effectiveness of Fourier analysis in understanding the robustness of CNNs. Inspired by these prior efforts, we examine the robustness of skeleton-based action recognition using Fourier analysis. 
Unlike image classification, where the 2D discrete Fourier transform (DFT) can be used for Fourier analysis, skeleton-based action recognition requires an alternate approach owing to the graph-based representation of the skeletal data. Therefore, we adopt a joint Fourier transform (JFT) [23] that combines the graph Fourier transform (GFT) and DFT, as shown in Fig. 1. By applying the GFT to each frame of the skeletal sequence data and then using the DFT to each graph-frequency component, we analyze the frequencies of the skeletal sequence data along the spatial (graph-frequency) and temporal (time-frequency) directions. This method enables us to explore the difference between CNN-based image classification and GCN-based skeletal action recognition from a Fourier perspective. In experiments with the NTU RGB+D dataset [24], we apply Fourier analysis to compare standard-trained and adversarially-trained GCNs. Our observations reveal that there is no robustness trade-off between adversarial attacks and low-frequency perturbations for GCN-based skeletal action recognition. This finding is interesting because such a trade-off is typically observed in image classification [16; 20]. Furthermore, we explore the robustness against common corruptions, such as Gaussian noise and part occlusion, and find that an experimental result for the case of part occlusion cannot be explained by Fourier analysis alone. The contributions of this study are as follows. * A novel application of Fourier analysis is presented to GCN-based skeletal-based action recognition. Specifically, for the first time, we analyze the frequency characteristics of adversarially-trained GCNs against adversarial attacks and common corruptions using the joint Fourier transform. * Experimental findings indicate that there is no robustness trade-off between adversarial attacks and low-frequency perturbations for skeleton-based action recognition, which is unique in CNN-based image classification. * Challenges are revealed in comprehensively explaining the robustness of skeleton-based action recognition using Fourier analysis. Specifically, Fourier analysis cannot explain vulnerability against part occlusion corruptions. The remainder of this paper is organized as follows. Section 2 presents related work on the robustness of skeleton-based action recognition and Fourier analysis approaches in deep learning. Section 3 describes a Fourier analysis method for GCNs. Section 4 presents the experimental results. ## 2 Related work This section reviews related work on the robustness of skeleton-based action recognition and the Fourier analysis on robustness of deep models against adversarial attacks and common corruptions. ### Robustness of Skeleton-based Action Recognition Skeletal motion data offer several advantages over RGB videos, such as robustness to changes in clothing and background and superior computational efficiency [25; 26]. Due to the nature of the skeletal structure, GCNs have been extensively used for skeleton-based action recognition [1; 2; 3; 4; 5; 6]. Recent research has explored adversarial attacks [27; 28] on GCNs for skeleton-based action recognition and revealing the vulnerabilities of GCNs [7; 8; 9; 10]. Liu et al. [10] presented the first white-box adversarial attack on GCNs. Figure 1: Flow of the joint Fourier transform (JFT) on skeletal sequence data, which encompasses both the graph Fourier transform (GFT) and the discrete Fourier transform (DFT). 
As Figure 1 illustrates, the GFT is first applied to the skeletal data at each frame, followed by the DFT. The attack of Liu et al. [10] perturbs joint locations while preserving temporal coherence, spatial integrity, and anthropomorphic plausibility. Diao et al. [7] proposed a guided manifold walk method to search for adversarial examples in the black-box setting. They also considered the naturalness and imperceptibility of perturbed skeletons. Wang et al. [8] proposed an attack method in both white- and black-box settings and highlighted the importance of considering motion dynamics in analyzing imperceptible adversarial attacks on 3D skeletal motion. Tanaka et al. [9] indicated that an adversarial attack was possible by only changing the lengths of the bones. All of these studies demonstrate that GCNs are vulnerable to imperceptible adversarial perturbations. In contrast to these studies on attacks, research on defense against adversarial attacks for skeleton-based action recognition remains in its infancy. Wang et al. [29] proposed a Bayesian defense framework based on adversarial training, which is the most effective defense technique against adversarial attacks. Their study demonstrated that adversarial training is effective not only for CNN-based image classification, but also for GCN-based skeletal action recognition. Nevertheless, existing studies on adversarial robustness are limited and not sufficiently comprehensive. Even so, vulnerability to common corruptions has been explored in several studies. For example, robustness against Gaussian noise [11], part occlusion [12; 13; 15], and frame occlusion and jittering noise [13] has been explored. Further investigation into the robustness against both common corruptions and adversarial attacks is essential for enhancing the applicability of deep models in real-world scenarios.

### Fourier analysis of CNN-based image classification

In image classification, recent studies have attempted to explain the robustness of CNNs using frequency analysis [16; 17; 18; 19; 20; 21]. Yin et al. [16] demonstrated that standard-trained CNNs predominantly depend on high-frequency components for image classification, while adversarial training encourages CNNs to capture low-frequency components of images. Their further investigation revealed that adversarial training enhances robustness against common high-frequency corruptions, such as Gaussian noise, while degrading robustness against common low-frequency corruptions, such as fog corruption. These findings indicate the existence of a trade-off between robustness against high-frequency and low-frequency perturbations. Wang et al. [17] established that smoothing the convolutional kernels at the first layer, thereby encouraging CNNs to ignore high-frequency components, can improve adversarial robustness. Bernhard et al. [18] observed that the frequency characteristics of adversarial robustness may depend on the dataset. Abello et al. [19] investigated the high-frequency biases of standard-trained CNNs. Chan et al. [20] explored these trade-offs by directly changing the frequency profile of the models. Zhuang et al. [21] identified harmful frequencies for robustness to common corruptions and proposed a method to ignore these harmful frequency components. These studies indicate that it is beneficial to use Fourier analysis to understand the robustness of CNNs. However, the frequency analysis of GCN-based skeleton action recognition remains unexplored.
## 3 Fourier Analysis for Skeleton-based Action Recognition Here, we investigate the frequency responses of GCNs for skeleton action recognition to reveal their robustness against adversarial attacks and common corruptions. Rather than using the 2D discrete Fourier transform (DFT), we employ the joint Fourier transform (JFT), which is a frequency analysis tool for time-varying graph signals [23]. Furthermore, we use the Fourier heatmap [16] to visualize the sensitivity of deep models in the frequency domain. ### Spatiotemporal Graph for Skeletal Sequence Data A skeleton sequence for a single person is represented by an undirected graph \(G=(V,E)\), where \(V\) is a set of nodes and \(E\) is a set of edges. The node set \(V\) consists of all \(N\) skeleton joints over the \(T\) frames, i.e., \(V\) includes \(N\times T\) nodes, and the edge set \(E\) indicates the intrabody connections of the skeleton. Each node \(v_{i}(t)\in V\) is associated with a feature vector of the corresponding joint position \(\mathbf{x}_{i}(t)=(x_{i}(t),y_{i}(t),z_{i}(t))^{\top}\in\mathbb{R}^{3}\) at frame \(t\). For example, when the joint positions themselves are used, the node feature is \(\mathbf{x}_{i}(t)\). Other features, such as joint motion \(\mathbf{v}_{i}(t)=\mathbf{x}_{i}(t+1)-\mathbf{x}_{i}(t)\), bone \(\mathbf{b}_{i}(t)=\mathbf{x}_{i}(t)-\mathbf{x}_{j}(t)\) (\(v_{i}\) and \(v_{j}\) are a pair of connected joints), and bone motion \(\mathbf{v}_{i}^{\mathrm{b}}(t)=\mathbf{b}_{i}(t+1)-\mathbf{b}_{i}(t)\), have also been used [3; 4; 5]. The set of such feature vectors is represented by \[\mathbf{X}=\begin{pmatrix}f_{1}(1)&f_{1}(2)&\dots&f_{1}(T)\\ f_{2}(1)&f_{2}(2)&\dots&f_{2}(T)\\ \vdots&\vdots&\ddots&\vdots\\ f_{N}(1)&f_{N}(2)&\dots&f_{N}(T)\end{pmatrix}\in\mathbb{R}^{N\times T}, \tag{1}\] where \(f_{i}(t)\) is an \(x\), \(y\), or \(z\) coordinate element of the feature vector associated with \(v_{i}(t)\). We omit the dimension of the three-axis channels for simplicity, although the related skeleton sequence is represented by an \(N\times T\times 3\) tensor. The skeleton sequence is processed independently for each channel. To normalize the frame lengths of all skeletal data, linear interpolation along the temporal direction is usually used [4, 5, 6]. We denote the interpolated data for \(\mathbf{X}\) by \(I(\mathbf{X})\in\mathbb{R}^{N\times T^{\prime}}\), where \(T^{\prime}\) is set to \(64\) in our experiments.
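For concreteness, a minimal NumPy sketch of the four node features and the temporal interpolation \(I(\mathbf{X})\) is given below. The `parents` array, which maps each joint to its connected joint for the bone feature, is an assumption (it is dataset-specific and not given in the paper).

```python
import numpy as np

def features(x, parents):
    """x: joint positions of shape (N, T, 3); parents[i] is the joint that
    bone i connects to (dataset-specific, assumed given). Motion features are
    one frame shorter; in practice they would be zero-padded."""
    joint = x                                  # positions themselves
    joint_motion = x[:, 1:] - x[:, :-1]        # v_i(t) = x_i(t+1) - x_i(t)
    bone = x - x[parents]                      # b_i(t) = x_i(t) - x_j(t)
    bone_motion = bone[:, 1:] - bone[:, :-1]   # v^b_i(t) = b_i(t+1) - b_i(t)
    return joint, joint_motion, bone, bone_motion

def interpolate(x, t_out=64):
    """Linear interpolation I(X) along time: (N, T, 3) -> (N, t_out, 3)."""
    n, t, c = x.shape
    src = np.linspace(0.0, t - 1.0, t_out)
    out = np.empty((n, t_out, c))
    for ch in range(c):
        for i in range(n):
            out[i, :, ch] = np.interp(src, np.arange(t), x[i, :, ch])
    return out
```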
### Standard & Adversarial Training Let \(\mathcal{M}_{\theta}\) be a GCN parameterized by \(\theta\). Standard training for \(\mathcal{M}_{\theta}\) is performed by solving the following minimization problem \[\min_{\theta}\mathbb{E}_{(\mathbf{X},y)\sim\mathcal{D}}[\mathcal{L}(\mathcal{M}_{\theta}(I(\mathbf{X}),y))], \tag{2}\] where \(\mathcal{L}(\cdot)\) is the cross-entropy loss function and \(\mathcal{D}\) is an underlying data distribution over pairs of interpolated skeletal data \(I(\mathbf{X})\) and action labels \(y\). Adversarial training in this study is based on the projected gradient descent (PGD) [28], which is a typical and strong attack method. The PGD uses a set of perturbations \(S=\{\mathbf{\delta}\mid\left\lVert\mathbf{\delta}\right\rVert_{p}\leq\epsilon\}\), where \(\left\lVert\mathbf{\delta}\right\rVert_{p}\) is the \(l_{p}\) norm of \(\mathbf{\delta}\in\mathbb{R}^{N\times T}\) (we set \(p=2\)) and \(\epsilon>0\) denotes the supremum of the perturbation norm. Adversarial training for \(\mathcal{M}_{\theta}\) with the PGD is performed by solving the following min-max optimization problem [28] \[\min_{\theta}\mathbb{E}_{(\mathbf{X},y)\sim\mathcal{D}}\Bigl{[}\max_{\mathbf{\delta}_{\mathrm{adv}}\in S}\mathcal{L}(\mathcal{M}_{\theta}(I(\mathbf{X}+\mathbf{\delta}_{\mathrm{adv}}),y))\Bigr{]}, \tag{3}\] where linear interpolation \(I(\cdot)\) is repeatedly applied to adversarially perturbed data \(\mathbf{X}+\mathbf{\delta}_{\mathrm{adv}}\) in the optimization loop. ### Discrete & Graph Fourier Transforms The DFT and GFT form the basis for the JFT on the spatiotemporal graph \(G\), as is discussed in the following subsection. Additionally, the DFT and GFT are used to generate low-frequency or high-frequency Gaussian noise signals, which are used to validate the trade-off in robustness against low-frequency and high-frequency corruptions, as discussed in Section 4.2.3. We first define the GFT. The GFT on \(\mathbf{X}\) is defined by \[\text{GFT}(\mathbf{X})=\mathbf{U}^{\top}\mathbf{X}, \tag{4}\] where \(\mathbf{U}\in\mathbb{R}^{N\times N}\) is the eigenvector matrix of the Laplacian matrix \(\mathbf{L}\) of the spatial subgraph of \(G\). The Laplacian matrix is defined by \(\mathbf{L}=\mathbf{D}-\mathbf{A}\in\mathbb{R}^{N\times N}\), where \(\mathbf{A}\) and \(\mathbf{D}\) denote the adjacency and degree matrices of the spatial skeletal structure, respectively. Let \(\lambda_{k}\in\mathbb{R}\) and \(\mathbf{u}_{k}\in\mathbb{R}^{N}\) be the \(k\)-th eigenvalue and eigenvector of \(\mathbf{L}\). The eigenvector matrix \(\mathbf{U}\) is obtained by applying the eigen-decomposition to \(\mathbf{L}\) as follows \[\mathbf{L}=\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{\top}, \tag{5}\] where \(\mathbf{U}=[\mathbf{u}_{1},\dots,\mathbf{u}_{N}]\) and \(\mathbf{\Sigma}=\mathrm{diag}(\lambda_{1},\dots,\lambda_{N})\). Here, the eigenvalues are sorted as \(\lambda_{1}\geq\lambda_{2}\geq\dots>\lambda_{N}=0\), where the larger the eigenvalue is, the higher the frequency. The DFT on \(\mathbf{X}\) is defined by \[\text{DFT}(\mathbf{X})=\mathbf{X}\mathbf{W}, \tag{6}\] where \(\mathbf{W}\in\mathbb{R}^{T\times T}\) is the DFT matrix, defined by \[\mathbf{W}=\begin{pmatrix}1&1&1&\dots&1\\ 1&\omega&\omega^{2}&\dots&\omega^{T-1}\\ 1&\omega^{2}&\omega^{4}&\dots&\omega^{2(T-1)}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&\omega^{T-1}&\omega^{2(T-1)}&\dots&\omega^{(T-1)(T-1)}\end{pmatrix}, \tag{7}\] where \(\omega=e^{-2\pi j/T}\) and \(j^{2}=-1\). We generate low-frequency or high-frequency Gaussian noise that is added to skeletal data \(\mathbf{X}\) using the GFT and DFT. Let \(\mathbf{V}\) be the Gaussian white noise on \(\mathbb{R}^{N\times T}\), where each element of \(\mathbf{V}\) is independently sampled from a zero-mean Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\). Spatial filtering or graph spectral filtering on \(\mathbf{V}\) is defined by \[\widetilde{\mathbf{V}_{\mathrm{s}}}=\mathbf{M}_{\mathrm{s}}\text{GFT}(\mathbf{V}), \tag{8}\] where \(\widetilde{\mathbf{V}_{\text{s}}}\) is the filtering output in the spatial frequency domain and \(\mathbf{M}_{\text{s}}=\operatorname{diag}(m_{\text{s},1},m_{\text{s},2},\ldots,m_{\text{s},N})\) is the \(N\times N\) binary diagonal matrix for spatial filtering. For instance, if \(\mathbf{M}_{\text{s}}=\operatorname{diag}(0,0,\ldots,0,1,1)\), as shown in Fig. 2 (leftmost), a low-frequency Gaussian noise with a bandwidth of 2 is generated by applying the inverse GFT to \(\widetilde{\mathbf{V}_{\text{s}}}\). If \(\mathbf{M}_{\text{s}}=\operatorname{diag}(1,1,0,\ldots,0)\), as shown in Fig. 2 (left second), a high-frequency Gaussian noise with a bandwidth of 2 is generated. Similarly, temporal filtering on \(\mathbf{V}\) is defined by \[\widetilde{\mathbf{V}_{\text{t}}}=\text{DFT}(\mathbf{V})\mathbf{M}_{\text{t}}, \tag{9}\] where \(\widetilde{\mathbf{V}_{\text{t}}}\) is the filtering output in the temporal frequency domain and \(\mathbf{M}_{\text{t}}=\operatorname{diag}(m_{\text{t},1},m_{\text{t},2},\ldots,m_{\text{t},T})\) is the \(T\times T\) binary diagonal matrix for temporal filtering. Similar to the GFT, \(\mathbf{M}_{\text{t}}=\operatorname{diag}(0,0,\ldots,0,1,1)\) and \(\mathbf{M}_{\text{t}}=\operatorname{diag}(1,1,0,\ldots,0)\) generate low-frequency and high-frequency Gaussian noises with a bandwidth of 2, as shown in Fig. 2 (right second) and (rightmost), respectively.
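A minimal NumPy sketch of the GFT, the JFT, and the band-limited Gaussian noise generation above is given below, assuming the adjacency matrix of the spatial skeleton is given; the function names are illustrative.

```python
import numpy as np

def gft_basis(adj):
    """Eigen-decomposition of L = D - A (Eq. (5)), sorted by descending
    eigenvalue so that a larger eigenvalue means a higher graph frequency."""
    lap = np.diag(adj.sum(axis=1)) - adj
    eigval, eigvec = np.linalg.eigh(lap)        # ascending order
    order = np.argsort(eigval)[::-1]            # reorder to descending
    return eigval[order], eigvec[:, order]      # Sigma, U

def jft(x, u):
    """Joint Fourier transform, Eq. (10): JFT(X) = U^T X W; the DFT matrix W
    is applied via the FFT along the time axis."""
    return np.fft.fft(u.T @ x, axis=1)

def bandlimited_noise(u, n, t, band=2, low=True, sigma=1.0, rng=None):
    """Spatially filtered Gaussian noise (Eq. (8)): keep `band` graph
    frequencies (the lowest ones if low=True) and invert the GFT."""
    rng = np.random.default_rng() if rng is None else rng
    v = sigma * rng.standard_normal((n, t))
    mask = np.zeros(n)
    if low:
        mask[-band:] = 1.0   # eigenvalues sorted descending: last = lowest
    else:
        mask[:band] = 1.0
    return u @ (mask[:, None] * (u.T @ v))      # inverse GFT of M_s GFT(V)
```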
### Joint Fourier Transform and Fourier Heatmap We adopt the JFT [23] for Fourier analysis on the spatiotemporal graph \(G\), which encompasses both the GFT and DFT. Initially, the GFT is implemented on the spatial subgraph of \(G\) for each frame to extract spatial frequency characteristics, followed by the application of the DFT to extract temporal frequency characteristics of \(G\), as shown in Fig. 1. The JFT on \(\mathbf{X}\) is defined by combining the GFT and DFT as follows: \[\text{JFT}(\mathbf{X})=\mathbf{U}^{\top}\mathbf{X}\mathbf{W}. \tag{10}\] For Fourier analysis on skeletal data, we compute the average spectrum over all test data of \(\mathbf{X}\) using the JFT, and visualize these values using the heatmaps shown in Fig. 3, where the horizontal and vertical axes represent temporal and spatial frequencies, respectively. Additionally, we employ Fourier heatmaps [16] to visualize the sensitivity of the GCNs for specific frequency signals. A signal of spatial frequency \(\lambda_{k}\) (the \(k\)-th eigenvalue of \(\mathbf{L}\)) and temporal frequency \(l/T\), denoted by \(\mathbf{F}_{k,l}\in\mathbb{R}^{N\times T}\), is generated by setting the JFT of \(\mathbf{F}_{k,l}\) to \[\text{JFT}(\mathbf{F}_{k,l})=\begin{pmatrix}0&\dots&0&\dots&0\\ \vdots&\ddots&\vdots&&\vdots\\ 0&\dots&1&\dots&0\\ \vdots&&\vdots&\ddots&\vdots\\ 0&\dots&0&\dots&0\end{pmatrix}, \tag{11}\] where the right-hand side matrix takes \(1\) for only the \((k,l)\)-element and \(0\) otherwise. Using \(\mathbf{F}_{k,l}\), a perturbed signal of a skeleton sequence \(\mathbf{X}\) is generated by \[\mathbf{X}^{\prime}=I(\mathbf{X})+rv\mathbf{F}_{k,l}, \tag{12}\] where \(r\) is sampled uniformly at random from \(\{-1,1\}\), \(v>0\) is the norm of the perturbation, and \(\mathbf{F}_{k,l}\) is normalized as \(||\mathbf{F}_{k,l}||_{2}=1\). The Fourier heatmap is generated by plotting the average error rate of 1000 randomly sampled test data for every frequency \(k=1,2,\ldots,N\) and \(l=1,2,\ldots,T\). The closer to red a region of the Fourier heatmap is, the more sensitive the GCNs are at the corresponding frequency. In other words, to recognize human actions, GCNs capture skeletal signals that contain such sensitive frequencies. The high sensitivity brings vulnerability to the GCNs. Figure 2: Spatial low-pass (leftmost) and high-pass (left second) filtering are performed by masking the Fourier spectrum along the spatial frequency axis. Temporal low-pass (right second) and high-pass (rightmost) filtering are performed by masking the Fourier spectrum along the temporal frequency axis. Figure 3: Average Fourier spectrum over all test skeleton data for joint, joint motion, bone, and bone motion features. The vertical and horizontal axes represent spatial and temporal frequency, respectively. From these figures, for example, we can see the Fourier spectrum of the joint feature (leftmost) is concentrated at low frequencies in the spatiotemporal frequency domain.
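A minimal sketch of Eqs. (11) and (12) follows, reusing the GFT basis `u` from the sketch above; the real part is taken to obtain a real-valued perturbation, which is an implementation assumption.

```python
import numpy as np

def fourier_basis(u, k, l, t):
    """F_{k,l} of Eq. (11): inverse JFT of a spectrum that is 1 only at the
    (k, l)-element, normalized so that ||F_{k,l}||_2 = 1."""
    spec = np.zeros((u.shape[0], t), dtype=complex)
    spec[k, l] = 1.0
    f = np.real(u @ np.fft.ifft(spec, axis=1))   # invert the DFT, then the GFT
    return f / np.linalg.norm(f)

def perturb(x, f, v, rng):
    """Eq. (12): X' = I(X) + r * v * F_{k,l}, with r uniform on {-1, +1};
    x is one coordinate channel of the interpolated sequence, shape (N, T')."""
    r = rng.choice([-1.0, 1.0])
    return x + r * v * f
```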
## 4 Experiment Using frequency analysis, we conduct a comparative analysis of the robustness of the standard-trained and adversarially-trained GCNs. First, we investigate the robustness and vulnerability in the frequency domain using Fourier heatmaps. The visualization provides basic insights into the robustness of adversarial training. Next, we analyze the spectral distributions of adversarial perturbations to the GCNs. This analysis reveals the frequency characteristics of adversarial attacks. Furthermore, we explore whether the robustness trade-off that has been established in CNN-based image classification exists similarly in GCN-based skeletal action recognition. Finally, we evaluate the robustness of the GCNs against common corruptions. ### Experimental Setting #### 4.1.1 Dataset We conduct our experiments on NTU RGB+D [24], which contains 56,880 skeletal data with 60 action classes. We divide the whole data into training and test data according to the subjects (cross-subject setting). The training and test datasets comprise 40,320 and 16,560 samples, respectively. For the validation data, we randomly sample 5% from the training data. Figure 4: Adversarial examples generated by the \(l_{2}\)-PGD. Clean (blue) and adversarial (red) examples are superimposed for three example actions. These two almost overlap and are highly imperceptible. For each action, action labels before and after adversarial attacks are provided. #### 4.1.2 Model We train ST-GCNs [1] and TCA-GCNs [5] for the joint \(\mathbf{x}_{i}(t)\), joint motion \(\mathbf{v}_{i}(t)\), bone \(\mathbf{b}_{i}(t)\), and bone motion \(\mathbf{v}_{i}^{\mathrm{b}}(t)\) features, using the official codes provided by those authors. To guarantee convergence in training, we execute at least 80 epochs for the ST-GCN and 75 epochs for the TCA-GCN and adopt early stopping with patience 20 for both models. For the other hyperparameters, we use the same ones as in the official codes. #### 4.1.3 Adversarial Attack We use the \(l_{2}\)-PGD algorithm to evaluate adversarial robustness. To normalize the attack strength to each piece of skeletal data, we set the perturbation threshold \(\epsilon\) as \(\epsilon=l_{\mathrm{head}}\times\epsilon_{\mathrm{head}}\), where \(l_{\mathrm{head}}\) is the head length of each skeletal datum. Fig. 4 displays adversarial examples when \(\epsilon_{\mathrm{head}}=1.0,3.0,5.0\). We do not impose any naturalness constraints on the attacker, except for the perturbation norm. Nevertheless, we observe that adversarial attacks are highly imperceptible because the linear interpolation in Section 3.1 suppresses large jittering. #### 4.1.4 Adversarial Training Adversarial training [28] is an effective method for defending against adversarial attacks. We use Free [30] for adversarial training due to its computational efficiency. For Free, the number of hop steps is set to four, and the threshold is set to \(\epsilon_{\mathrm{head}}=3\).
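For reference, a minimal PyTorch sketch of the \(l_{2}\)-PGD attack with the head-length-scaled budget described in Section 4.1.3 is given below; the model interface and the step size \(\epsilon/4\) per iteration are assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def l2_pgd(model, x, y, head_len, eps_head=3.0, steps=10):
    """l_2-PGD on interpolated skeletal data x of shape (B, N, T');
    the per-sample budget is eps = l_head * eps_head."""
    eps = (head_len * eps_head).view(-1, 1, 1)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        g = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1)
        step = delta + (eps / 4) * grad / g            # normalized ascent step
        d = step.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1)
        step = step * (eps / d).clamp(max=1.0)         # project onto eps-ball
        delta = step.detach().requires_grad_(True)
    return (x + delta).detach()
```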
Table 1 shows the accuracies of standard-trained and adversarially-trained models. As is well established, adversarial training performs less effectively than standard training for clean data. To verify that adversarial training enhances adversarial robustness, we conduct preliminary experiments on the robustness of the standard-trained and adversarially-trained models with thresholds \(\epsilon_{\mathrm{head}}\in\{1.0,3.0,5.0\}\) and 10 iterations. Then, we attack each model by perturbing clean data correctly classified by both models to take the difference in clean accuracy into consideration. In Table 2, we can see that adversarial training demonstrates higher adversarial robustness than standard training. \begin{table} \begin{tabular}{c|c|c|c|c} \hline Model & Joint & Joint Motion & Bone & Bone Motion \\ \hline ST-GCN (ST) & **85.3**\% & **84.8**\% & **85.4**\% & **84.4**\% \\ ST-GCN (AT) & 79.8\% & 73.1\% & 79.7\% & 76.0\% \\ \hline TCA-GCN (ST) & **89.4**\% & **86.7**\% & **89.2**\% & **86.3**\% \\ TCA-GCN (AT) & 82.1\% & 77.2\% & 82.2\% & 78.4\% \\ \hline \end{tabular} \end{table} Table 1: Comparison of clean accuracy between standard-trained (ST) and adversarially-trained (AT) models. \begin{table} \begin{tabular}{c|c|c|c} \hline Model & \(\epsilon_{\mathrm{head}}=1\) & \(\epsilon_{\mathrm{head}}=3\) & \(\epsilon_{\mathrm{head}}=5\) \\ \hline ST-GCN ST (Joint) & 26.3\% & 6.00\% & 1.63\% \\ ST-GCN AT (Joint) & **74.6**\% & **61.2**\% & **47.7**\% \\ \hline TCA-GCN ST (Joint) & 16.7\% & 1.72\% & 0.19\% \\ TCA-GCN AT (Joint) & **76.8**\% & **64.6**\% & **52.4**\% \\ \hline ST-GCN ST (Joint Motion) & 9.30\% & 0.61\% & 0.06\% \\ ST-GCN AT (Joint Motion) & **64.2**\% & **48.3**\% & **35.8**\% \\ \hline TCA-GCN ST (Joint Motion) & 2.84\% & 0.04\% & 0.00\% \\ TCA-GCN AT (Joint Motion) & **69.0**\% & **51.6**\% & **36.5**\% \\ \hline ST-GCN ST (Bone) & 8.99\% & 0.35\% & 0.07\% \\ ST-GCN AT (Bone) & **71.5**\% & **52.5**\% & **32.9**\% \\ \hline TCA-GCN ST (Bone) & 8.35\% & 0.21\% & 0.01\% \\ TCA-GCN AT (Bone) & **76.1**\% & **60.8**\% & **44.5**\% \\ \hline ST-GCN ST (Bone Motion) & 0.87\% & 0.02\% & 0.00\% \\ ST-GCN AT (Bone Motion) & **62.2**\% & **40.5**\% & **24.5**\% \\ \hline TCA-GCN ST (Bone Motion) & 0.39\% & 0.03\% & 0.02\% \\ TCA-GCN AT (Bone Motion) & **61.3**\% & **46.5**\% & **31.8**\% \\ \hline \end{tabular} \end{table} Table 2: Comparison of adversarial accuracy between standard-trained (ST) and adversarially-trained (AT) models. #### 4.1.5 Evaluation Metric As shown in Table 1, there is a difference in clean accuracy between standard-trained and adversarially-trained models. For a fair comparison of robustness, we remove such a difference. More specifically, when we evaluate the robustness, we perturb clean data correctly classified by both models and use accuracy for these perturbed data as an evaluation metric. ### Results #### 4.2.1 Frequency Analysis of Adversarial Training We examine the frequency characteristics of standard and adversarial training using the Fourier heatmap. Fig. 5 shows Fourier heatmaps that plot the average error rates of 1000 randomly sampled test data points for every frequency. The top and bottom three rows display the Fourier heatmaps of the ST-GCNs and TCA-GCNs, respectively, for each of the four features (joint, joint motion, bone, bone motion) in Section 3.1 and three perturbation norms \(v\in\{0.5,1.5,3.0\}\) in Eq. (12). Fig. 5 reveals the following frequency characteristics. The standard-trained models are sensitive to temporal low-frequency perturbations (i.e., the left half of the maps).
In other words, the models do not capture high-frequency signals in time, whereas they do capture signals from low to high frequencies in space. The adversarially-trained models become more insensitive, especially to high-frequency perturbations (i.e., the upper right of the maps), resulting in robustness to high-frequency perturbations. Joint motion and bone motion features introduce more vulnerability against high-frequency perturbations than the joint and bone features, respectively. This vulnerability is reasonably attributed to motion (differential) features generally being more sensitive to high-frequency perturbations than positional features. These frequency characteristics do not depend on the network architecture, and the two GCNs have similar characteristics. These results show that adversarial training can improve the robustness in the higher frequencies, and the improvement is also observable in CNN-based image classification [16; 17; 19]. In contrast, the lower frequency characteristics of the CNNs and GCNs are slightly different. In CNN-based image classification, adversarial training sacrifices the robustness of the standard-trained models in the lowest frequencies (i.e., the lower left of the maps), whereas this is not always the case in GCN-based action recognition. For example, the adversarially-trained models learned with the joint and bone features do not become more vulnerable than the standard-trained models at the lowest frequencies. Figure 5: Fourier heatmaps of standard-trained (ST) and adversarially-trained (AT) GCNs. The top and bottom three rows display those of the ST-GCNs and TCA-GCNs, respectively, for each of the four features (joint, joint motion, bone, bone motion) and three perturbation norms \(v\in\{0.5,1.5,3.0\}\). #### 4.2.2 Frequency Analysis of Adversarial Attack The frequency characteristics of adversarial attacks are examined using the spectral distribution of successful adversarial attacks. Fig. 6 shows the spectral distributions, which are obtained by estimating the average amplitude of successful adversarial examples, i.e., \(\mathbb{E}[|\mathrm{JFT}(I(\mathbf{X}+\mathbf{\delta}_{\mathrm{adv}})-I(\mathbf{X}))|]\), where \(\mathbf{X}\) is clean skeletal data and \(\mathbf{\delta}_{\mathrm{adv}}\) is an adversarial perturbation that successfully attacks a given GCN. Here, \(I(\cdot)\) is the linear interpolation operation in Section 3.1. Fig. 6 shows that the spectral distributions of the adversarial attacks on the standard-trained models are broadly distributed, whereas those on the adversarially-trained models are concentrated in the lower frequencies. These frequency characteristics indicate that the adversarially-trained models capture the features from the lower frequency signals better than the standard-trained models. This observation supports the hypothesis that adversarial training provides robustness against high-frequency perturbations, as discussed in the previous subsection. However, the adversarially-trained models with the bone motion feature remain vulnerable in the high-frequency domain (i.e., the rightmost column in Fig. 6). This phenomenon reasonably leads us to conclude that the bone motion feature contains high-frequency signals [19], as shown in Fig. 3 (i.e., the rightmost column).
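A minimal sketch of this spectral-distribution estimate is given below, reusing the `jft` helper sketched in Section 3; the iterable of clean/adversarial pairs is an assumption about how the attack results are stored.

```python
import numpy as np

def adv_spectrum(pairs, u):
    """Average |JFT(I(X + delta_adv) - I(X))| over successful attacks.
    `pairs` yields (clean_interp, adv_interp) arrays of shape (N, T');
    `u` is the GFT basis from gft_basis()."""
    acc, count = None, 0
    for clean, adv in pairs:
        spec = np.abs(jft(adv - clean, u))
        acc = spec if acc is None else acc + spec
        count += 1
    return acc / count
```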
#### 4.2.3 Robustness Trade-off between High-Frequency and Low-Frequency Perturbations We examine whether a trade-off exists where adversarially-trained models are robust to high-frequency perturbations but highly vulnerable to low-frequency perturbations. This examination is motivated by the existence of the same trade-off in CNN-based image classification [16; 20]. The trade-off is evaluated by adding low- or high-frequency Gaussian noise, as described in Section 3.3, to the clean data that both models correctly classify. The norm of the Gaussian noise is adjusted such that the accuracy of either the standard-trained model or the adversarially-trained model is approximately 80%. This adjustment is required for a fair comparison because the appropriate norm differs between the four features. These norms are listed in Table 3 and denoted as scaled norms. In the following experiments, we use the ST-GCN because there is no significant difference in the frequency characteristics of the two architectures. First, we evaluate the accuracies of the standard-trained model (blue) and the adversarially-trained model (red), as shown in Fig. 7, for changes in the Gaussian noise norm while maintaining a fixed bandwidth of 2. The norm is changed at 20%, 40%, 60%, 80%, and 100% of the scaled norm. The top two and bottom two rows in Fig. 7 show the accuracies when perturbed with spatially or temporally filtered Gaussian noises, respectively. In all cases, the adversarially-trained model (red) is more robust than the standard-trained model (blue). Next, we evaluate the accuracies of the two models upon changing the Gaussian noise bandwidth while keeping the norm fixed at the scaled norm. The results are shown in Fig. 8 and indicate that the adversarially-trained model (red) is also more robust than the standard model. Finally, we evaluate the accuracies for changes in the spatially and temporally filtered Gaussian noise while keeping the norm fixed. Table 4 lists the scaled norms for this experiment, and Fig. 9 also shows that the adversarially-trained model (red) is again more robust. Figure 6: Average Fourier spectrum of adversarial perturbations in the spatiotemporal frequency domain. The heatmaps plot the estimation of \(\mathbb{E}[|\mathrm{JFT}(I(\mathbf{X}+\mathbf{\delta}_{\mathrm{adv}})-I(\mathbf{X}))|]\), where \(\mathbf{X}\) is the clean data and the expectation is taken over the adversarial examples that successfully attack each model. \begin{table} \begin{tabular}{c|c|c|c|c} \hline Filter & Joint & Joint Motion & Bone & Bone Motion \\ \hline Spatial Low & 17.0 & 3.50 & 6.40 & 1.10 \\ \hline Spatial High & 77.0 & 20.0 & 5.90 & 4.80 \\ \hline Temporal Low & 31.0 & 2.0 & 9.90 & 0.60 \\ \hline Temporal High & 77.0 & 19.0 & 24.0 & 14.0 \\ \hline \end{tabular} \end{table} Table 3: List of scaled norms for spatially or temporally filtered Gaussian noises. Figure 7: Accuracy comparison of standard-trained (blue) and adversarially-trained (red) GCNs for changes in the Gaussian noise norm with a fixed bandwidth of 2: (top) spatial low-pass filter, (second) spatial high-pass filter, (third) temporal low-pass filter, and (bottom) temporal high-pass filter.
#### 4.2.4 Robustness to Common Corruptions We evaluate the robustness to common corruptions. In image classification, common corruptions to image data include high-frequency corruptions, such as Gaussian noise and shot noise, and low-frequency corruptions, such as motion blur and fog [31]. The trade-off between accuracy and adversarial robustness makes CNN-based image classifiers robust to high-frequency corruptions but vulnerable to low-frequency corruptions. However, the experiment in the previous section demonstrates that such a trade-off does not exist for the GNCs. Therefore, adversarial training is expected to be more robust to common corruptions than standard training, regardless of its frequency spectra. We experimentally examine this conclusion based on the following three common corruptions for skeleton data, as shown in Fig. 10. * Gaussian noise: the skeletal data are perturbed by adding a zero-mean Gaussian noise with standard deviation \(\sigma=0.01,0.03,0.05\). * Frame loss: Each frame of the skeletal sequence data is randomly lost. Loss rate \(p\) is a uniform random number in the \([0,1]\) interval. The frame length is adjusted when input to the GCNs by linearly interpolating the lost frames. Figure 8: Accuracy comparison of standard-trained (blue) and adversarially-trained (red) GCNs for changes in the Gaussian noise bandwidth with the scaled norm in Table. 3. (top) spatial low-pass filter, (second) spatial high-pass filter, (third) temporal low-pass filter, and (bottom) temporal high-pass filter. * Part occlusion [12]: a part of the skeletal data is occluded. The skeletal data are divided into five parts: left arm (part 1), right arm (part 2), both hands (part 3), both legs (part 4), and torso (part 5), and either part coordinates are set to 0. To provide a Fourier analysis of the common corruptions, we compute their spectral distributions as shown in Figs. 11-13. These distributions are given by \(\mathbb{E}[|\mathrm{JFT}(I(C(\mathbf{X}))-I(\mathbf{X}))|]\), where \(\mathbf{X}\) is clean skeletal data and \(C(\cdot)\) is one of the three corruptions. Gaussian NoiseFig. 11 shows the average Fourier spectrum distributions for the four features, and Table 5 lists the accuracies of the standard-trained and adversarially-trained two GCNs with standard deviation \(\sigma=0.01,0.03,0.05\). As depicted in Fig. 11, the frequency characteristics of Gaussian noise corruption differ for the four features. For example, the joint feature largely includes temporal low-frequency signals (i.e., the left half of the map), whereas the joint motion feature includes temporal high-frequency signals (i.e., the right half of the map). For such corruptions, we predict which frequency bands are vulnerable from the Fourier heatmap in Fig. 5. For example, the Fourier heatmap of the joint feature, as shown in Fig. 5 (i.e., leftmost column) indicates that the standard-trained models are vulnerable in the temporal low-frequency domain, but the adversarially-trained models are more robust in all frequency domains. Therefore, adversarially-trained models are expected to provide better accuracies in all cases. Table 5 supports this prediction. The same holds for the other three features. Hence, adversarial training yields robustness to Gaussian noise corruptions for GCN-based skeletal action recognition. models should be more robust to the corruption. However, the results in Table 7 demonstrate that the standard-trained models are more robust to this scenario. 
A possible alternative explanation is that the part occlusion corruptions cause a portion of the skeletal signals to be missing, resulting in the loss of the features necessary for action recognition. The missing signal problem cannot be explained from the Fourier perspective, and other approaches need to be considered. ## 5 Conclusions This study examines the robustness of GCNs for skeleton-based action recognition against adversarial attacks and common corruptions, and reports a Fourier analysis of that robustness. We compute the average Fourier spectra of adversarial perturbations and common corruptions using the JFT, which is a combination of the GFT and DFT. Furthermore, the frequency effects of both standard and adversarial training are explored using the Fourier heatmap. Our experiments reveal that the standard-trained models are sensitive to high-frequency perturbations, and adversarial training suppresses this sensitivity and enhances robustness to high-frequency perturbations. While this robustness against high frequencies is also known in CNN-based image classification, our interesting finding is that the GCNs for skeleton-based action recognition do not suffer from a robustness trade-off between adversarial robustness and low-frequency perturbations (as CNNs do). However, when examining part occlusion corruptions, Fourier analysis performs poorly, evidencing its limitations. In future work, we will leverage our findings to further develop learning methods that improve the robustness of GCNs in skeleton-based action recognition. ## Acknowledgments This work was supported by JSPS KAKENHI Grant Number JP22H03658.
2303.12270
EBSR: Enhanced Binary Neural Network for Image Super-Resolution
While the performance of deep convolutional neural networks for image super-resolution (SR) has improved significantly, the rapid increase of memory and computation requirements hinders their deployment on resource-constrained devices. Quantized networks, especially binary neural networks (BNN) for SR have been proposed to significantly improve the model inference efficiency but suffer from large performance degradation. We observe the activation distribution of SR networks demonstrates very large pixel-to-pixel, channel-to-channel, and image-to-image variation, which is important for high performance SR but gets lost during binarization. To address the problem, we propose two effective methods, including the spatial re-scaling as well as channel-wise shifting and re-scaling, which augments binary convolutions by retaining more spatial and channel-wise information. Our proposed models, dubbed EBSR, demonstrate superior performance over prior art methods both quantitatively and qualitatively across different datasets and different model sizes. Specifically, for x4 SR on Set5 and Urban100, EBSR-light improves the PSNR by 0.31 dB and 0.28 dB compared to SRResNet-E2FIF, respectively, while EBSR outperforms EDSR-E2FIF by 0.29 dB and 0.32 dB PSNR, respectively.
Renjie Wei, Shuwen Zhang, Zechun Liu, Meng Li, Yuchen Fan, Runsheng Wang, Ru Huang
2023-03-22T02:36:13Z
http://arxiv.org/abs/2303.12270v1
# EBSR: Enhanced Binary Neural Network for Image Super-Resolution ###### Abstract While the performance of deep convolutional neural networks for image super-resolution (SR) has improved significantly, the rapid increase of memory and computation requirements hinders their deployment on resource-constrained devices. Quantized networks, especially binary neural networks (BNN) for SR have been proposed to significantly improve the model inference efficiency but suffer from large performance degradation. We observe the activation distribution of SR networks demonstrates very large pixel-to-pixel, channel-to-channel, and image-to-image variation, which is important for high performance SR but gets lost during binarization. To address the problem, we propose two effective methods, including the spatial re-scaling as well as channel-wise shifting and re-scaling, which augments binary convolutions by retaining more spatial and channel-wise information. Our proposed models, dubbed EBSR, demonstrate superior performance over prior art methods both quantitatively and qualitatively across different datasets and different model sizes. Specifically, for \(\times 4\) SR on Set5 and Urban100, EBSR-light improves the PSNR by 0.31 dB and 0.28 dB compared to SRResNet-E2FIF, respectively, while EBSR outperforms EDSR-E2FIF by 0.29 dB and 0.32 dB PSNR, respectively. ## 1 Introduction Image super-resolution (SR) is a classic yet challenging problem in computer vision. It aims to reconstruct high-resolution (HR) images, which have more details and high-frequency information, from low-resolution (LR) images. Image SR is an ill-posed problem as there are multiple HR images corresponding to a single LR image [24]. In recent years, deep neural networks (DNNs) have achieved great quality improvement in image SR but also suffer from intensive memory consumption and computational cost [11, 13, 29, 28]. The high memory and computation requirements of SR networks hinder their deployment on resource-constrained devices, such as mobile phones and other embedded systems. Network quantization has been naturally used to compress SR models [12, 6, 30] as an effective way to reduce memory and computation costs while maintaining high accuracy. In network quantization, binary neural networks (BNNs), which quantize activations and weights to \(\{-1,1\}\), are of particular interest because of their memory saving by 1-bit parameters and computation saving by replacing convolution with bit-wise operations, including XNOR and bit-count [20]. Ma et al. [16] are the first to introduce binarization to SR networks. However, they only binarize weights and leave activations at full precision, which impedes the bit-wise operation and leads to a limited speedup. Afterward, many works [25, 8, 10] have explored BNNs for image SR with both binary activations and weights. Though promising results have been achieved, all these BNNs still suffer from a large performance degradation compared to the floating point (FP) counterparts. We observe the large performance degradation comes from two reasons. Figure 1: The binary feature maps of the same channels in the same block in our EBSR-light and the prior art E2FIF.
Hence, how to improve the binarization methods becomes the first question towards high-performance BNNs for SR. Secondly, the majority of recent full-precision SR networks [13, 27, 29] remove batch normalization (BN) for better performance. This is because BN layers normalize the features and destroy the original luminance and contrast information of the input image, resulting in blurry output images and worse visual quality [13]. After removing BN, we observe the activation distribution of SR networks exhibits much larger pixel-to-pixel, channel-to-channel, and image-to-image variation, which we hypothesize is important to capture the image details for high performance SR. However, such distribution variation is very unfriendly to BNN. On one hand, it leads to a large binarization error and makes BNN training harder. For the reason, recent works, e.g., E2FIF [10], add the BN back to the BNNs, and thus, suffers from blurry outputs. On the other hand, the variation of activation distribution is very hard to preserve. For example, each activation tensor in the BNN share the same binarization parameters, e.g., scaling factors. This makes it very hard to preserve the magnitude differences across channels and pixels. Hence, how to better capture the distribution variation is the second important question. To address the aforementioned questions, in this paper, we propose EBSR, an enhanced BNN for image SR. EBSR features two important changes over the previous BNNs for SR, including leveraging more advanced binarization methods to stabilize training without BN, and novel methods, including spatial re-scaling as well as channel-wise shifting and re-scaling to better capture the spatial and channel-wise activation distribution. The resulting EBSR binary features can preserve much more textures and details for SR compared to prior art methods as shown in Figure 1. Our contributions can be summarized as below: * We leverage advanced binarization methods to build a strong BNN baseline, which stabilizes the training without BN and outperforms prior art methods. * We observe the spatial and channel-wise variation of activation distribution is important for SR quality and propose novel methods to capture the variation, which preserve much more details. * We evaluate our models on benchmark datasets and demonstrate significant performance improvement over the prior art method, i.e, E2FIF. Specifically, EBSR-light outperforms E2FIF by 0.31 dB and 0.28 dB on Set5 and Urban100, respectively, at \(\times 4\) scale at a slight computation increase. EBSR-SQ achieves 0.26 dB and 0.27 dB PSNR improvements over E2FIF at the same computation cost. ## 2 Related Works ### Image super-resolution deep neural networks DNNs have been widely used in image SR for their satisfying performance. The pioneering work SRCNN [4] first uses a DNN which has only three convolution layers to reconstruct the HR image in an end-to-end way. VDSR [9] increases the network depth to 20 convolution layers and introduces global residual learning for better performance. SRResNet [11] introduces residual blocks in SR and achieves better image quality. SRGAN [11] uses SRResNet as the generator and an additional discriminator to recover more photo-realistic textures. EDSR [13] removes BN in the residual block and features with a deeper and wider model with up to 43M model parameters. Dense connect [29], channel attention module [27], and non-local attention [28] mechanism are also used in SR networks to improve the image quality. 
However, these networks have very large memory and computation overheads and can hardly be deployed on resource-constrained devices. ### BNN for image super-resolution To compress the SR models, BNNs for SR have been studied in recent years. For a BNN, the binarization function can be written as \(\hat{x}=\alpha\operatorname{sign}(x-\beta)\), where \(\alpha\), \(\beta\), \(x\) and \(\hat{x}\) denote the scaling factor, bias, the FP and binary variables, respectively. There are two kinds of binarization strategies [19], including per-tensor and per-channel binarization. The major difference is per-tensor binarization uses the same \(\alpha\) and \(\beta\) for the whole tensor while per-channel binarization has channel-wise \(\alpha\)s and \(\beta\)s. We compare different BNNs for SR and for image classification tasks in Table 1. Ma et al. [16] first introduce binarization to SR networks and reduce the model size by \(80\%\) compared with FP SRResNet. However, they only binarize weights and leave activations at FP, which impedes the bit-wise operation and requires FP accumulations. Xin et al. [25] binarize both weights and activations in SR networks utilizing a bit-accumulation mechanism (BAM) to approximate the FP convolution. A value accumulation scheme is proposed to binarize each layer based on all previous layers. However, their method introduces extra FP BN computation during inference. Jiang et al. [8] find that simply removing BN leads to large performance degradation. Hence, they explore a binary training mechanism (BTM) to better initialize weights and normalize input LR images. They build a BNN without BN named IBTM. After that, Hong et al. proposed DAQ [6], which adapts to diverse channel-wise distributions. However, they use per-channel activation quantization which introduces a large number of FP multiplications and accumulations. Recently, Lang et al. [10] propose using end-to-end full-precision information flow in BNN and develop a high performance BNN named E2FIF. However, there is still a large gap with the full-precision model. None of the binarization methods above have the ability to capture the pixel-to-pixel, channel-to-channel, and image-to-image variation of the activation distribution. Table 1: Comparison of different BNNs for SR and image classification (Cls) tasks in terms of weight and activation binarization granularity, adaptive quantization, use of BN, and hardware cost. ## 3 Motivation To better understand the origin of performance degradation of BNN for SR, in this section, we visualize the activation distributions for a FP SR network. We select a lightweight EDSR model, which has 16 blocks and 64 channels for the body module [13]. We also visualize the activation distribution of an image classification network, i.e., MobileNetV2 [21], for comparison. From the comparison, we observe clear differences between the two models, which serves as the motivation for our EBSR. **Motivation 1: Pixel-to-Pixel Variation** We first compare the activation distribution for different pixels within the same layer. We randomly sample 24 pixels from one layer of both MobileNetV2 and EDSR and visualize the activation distribution for these pixels in Figure 2. As we can observe, for MobileNetV2, due to BN, the activation distributions of different pixels are very similar to each other with relatively small magnitude.
In contrast, for EDSR, on one hand, the magnitude of the activation distribution is much larger compared to MobileNetV2; on the other hand, the magnitudes of different pixels are also quite different. This phenomenon is not specific to a certain layer in EDSR. According to [3], the difference of activation magnitudes indicates different scaling factors are needed for each pixel. However, per-pixel binarization is not hardware friendly while existing per-tensor binarization schemes cannot capture the pixel-to-pixel variation of activation distribution. **Motivation 2: Channel-to-Channel Variation** We now compare the activation distributions for different channels within each layer. We randomly sample 24 channels from the same layer of both MobileNetV2 and EDSR and visualize the activation distribution in Figure 3. As we can observe, the activation distribution of EDSR is different from that of MobileNetV2 in that both the mean and variance of activations vary a lot across different channels in EDSR. This indicates different channels require both different scaling factors and different bias during binarization. One way to resolve the channel-to-channel variation of activation distribution is to leverage per-channel quantization, which is used in [6]. However, per-channel quantization prevents BNN from performing bit-wise operations for convolution, which makes BNN lose its most important advantage. This is illustrated in Figure 4. In this toy example, we have an activation tensor \(A\in\mathbb{R}^{1\times 4\times 2\times 2}\) and a weight tensor \(W\in\mathbb{R}^{2\times 4\times 1\times 1}\). When we convolve \(A\) with the first filter in red color, for activation per-tensor binarization (e.g., Figure 4(a)), the convolution can be calculated as \(s_{a}s_{w_{1}}\left[1\times 1+(-1)\times 1+(-1)\times 1+1\times 1\right]\), where \(\left[\cdot\right]\) can be calculated efficiently by xnor and bit-count operations. Figure 2: Activation distribution of EDSR exhibits much larger pixel-to-pixel variation compared to MobileNetV2. Figure 3: Activation distribution of EDSR exhibits much larger channel-to-channel (horizontal view) and image-to-image (vertical view) variations compared to MobileNetV2.
How to realize such requirements in BNNs would be important to close the gap with their full-precision counterparts. ## 4 Build A Strong Baseline Currently, most BSR networks use the following simple sign function for per-tensor activation binarization and per-channel weight binarization [19]: \[\hat{x}=\mathrm{sign}(x)\quad\hat{w}_{i}=\frac{\|w_{i}\|_{l1}}{n}\,\mathrm{ sign}(w_{i}) \tag{1}\] where \(\hat{x}\) and \(x\) denote the binary and real-valued activation, respectively, \(\hat{w_{i}}\) and \(w_{i}\) denote the binary weights and real-valued weights, respectively, and \(n\) denote the number of weights in the \(i_{th}\) weight filter. However, due to the variation of activation distributions, such simple binarization scheme suffers from convergence issue and low performance when the BN is removed. Therefore, we first build a strong baseline BNN based on the robust binarization method and propose a reliable network structure in this section. ### Binarization To handle the channel-to-channel variation of the activation mean, we first introduce RSign following ReActNet [14], which has a channel-wise learnable threshold. We also adopt a learnable scaling factor for each activation tensor to further reduce the quantization error between binary and real-valued activations. Hence, the binarization function used for activations is defined as: \[\hat{x}=\alpha\,\mathrm{sign}(\frac{x-\beta_{i}}{\alpha}) \tag{2}\] where \(\beta_{i}\) is the channel-wise learnable thresholds and \(\alpha\) is the learnable scaling factor for each activation tensor. Both of them can be optimized end-to-end with other parameters in the network. To back-propagate the gradients through the discretized binarization function, we follow [15] to use a piecewise polynomial function as the straight-through estimator (STE), which can reduce the gradient mismatch effectively. Thus, the gradient w.r.t. \(\alpha\) can be calculated as: \[\frac{\partial\hat{x}}{\partial\alpha}=\begin{cases}-1,&\text{if }x\leq\beta- \alpha\\ -2\left(\frac{x-\beta}{\alpha}\right)^{2}-2\frac{x-\beta}{\alpha}-1,&\text{if }\beta- \alpha<x\leq\beta\\ 2\left(\frac{x-\beta}{\alpha}\right)^{2}-2\frac{x-\beta}{\alpha}+1,&\text{if } \beta<x\leq\beta+\alpha\\ 1,&\text{if }x>\beta+\alpha\end{cases} \tag{3}\] while the gradient w.r.t. \(\beta_{i}\) can be computed as: \[\frac{\partial\hat{x}}{\partial\beta_{i}}=\begin{cases}-2-2\frac{x-\beta_{i} }{\alpha},&\text{if }\quad\beta_{i}-\alpha<x\leq\beta_{i}\\ -2+2\frac{x-\beta_{i}}{\alpha},&\text{if }\beta_{i}<x\leq\beta_{i}+\alpha\\ 0,&\text{otherwise}\end{cases} \tag{4}\] For weight binarization, we still use the channel-wise sign function in Eq.1, for which the scaling factors are the average of the \(\ell_{1}\) norm of each filter. ### Baseline Network Structure We use the lightweight EDSR with 16 blocks and 64 channels, and EDSR with 32 blocks and 256 channels as our backbones for two variants namely EDSR-light and EBSR. Following existing BSR networks [16, 25, 8, 10], we only binarize the body module and leave the head and tail modules in full-precision which only contain one convolution layer each. We also follow Bi-Real Net [15] and E2FIF [10] to use a skip connection bypassing every binary convolution layer in order to keep the full-precision information from being cut off by binary layers. Note that the network structure here doesn't contain BN layers. Table 2 shows the comparison between our proposed baseline and the prior art E2FIF. 
Table 2 shows the comparison between our proposed baseline and the prior art E2FIF. As can be observed, for E2FIF, removing BN leads to a huge performance degradation. In contrast, for our strong baseline, its training is stable without BN and it outperforms E2FIF both with and without BN. Figure 4: Activation per-tensor quantization (a) and per-channel quantization (b). For weight quantization, both (a) and (b) are per-channel, i.e., each channel in the convolution output tensor has a different scaling factor, which also means each kernel has a different scaling factor. ## 5 Method In this section, we propose two techniques to further enhance the strong baseline to capture the variation of activation distributions better. We first introduce spatial re-scaling to adapt the network to pixel-to-pixel variation. We then propose channel-wise shifting and re-scaling to better capture the channel-to-channel variation. Meanwhile, as both of the two methods are image-dependent, the image-to-image variation can be captured naturally. By combining the two methods with our strong baseline, we build our enhanced BNN for SR, named EBSR. ### Spatial Re-scaling As is shown in Figure 2, activation distributions have large pixel-to-pixel variation in SR networks, and the difference of activation magnitudes indicates that different scaling factors are preferred for different pixels. Inspired by [18], we propose spatial re-scaling to better adapt the network to the spatial variation of activation distributions in SR networks. We take the real-valued activations \(A\) before convolution as input and predict pixel-wise scaling factors \(S(A)\), which re-scale the binary convolution output. The spatial re-scaling process can be formulated as follows:
\[A*W\approx\left(\operatorname{sign}(A)\circledast\hat{W}\right)\odot S(A), \tag{5}\] where \(\hat{W}\) is the binary weight in Eq. 1, \(\circledast\) denotes the binary convolution implemented with xnor and bit-count operations, and \(\odot\) denotes element-wise multiplication. As shown in Figure 5(a), \(S(A)\) is predicted from the real-valued input \(A\) by a lightweight convolution module that outputs an \(H\times W\) map of pixel-wise scaling factors. Since \(S(A)\) depends on the input activations, the predicted scaling factors also vary from image to image. ### Channel-wise Shifting and Re-scaling As discussed in Section 3, both the mean and variance of activations vary considerably across channels, so each channel prefers its own threshold and scaling factor. As shown in Figure 5(b), we take the real-valued activations \(A\) as input and predict channel-wise shifting factors \(\beta(A)\) and re-scaling factors \(\gamma(A)\): \[A*W\approx\left(\operatorname{sign}(A-\beta(A))\circledast\hat{W}\right)\odot\gamma(A), \tag{6}\] where \(\beta(A)\) and \(\gamma(A)\) shift the real-valued activations and re-scale the binary convolution output, respectively.
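A minimal PyTorch sketch combining the two re-scaling paths around a binary convolution is given below. The exact module designs for \(S(A)\), \(\beta(A)\), and \(\gamma(A)\) (a 3x3 convolution and pooling-plus-1x1-convolution predictors) are assumptions, and the STE is omitted for brevity (training would route gradients through an estimator such as `RSignScale` above).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EConvSketch(nn.Module):
    """Sketch of an enhanced convolution layer: shift channel-wise, binarize,
    convolve with binary weights, then re-scale by predicted spatial and
    channel-wise factors, with a full-precision skip connection."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.spatial = nn.Conv2d(channels, 1, 3, padding=1)            # S(A)
        self.shift = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                   nn.Conv2d(channels, channels, 1))   # beta(A)
        self.scale = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                   nn.Conv2d(channels, channels, 1))   # gamma(A)

    def forward(self, a):
        s, beta, gamma = self.spatial(a), self.shift(a), self.scale(a)
        a_bin = torch.sign(a - beta)               # binarize shifted activations
        w = self.conv.weight
        w_bin = w.abs().mean((1, 2, 3), keepdim=True) * torch.sign(w)  # Eq. (1)
        out = F.conv2d(a_bin, w_bin, padding=1)
        return out * s * gamma + a                 # re-scale + skip connection
```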
### Network Structure Combining the spatial re-scaling and the channel-wise shifting and re-scaling methods, we construct the enhanced convolution layer (E-Conv). Then we build our EBSR model based on E-Conv. In Figure 6, we compare the binary convolution layer used in the baseline network and our proposed E-Conv. We use spatial and channel-wise scaling factors to re-scale the binary convolution output, and use channel-wise shifting to learn appropriate thresholds for each channel before binarization. The scaling factors and threshold used in E-Conv are learnable and depend on the real-valued input activations. In this way, our proposed EBSR can adapt to pixel-to-pixel, channel-to-channel, and image-to-image variations to reduce the large binarization error and preserve more details. Figure 7 shows the basic block based on the E-Conv and our EBSR composed of the basic blocks. Following existing works, the convolution layers in the head and tail modules are not binarized. We choose the lightweight EDSR which has 16 basic blocks and 64 channels, and EDSR which has 32 basic blocks and 256 channels as our backbones, which correspond to EBSR-light and EBSR, respectively. ## 6 Experiments ### Experimental Setup We train all the models on the training set of DIV2K [22]. Our models operate on RGB channels, i.e., the input and output image are in RGB color space, not YCbCr color space. For evaluation, we use four standard benchmarks including Set5 [2], Set14 [26], B100 [17] and Urban100 [7]. For evaluation metrics, we use PSNR and SSIM [23] over the Y channel between the output SR image and the original HR image as most previous works did. We choose L1 loss between SR image and HR image [13] as our loss function. Input patch size is set to \(48\times 48\). The mini-batch size is set to 16. We use ADAM optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\epsilon=10^{-8}\). The learning rate is initialized as \(2\times 10^{-4}\) and halved every 200 epochs. All our models are trained from scratch for 300 epochs. \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \multirow{2}{*}{Models} & \multirow{2}{*}{OPs} & \multirow{2}{*}{Params} & \multicolumn{2}{c|}{Set5} & \multicolumn{2}{c}{Urban100} \\ \cline{3-8} & & & PSNR & SSIM & PSNR & SSIM \\ \hline SRResNet-fp & 64.986G & 1.52M & 31.76 & 0.888 & 25.54 & 0.767 \\ SRResNet-E2FIF & 1.83G & 0.04M & 31.33 & 0.880 & 25.08 & 0.750 \\ **EBSR-light (ours)** & 2.27G & 0.08M & 31.64 & 0.885 & 25.36 & 0.761 \\ EDSR-fp & 164.668G & 43.09M & 32.46 & 0.897 & 26.64 & 0.803 \\ EDSR-E2FIF & 25.32G & 0.17M & 31.91 & 0.890 & 25.74 & 0.774 \\ **EBSR (ours)** & 28.58G & 1.10M & 32.20 & 0.892 & 26.06 & 0.783 \\ **EBSR-S** (W1As) & 1.75G & 0.06M & 31.46 & 0.882 & 25.20 & 0.755 \\ EBSR-SQ (W2A4) & 1.75G & 0.06M & 31.38 & 0.880 & 25.17 & 0.753 \\ EBSR-SQ (W4A4) & 1.82G & 0.06M & 31.59 & 0.884 & 25.33 & 0.759 \\ \hline \end{tabular} \end{table} Table 4: Memory and computation overheads of different models. W and A denote the bit-width of weights and activations in quantized spatial re-scaling module. Figure 5: Block diagram for spatial re-scaling, and channel-wise shifting and re-scaling. Figure 6: Comparison of (a) the binary convolution layer with a skip connection used in our baseline network and (b) the proposed E-Conv. Figure 7: The structure of our proposed EBSR. Convolution layers in purple are real-valued vanilla 3x3 convolutions. ### Benchmark Results Table 6 and 7 present the quantitative results on different datasets. 
The proposed models outperform all other models. For instance, compared to the prior art SRResNet-E2FIF, our EBSR-light improves the PSNR by 0.38 dB, 0.32 dB, and 0.28 dB on Urban100 for \(\times 2\), \(\times 3\), and \(\times 4\) SR, respectively. For the larger model, our EBSR also significantly outperforms EDSR-E2FIF, e.g. 0.29 dB, 0.27 dB, 0.10 dB, and 0.32 dB improvements of PSNR on Set5, Set14, B100, and Urban100, respectively, at \(\times 4\) scale. Overall, our models significantly improve the performance of BNNs for SR and further bridge the performance gap between the binary and FP SR networks. We also provide qualitative results in Figure 8. As can be seen, the SR image reconstructed by EBSR-light is richer in details and edges. It is closer to the HR image in visual perception than the prior art network E2FIF. We provide more qualitative comparisons in the appendix.

### Memory and Computation Cost

We now evaluate the memory and computation cost of our proposed EBSR-light and EBSR. We compute the total operations and parameters following [31] and [15] as below:

\[\begin{split}& OPs=FLOPs+BOPs/64\\ &Params=Param_{fp}+Param_{bi}/32\end{split} \tag{7}\]

where \(BOPs\) and \(Param_{bi}\) denote the number of binary operations and parameters. We choose a \(128\times 128\) input image at \(\times 4\) scale for evaluation. Compared with the FP SRResNet and EDSR, our EBSR-light and EBSR reduce memory usage by \(38\times\) and \(39\times\), respectively, and reduce computation by \(29\times\) and \(58\times\), respectively. The computation cost of our models is slightly larger than E2FIF in exchange for much higher performance, as discussed above. We notice that the spatial re-scaling module introduces many FP operations. This is because the convolution layer in Figure 5(a) outputs a feature map with \(H\times W\) spatial dimension, which is usually large for SR. Thus we propose EBSR-SQ, which quantizes the spatial re-scaling module to low bit-widths. We quantize the weights and activations following [5] and calculate the equivalent operations and parameters of EBSR-SQ following [31]. We try three different configurations as shown in Table 4, all of which achieve better performance compared to SRResNet-E2FIF with fewer operations. Specifically, EBSR-SQ with 4-bit weights and 4-bit activations improves the PSNR by 0.26 dB and 0.27 dB on Set5 and Urban100, respectively, over SRResNet-E2FIF, with 0.01G fewer operations.

### Ablation Study

We conduct ablation studies based on our strong baseline model to analyze the individual effect of our methods. We also compare our methods with per-channel activation quantization. As we have discussed in Section 3, per-channel activation quantization is not hardware friendly. Hence, we focus on the performance comparison with it. As shown in Table 5, compared with the baseline model, the spatial re-scaling method yields 0.24 dB and 0.22 dB improvements on Set5 and Urban100, respectively, while the improvements of the channel-wise shifting and re-scaling method are 0.07 dB and 0.03 dB on the two datasets. Our model EBSR-light outperforms the per-channel quantization by 0.26 dB and 0.24 dB on the two datasets while being much more hardware friendly. The reason that spatial re-scaling is more effective than channel-wise shifting and re-scaling is probably that in our strong baseline, we have already used RSign with learnable bias to binarize activations, which alleviates the impact of channel-wise variation.
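As an aside, the effective cost metrics of Eq. (7) used in Tables 4 and 5 are simple to script. The helper below is a sketch; the example numbers are purely illustrative and not an official decomposition of any model in the tables.

```python
def effective_ops(flops: float, bops: float) -> float:
    """Total operations per Eq. (7): 64 binary ops counted as one FP op."""
    return flops + bops / 64

def effective_params(params_fp: float, params_bi: float) -> float:
    """Total parameters per Eq. (7): 32 binary weights counted as one FP weight."""
    return params_fp + params_bi / 32

# Illustrative example: 1.5 GFLOPs + 49 G binary ops, 0.05 M FP + 1 M binary params.
print(effective_ops(1.5e9, 49e9) / 1e9)       # ~2.27 GOPs
print(effective_params(0.05e6, 1.0e6) / 1e6)  # ~0.08 M params
```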
Figure 8: Qualitative comparison of our EBSR-light with the prior art network on a \(\times 4\) super-resolution.

\begin{table} \begin{tabular}{c|c c|c c|c c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{OPs} & \multirow{2}{*}{Params} & \multicolumn{2}{c|}{Set5} & \multicolumn{2}{c}{Urban100} \\ \cline{4-7} & & & PSNR & SSIM & PSNR & SSIM \\ \hline Baseline & 1.56G & 0.03M & 31.30 & 0.880 & 25.09 & 0.751 \\ Baseline + per-chl act quant & - & - & 31.38 & 0.880 & 25.12 & 0.752 \\ Baseline + chl-wise shift \& re-scale & 1.63G & 0.06M & 31.37 & 0.880 & 25.12 & 0.752 \\ Baseline + spatial re-scale & 2.16G & 0.05M & 31.54 & 0.883 & 25.31 & 0.759 \\ EBSR-light & 2.27G & 0.08M & 31.64 & 0.885 & 25.36 & 0.761 \\ \hline \end{tabular} \end{table} Table 5: Comparison of different methods. - denotes unable to calculate with Eq. (7) for the quantized network.

## 7 Conclusion

In this work, we observe the large pixel-to-pixel, channel-to-channel, and image-to-image variations in FP SR networks, which contain important detailed information for SR. To preserve this information in BNNs, we first construct a strong baseline utilizing robust binarization methods and propose the spatial re-scaling as well as channel-wise shifting and re-scaling methods. Then we construct EBSR, EBSR-light, and EBSR-SQ. Compared to the prior art, the proposed models capture more detailed information for SR and significantly bridge the performance gap between BNN and FP SR networks at low computation and memory costs. Specifically, EBSR-light improves the PSNR by 0.31 dB and 0.28 dB compared to SRResNet-E2FIF, while EBSR outperforms EDSR-E2FIF by 0.29 dB and 0.32 dB PSNR for \(\times 4\) SR on Set5 and Urban100, respectively.

\begin{table} \begin{tabular}{c|c|c c|c c|c c|c c} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Scale} & \multicolumn{2}{c|}{Set5} & \multicolumn{2}{c|}{Set14} & \multicolumn{2}{c|}{B100} & \multicolumn{2}{c}{Urban100} \\ \cline{3-10} & & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline SRResNet-fp & x2 & 37.76 & 0.958 & 33.27 & 0.914 & 31.95 & 0.895 & 31.28 & 0.919 \\ Bicubic & x2 & 33.66 & 0.930 & 30.24 & 0.869 & 29.56 & 0.843 & 26.88 & 0.840 \\ SRResNet-BNN & x2 & 35.21 & 0.942 & 31.55 & 0.896 & 30.64 & 0.876 & 28.01 & 0.869 \\ SRResNet-DoReFa & x2 & 36.09 & 0.950 & 32.09 & 0.902 & 31.02 & 0.882 & 28.87 & 0.880 \\ SRResNet-BAM & x2 & 37.21 & 0.956 & 32.74 & 0.910 & 31.60 & 0.891 & 30.20 & 0.906 \\ SRResNet-E2FIF & x2 & 37.50 & 0.958 & 32.96 & 0.911 & 31.79 & 0.894 & 30.73 & 0.913 \\ EBSR-light (ours) & x2 & **37.64** & **0.958** & **33.16** & **0.913** & **31.88** & **0.895** & **31.11** & **0.917** \\ \hline SRResNet-fp & x3 & 34.07 & 0.922 & 30.04 & 0.835 & 28.91 & 0.798 & 27.50 & 0.837 \\ Bicubic & x3 & 30.39 & 0.868 & 27.55 & 0.774 & 27.21 & 0.739 & 24.46 & 0.735 \\ SRResNet-BNN & x3 & 31.18 & 0.877 & 28.29 & 0.799 & 27.73 & 0.765 & 25.03 & 0.758 \\ SRResNet-DoReFa & x3 & 32.44 & 0.903 & 28.99 & 0.811 & 28.21 & 0.778 & 25.84 & 0.783 \\ SRResNet-BAM & x3 & 33.33 & 0.915 & 29.63 & 0.827 & 28.61 & 0.790 & 26.69 & 0.816 \\ SRResNet-E2FIF & x3 & 33.65 & 0.920 & 29.67 & 0.830 & 28.72 & 0.795 & 27.01 & 0.825 \\ EBSR-light (ours) & x3 & **33.89** & **0.921** & **29.95** & **0.834** & **28.82** & **0.797** & **27.33** & **0.833** \\ \hline SRResNet-fp & x4 & 31.76 & 0.888 & 28.25 & 0.773 & 27.38 & 0.727 & 25.54 & 0.767 \\ Bicubic & x4 & 28.42 & 0.810 & 26.00 & 0.703 & 25.96 & 0.668 & 23.14 & 0.658 \\ SRResNet-BNN & x4 & 29.33 & 0.826 & 26.72 & 0.728 & 26.45 & 0.692 & 23.68 & 0.683 \\ SRResNet-DoReFa & x4 & 30.38
& 0.862 & 27.48 & 0.754 & 26.87 & 0.708 & 24.45 & 0.720 \\ SRResNet-BAM & x4 & 31.24 & 0.878 & 27.97 & 0.765 & 27.15 & 0.719 & 24.95 & 0.745 \\ SRResNet-E2FIF & x4 & 31.33 & 0.880 & 27.93 & 0.766 & 27.20 & 0.723 & 25.08 & 0.750 \\ EBSR-light (ours) & x4 & **31.64** & **0.885** & **28.22** & **0.772** & **27.30** & **0.727** & **25.36** & **0.761** \\ \hline \end{tabular} \end{table} Table 6: Comparison of our proposed EBSR-light with other BNNs for image super-resolution. All the models in this table have a similar lightweight backbone, which has 16 blocks and 64 channels. Note that fp denotes the full-precision model. \begin{table} \begin{tabular}{c|c|c c|c c|c c|c c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Scale} & \multicolumn{2}{c|}{Set5} & \multicolumn{2}{c|}{Set14} & \multicolumn{2}{c|}{B100} & \multicolumn{2}{c}{Urban100} \\ \cline{3-10} & & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline EDSR-fp & x2 & 38.11 & 0.960 & 33.92 & 0.920 & 32.32 & 0.901 & 32.93 & 0.935 \\ Bicubic & x2 & 33.66 & 0.930 & 30.24 & 0.869 & 29.56 & 0.843 & 26.88 & 0.840 \\ EDSR-BNN & x2 & 34.47 & 0.938 & 31.06 & 0.891 & 30.27 & 0.872 & 27.72 & 0.864 \\ EDSR-BiReal & x2 & 37.13 & 0.956 & 32.73 & 0.909 & 31.54 & 0.891 & 29.94 & 0.903 \\ EDSR-IBTM & x2 & 37.80 & 0.960 & 33.38 & 0.916 & 32.04 & 0.898 & 31.49 & 0.922 \\ EDSR-E2FIF & x2 & 37.95 & **0.960** & 33.37 & 0.915 & **32.13** & **0.899** & 31.79 & 0.924 \\ EBSR (ours) & x2 & **37.99** & 0.959 & **33.52** & **0.916** & 32.10 & 0.898 & **31.96** & **0.926** \\ \hline EDSR-fp & x3 & 34.65 & 0.928 & 32.52 & 0.846 & 29.25 & 0.809 & 28.80 & 0.865 \\ Bicubic & x3 & 30.39 & 0.868 & 27.55 & 0.774 & 27.21 & 0.739 & 24.46 & 0.735 \\ EDSR-BNN & x3 & 20.85 & 0.399 & 19.47 & 0.299 & 19.23 & 0.285 & 18.18 & 0.307 \\ EDSR-BiReal & x3 & 33.17 & 0.914 & 29.53 & 0.826 & 28.53 & 0.790 & 26.46 & 0.801 \\ EDSR-IBTM & x3 & 34.10 & 0.924 & 30.11 & 0.838 & 28.93 & 0.801 & 27.49 & 0.839 \\ EDSR-E2FIF & x3 & 34.24 & 0.925 & 30.06 & 0.837 & 29.00 & 0.802 & 27.84 & 0.844 \\ EBSR (ours) & x3 & **34.36** & **0.925** & **30.28** & **0.840** & **29.04** & **0.803** & **28.06** & **0.849** \\ \hline EDSR-fp & x4 & 32.46 & 0.897 & 28.80 & 0.787 & 27.71 & 0.742 & 26.64 & 0.803 \\ Bicubic & x4 & 28.42 & 0.810 & 26.00 & 0.703 & 25.96 & 0.668 & 23.14 & 0.658 \\ EDSR-BNN & x4 & 17.53 & 0.188 & 17.51 & 0.160 & 17.15 & 0.151 & 16.35 & 0.163 \\ EDSR-BiReal & x4 & 30.81 & 0.871 & 27.71 & 0.760 & 27.01 & 0.716 & 24.66 & 0.733 \\ EDSR-IBTM & x4 & 31.84
2307.13499
Finding Money Launderers Using Heterogeneous Graph Neural Networks
Current anti-money laundering (AML) systems, predominantly rule-based, exhibit notable shortcomings in efficiently and precisely detecting instances of money laundering. As a result, there has been a recent surge toward exploring alternative approaches, particularly those utilizing machine learning. Since criminals often collaborate in their money laundering endeavors, accounting for diverse types of customer relations and links becomes crucial. In line with this, the present paper introduces a graph neural network (GNN) approach to identify money laundering activities within a large heterogeneous network constructed from real-world bank transactions and business role data belonging to DNB, Norway's largest bank. Specifically, we extend the homogeneous GNN method known as the Message Passing Neural Network (MPNN) to operate effectively on a heterogeneous graph. As part of this procedure, we propose a novel method for aggregating messages across different edges of the graph. Our findings highlight the importance of using an appropriate GNN architecture when combining information in heterogeneous graphs. The performance results of our model demonstrate great potential in enhancing the quality of electronic surveillance systems employed by banks to detect instances of money laundering. To the best of our knowledge, this is the first published work applying GNN on a large real-world heterogeneous network for anti-money laundering purposes.
Fredrik Johannessen, Martin Jullum
2023-07-25T13:49:15Z
http://arxiv.org/abs/2307.13499v1
# Finding Money Launderers Using Heterogeneous Graph Neural Networks

###### Abstract

Current anti-money laundering (AML) systems, predominantly rule-based, exhibit notable shortcomings in efficiently and precisely detecting instances of money laundering. As a result, there has been a recent surge toward exploring alternative approaches, particularly those utilizing machine learning. Since criminals often collaborate in their money laundering endeavors, accounting for diverse types of customer relations and links becomes crucial. In line with this, the present paper introduces a graph neural network (GNN) approach to identify money laundering activities within a large heterogeneous network constructed from real-world bank transactions and business role data belonging to DNB, Norway's largest bank. Specifically, we extend the homogeneous GNN method known as the Message Passing Neural Network (MPNN) to operate effectively on a heterogeneous graph. As part of this procedure, we propose a novel method for aggregating messages across different edges of the graph. Our findings highlight the importance of using an appropriate GNN architecture when combining information in heterogeneous graphs. The performance results of our model demonstrate great potential in enhancing the quality of electronic surveillance systems employed by banks to detect instances of money laundering. To the best of our knowledge, this is the first published work applying GNN on a large real-world heterogeneous network for anti-money laundering purposes.

_Keywords_ -- graph neural networks, anti-money laundering, supervised learning, heterogeneous graphs, PyTorch Geometric.

## 1 Introduction

Money laundering is the activity of securing proceeds of a criminal act by concealing where the proceeds come from. The ultimate goal is to make it look like the proceeds originated from legitimate sources. Money laundering is a vast global problem, as it enables all types of crime where the goal is to make a profit. Because most money laundering goes undetected, it is difficult to quantify its effect on the global economy. However, a research report by United Nations - Office on Drugs and Crime (2011) estimates that 1-2 trillion US dollars are laundered each year, which corresponds to 2-5% of global gross domestic product. Both national and international anti-money laundering (AML) laws regulate electronic surveillance and reporting of suspicious transaction activities in financial institutions. The purpose of the surveillance is to detect _suspicious activities_ with a high probability of being related to money laundering, such that they can be manually investigated. The manual investigation is performed by experienced investigators, who inspect several aspects of the case, often involving multiple implicated customers, and then decide whether the behavior is suspicious enough to be reported to the authorities. See Section 2 for more details about this process. The present paper concerns the electronic surveillance process, which ought to identify a relatively small number of suspicious activities in a vast ocean of legitimate ones. In the past few decades, the electronic surveillance systems in banks have typically consisted of several simple rules, created by domain experts, that use fixed thresholds and a moderate number of if/else statements to determine whether an alert is generated. Such rules are still in large part what makes up the surveillance systems (Chen et al., 2018).
However, these rules fall short of providing efficiency (Fruth, 2018), for at least three key reasons: a) They rely on manual work to create and keep the rules up to date with the dynamically changing data (Chen et al., 2018). This work increases with the complexity and number of rules. b) They are typically too simple to be able to detect money laundering with high precision, resulting in many low-quality alerts, i.e. _false positives1_. c) Their simplicity makes them easy to circumvent for sophisticated money launderers, resulting in the possibility that severe crimes go undetected. The consequence of these three shortcomings is that banks spend huge resources on a very inefficient approach, and only a tiny fraction of illegal proceeds is being recovered (Pol, 2020). Footnote 1: Hiring more workers to inspect each alert manually is often the resolution to compensate for this shortcoming. Driven by multiple money laundering scandals in recent years2, the shortcomings of the rule-based system are high on the agenda for regulators and financial institutions alike, and considerable resources are devoted to developing more effective electronic surveillance systems. One avenue that is explored is to use machine learning (ML) to automatically learn when to generate alerts (see e.g. Jullum et al. (2020), de Jesus Rocha-Salazar et al. (2021)). Compared to human capabilities, ML is superior at detecting complicated patterns in vast volumes of data. As a consequence, ML-based systems have the potential to provide detection systems with increased accuracy and the ability to identify more sophisticated ways of laundering money. Footnote 2: As an example, it was revealed in 2018 that the Estonian branch of Danske Bank, Denmark’s largest bank, carried out suspicious transactions for over €200 billion during 2007-2015 (Bjerregaard & Kirchmaier, 2019). Subsequent lawsuit claims have amounted to over €2 billion. Money laundering is a social phenomenon, where groups of organized criminals often collaborate to launder their criminally obtained proceeds. In the networks relevant for AML, the nodes may consist of customers, while the edges between the nodes represent money transfers, shared address, or joint ownership, to name a few possibilities. Analyses of such networks can uncover circumstances that would be impossible to detect through a purely entity-based approach, simply because the necessary information would not be present. Therefore, ML methods that leverage these relational data have a substantial advantage over those that do not. The simplest way to incorporate network characteristics into ML methods is to create node features that capture information about the node's role in the network, e.g. network centrality metrics or characteristics of its neighborhood. The features can then be used in a downstream machine learning task. This approach is, however, suboptimal as the generated features might not be the ones most informative for the subsequent classification, and the full richness of the relational data will in any case not be passed over to the entity-based machine learning task. _Graph Neural Network_ (GNN) is a class of methods that overcome this drawback by applying the machine learning task directly on the network data through a neural network. 
GNNs are able to solve various graph-related tasks such as node classification (Kipf & Welling, 2016), link prediction (Zhang & Chen, 2018), graph clustering (Wang et al., 2017), graph similarity (Bai et al., 2019) as well as unsupervised embedding (Kipf & Welling, 2016). The primary idea behind GNNs is to extend the modern and successful techniques of _artificial neural networks_ (ANNs) from regular tabular data to that of networks. In addition to their ability to incorporate both entity features and network features into a single, simultaneously trained model, most GNNs scale linearly with the number of edges in the network, making them applicable to large networks. A huge benefit of GNNs in practical use cases is that they are inductive rather than transductive: While transductive models (e.g. Node2Vec (Grover & Leskovec, 2016)) can only be used on the specific data that was present during training, inductive models can be applied to entirely new data without having to be retrained. This is crucial for the AML application where new transactions (edges) appear continuously, and customers (nodes) enter and leave on a daily basis. Most GNNs are developed for _homogeneous_ networks with a single type of entity (node) and a single type of relation (edge). In the present AML use case, the relevant network is _heterogeneous_ in both nodes and edges: The nodes represent both private customers, companies, and external accounts, while the edges represent both financial transactions and professional business ties between the nodes. There exist some GNNs in the literature that are able to handle heterogeneous networks, such as: RGCN (Schlichtkrull et al., 2018), HAN (Wang et al., 2019), MAGNN (Fu et al., 2020), HGT (Hu et al., 2020), and HetGNN (Zhang et al., 2019). A common issue with all these methods is that they are not designed to incorporate edge features. In our AML use case, this corresponds to properties of the financial transactions (or the business ties) and is crucial information for an effective learning task. Thus, based on our current knowledge and research, there exists no directly applicable GNN method for our AML use case. The present paper proposes a heterogeneous GNN method that utilizes the edge features in the graph, which we denote _Heterogeneous Message Passing Neural Network_ (HMPNN). The HMPNN method is an expansion of the MPNN method (Gilmer et al., 2017) to a heterogeneous setting. The extension essentially connects, and simultaneously trains multiple MPNN architectures, each working for different combinations of node and edge types. We investigate two distinct approaches to aggregate the embeddings of node-edge combinations in the final step: The first approach applies a simple summation of the embedding vectors derived from the various combinations. The second approach is novel and concatenates the embeddings before applying an additional single-layer neural network to enhance the aggregation process. The HMPNN is developed for and applied to detect money laundering activities in a large heterogeneous network created from real-world bank transactions and business role data which belongs to Norway's largest bank, DNB. The nodes represent bank customers and transaction counterparties, while the edges represent bank transactions and business ties. The network has a total of more than 5 million nodes and almost 10 million edges. Among the bank customers, some are known to conduct suspicious behavior related to money laundering. 
The rest are assumed not to be fraudulent, although some undetected suspicious behavior may well be hidden among them. Thus, this is actually a semi-labeled dataset. Finally, no oversampling or undersampling was performed, which keeps the class imbalance realistic relative to what one encounters in practical situations. There are primarily two categories of bank customers: individual (retail) customers and organization (corporate) customers. Due to the fundamental differences between these two groups and their fraudulent behaviors, it is common practice to study and model their fraudulent behavior separately. In our setting, we therefore treat them as two distinct node types, each with its unique set of features. In this paper, we limit the scope to modeling, predicting, and detecting fraudulent _individual_ customers. That is, all fraudulent customers in our dataset are individual customers. We chose to restrict our analysis to individuals because there is a significantly larger number of known fraudulent individuals than organizations, providing the model with more fraudulent behavior to learn from. Furthermore, individuals are a more homogeneous group with simpler customer relationships compared to organizations, making them a more suitable group to apply our methodology to. Nevertheless, note that organizations may very well be involved and utilized by fraudulent individuals, and possess key connections in the graph that the model learns from. In addition to network data, the nodes and edges have sets of features that depend on what type of node and edge they are. In this paper, HMPNN is compared to other state-of-the-art GNN methods and achieves superior results. To the best of our knowledge, there exists no prior published work on applying graph neural networks to a large real-world heterogeneous network for the purpose of AML.

The rest of this article is organized as follows: Section 2 provides some background and an overview of related work within the AML domain, as well as the GNN domain. In Section 3 we formulate our HMPNN model after introducing the necessary mathematical framework and notation. Section 4 presents the data for our AML use case and the setup of our experiment, and presents and discusses the results we obtain. Finally, Section 5 summarizes our contribution, and provides some concluding remarks and directions for further work. Additional model details are provided in Appendices A and B.

## 2 Background and related work

In this section, we provide some background on the AML process and give an overview of earlier, related work in the domain of AML. We also briefly describe the state-of-the-art of homogeneous and heterogeneous GNNs in the general case.

### AML process

As mentioned in the Introduction, financial institutions are required by law to have effective AML systems in place. Figure 1 illustrates a typical AML process in a bank. The electronic surveillance system generates what is called _alerts_ (1) on some of the transactions, or transaction patterns, that go through the bank. Alerts that are deemed unproblematic after an initial review are picked out and marked as closed. Otherwise, the alert is upgraded to a _case_ (2). At this stage, multiple alerts might be merged into a single case, which regularly involves multiple implicated customers. The case is then inspected thoroughly by experienced investigators. If money laundering is ruled out, the case will be marked as closed. Otherwise, the case will be upgraded to a _report_ (3), and the revealed circumstances are reported in detail to the national _Financial Intelligence Unit_ (FIU).
From here on, the FIU oversees further action and will determine whether to start a criminal investigation. The manual investigation part of the process is difficult to set aside for two reasons: a) The process is hard to automate, as the investigators sit on crucial, yearlong experience and often use non-quantifiable information sources to build their cases. b) AML laws typically do not allow automated reporting of suspicious behavior. The electronic surveillance generating the alerts is suitable for automation, and the benefits of better and more targeted alerts with fewer false positives are huge for financial institutions. That is also why this paper is concerned with the electronic surveillance aspect of the AML process.

Figure 1: Workflow following detections from the electronic surveillance system.

### AML literature

In the AML literature, there are very few papers that have datasets with real money laundering cases, and the majority of articles are validated on small datasets consisting of less than 10,000 observations (Chen et al., 2018b). Both supervised (Liu et al. (2008), Deng et al. (2009), Zhang and Trubey (2019), Jullum et al. (2020)) and unsupervised (Lorenz et al. (2020), de Jesus Rocha-Salazar et al. (2021)) machine learning have been applied to AML use cases. However, the literature is quite limited, particularly for papers utilizing relational data. Some AML papers apply the aforementioned strategy of generating network features used in a downstream machine learning task: Savage et al. (2016) create a graph from cash and international transactions reported to the Australian FIU (AUSTRAC), from which they extract communities defined by filtered \(k\)-step neighborhoods around each node, to then create summarizing features used to classify community suspiciousness using supervised learning. Colladon and Remondi (2017) construct multiple graphs (each of which aims to reveal unique aspects) from customer and transaction data belonging to an Italian factoring company. On these graphs, they report significant correlations between traditional centrality metrics (e.g. betweenness centrality) and known high-risk entities. Elliott et al. (2019) use a combination of network comparison and spectral analysis to create features that are applied in a downstream learning algorithm to classify anomalies. There are only a few papers applying GNNs to money laundering detection: In a brief paper, Weber et al. (2018) discuss the use of GNNs on the AML use case. The paper provides some initial results on the scalability of GCN (Kipf and Welling, 2016a) and FastGCN (Chen et al., 2018a) for a synthetic data set. However, no results on the performance of the methods were provided. Weber et al. (2019) compare GCN to other non-relational ML methods on a real-world graph generated from bitcoin transactions, with 203k nodes and 234k edges. The authors highlight the usefulness of the graph data and find that GCN outperforms logistic regression, but it is still outperformed by random forest. The dataset, called the Elliptic data, is also released with the paper and has later been utilized by several others: Alarab et al. (2020) apply GCN extended with linear layers and improve significantly on the performance of the GCN model in Weber et al. (2019). Lo et al. (2023) apply a self-supervised GNN approach to create node embeddings subsequently used as input in a random forest model, and report good performance.
Others (Alarab & Prakoonwit, 2022; Pareja et al., 2020) exploited the temporal aspect of the graph to increase the performance on the same data set. It is unknown to what extent results from synthetic or bitcoin transaction networks are transferable to the real-life application of transaction monitoring of bank transactions. Our graph is also about 25 times larger than the Elliptic dataset. Moreover, as we will review immediately below, much has happened in the field of GNNs since the introduction of GCN. Finally, to the best of our knowledge, there are no papers applying GNNs (or other methodology) to an AML use case with a _heterogeneous_ graph.

### General case GNN literature

The history of GNNs can be traced back about twenty years, and GNNs have surged in popularity during the past few years. This was kicked off by Kipf & Welling (2016a), who introduced the popular method _Graph Convolutional Network_ (GCN). For an excellent survey of GNNs including their history, we refer to Wu et al. (2020). The core dynamic in GNNs is an iterative approach where each node receives information from its neighbors and combines it with its own representation to create a new representation, which will be forwarded to its neighbors in the next iteration. We call this _message passing_. After a few iterations, these node representations are used to make inferences about the node. Concentrating on today's most popular group of GNNs, _spatial-based convolutional GNNs_, we briefly review some relevant homogeneous and heterogeneous GNN methods below.

#### 2.3.1 Homogeneous GNN literature

The GCN method (Kipf & Welling, 2016a) is motivated by spectral convolution and was originally formulated for the transductive setting, operating directly on the adjacency matrix of the graph. However, the method can be reformulated in the inductive and spatial-based GNN setting using a message-passing formulation: The message received by a node from its neighbors is a weighted linear transformation of the neighboring node representations, and the result is aggregated by taking its sum. GraphSage (Hamilton et al., 2017) expands on GCN in two ways: It uses a) a neighborhood sampling strategy to increase efficiency, and b) a neural network, a pooling layer, and an LSTM (Hochreiter & Schmidhuber, 1997) to aggregate the incoming messages instead of a simple sum. A drawback of GraphSage is, however, that it does not incorporate edge weights. The Graph attention network (GAT) of Velickovic et al. (2017) introduced the attention mechanism into the GNN framework. This mechanism learns the relative importance of a node's neighbors, to assign more weight to the most important ones. Gilmer et al. (2017) presented the Message Passing Neural Network (MPNN) framework, unifying a large group of different GNN models. Apart from the unifying framework, the most essential contribution of this model is, in our view, that it applies a learned message-passing function that utilizes edge features.

#### 2.3.2 Heterogeneous GNN literature

Heterogeneous graph neural networks are commonly defined as extensions of existing homogeneous graph neural networks. For instance, the Relational Graph Convolution Network (RGCN) (Schlichtkrull et al., 2018) extends the GCN framework to support graphs with multiple types of edges. RGCN achieves this by breaking down the heterogeneous graph into multiple homogeneous ones, one for each edge type.
In each layer, GCN is applied to each homogeneous graph, and the resulting node embeddings are element-wise summed to form the final output. A drawback of RGCN is that it does not take node heterogeneity into account. Heterogeneous Graph Attention Networks (HAN) (Wang et al., 2019) generalize the Graph Attention Network (GAT) approach to heterogeneous graphs by considering messages between nodes connected by so-called meta-paths. Meta-paths (see the formal definition in Definition 2) are composite relationships between nodes that help to capture the rich structural information of heterogeneous graphs. HAN defines two sets of attention mechanisms. The first is between two different nodes, which is analogous to GAT. The second set of attention mechanisms is performed at the level of meta-paths, which computes the importance score of different composite relationships. Metapath Aggregated Graph Neural Network (MAGNN) (Fu et al., 2020) extends the approach of HAN by also considering the intermediary nodes along each meta-path (Dong et al., 2017). While HAN computes node-wise attention coefficients by only considering the features of the nodes at each end of the meta-path, MAGNN transforms all node features along the path into a single vector. The interest in GNNs is rapidly growing, and advancements in this field are consistently being made. There are several other methods available that haven't been discussed here. For a more comprehensive understanding, we once again refer to the survey conducted by Wu et al. (2020), which provides an overview of GNNs in general. Additionally, for insights specifically on Heterogeneous Network Representation Learning, including Heterogeneous GNNs, we refer to Yang et al. (2020) for a good overview. ## 3 Model In this section we define and describe our proposed heterogeneous GNN model, which is based on the generic framework from Message Passing Neural Network (MPNN) introduced in Gilmer et al. (2017). We start by introducing the original (homogeneous) MPNN model and algorithm before we move on to give precise definitions for our heterogeneous graph setup and present our novel extension of the MPNN model for heterogeneous networks. ### Message Passing Neural Network (MPNN) Gilmer et al. (2017) introduces the generic MPNN framework which is able to express a large group of different GNN models, including GCN, GraphSage, and GAT. This is done by formulating the message passing with two learned functions, \(M_{k}(\cdot)\), called the _message functions_, and \(U_{k}(\cdot)\), called the _node updated functions_, with forms to be specified later. The framework runs \(K\) message passing iterations \(k=1,\ldots,K\) between nodes along the edges that connect them. The node representation vectors are initialized as their feature vectors, \(\mathbf{h}_{v}^{0}=\mathbf{x}_{v}\), and the previous representation \(\mathbf{h}_{v}^{(k-1)}\) is the message that is being sent in each iteration \(k\). After \(K\) iterations, the final representation \(\mathbf{h}_{v}^{(K)}\) is passed on to an output layer to perform e.g. node-level prediction tasks. The message-passing function is defined as \[\mathbf{m}_{v}^{(k)} =\sum_{u\in N(v)}M_{k}(\mathbf{h}_{v}^{(k-1)},\mathbf{h}_{u}^{(k-1)},\mathbf{ r}_{uv}), \tag{1}\] \[\mathbf{h}_{v}^{(k)} =U_{k}(\mathbf{h}_{v}^{(k-1)},\mathbf{m}_{v}^{(k)}),\] where \(N(v)\) denotes the neighborhood of node \(v\), and \(\mathbf{r}_{uv}\) represents the edge features for the edge between node \(u\) and \(v\). 
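As a concrete illustration of (1), the message-passing recursion can be sketched as a single PyTorch layer as follows. The module name, the MLP forms chosen for \(M\) and \(U\), and the toy data are our assumptions for illustration; the paper's specific choices of \(M\) and \(U\) are given in Section 3.4.

```python
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    """One message-passing iteration of Eq. (1):
    m_v = sum_{u in N(v)} M(h_v, h_u, r_uv);  h_v <- U(h_v, m_v)."""
    def __init__(self, node_dim, edge_dim):
        super().__init__()
        self.M = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, node_dim), nn.ReLU())
        self.U = nn.Sequential(nn.Linear(2 * node_dim, node_dim), nn.ReLU())

    def forward(self, h, edge_index, edge_attr):
        # edge_index: (2, E) with rows (source u, target v); edge_attr: (E, edge_dim)
        u, v = edge_index
        msg = self.M(torch.cat([h[v], h[u], edge_attr], dim=-1))  # one message per edge
        agg = torch.zeros_like(h).index_add_(0, v, msg)           # sum messages per target node
        return self.U(torch.cat([h, agg], dim=-1))

h = torch.randn(5, 8)                               # 5 nodes, 8 features
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])   # edges 0->1, 1->2, 2->0
edge_attr = torch.randn(3, 4)                       # 3 edges, 4 edge features
print(MPNNLayer(8, 4)(h, edge_index, edge_attr).shape)  # torch.Size([5, 8])
```

Here, the per-edge messages are summed into each target node with `index_add_`, mirroring the sum over \(N(v)\) in (1).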
By defining specific forms of \(U_{k}(\cdot)\) and \(M_{k}(\cdot)\), a distinct GNN method is formulated. From the viewpoint of this paper, an essential attribute of the MPNN framework is that the learned message-passing function utilizes edge features. Not many other proposed GNNs incorporate edge features into their model. Gilmer et al. (2017) emphasize the importance of edge features in the dataset they experiment on. For our dataset, the edge features contain essential information related to transactions and are required to be utilized in an expressive model. ### Formal definitions Before we can formulate our _heterogeneous_ MPNN framework, we need to establish a precise definition of a heterogeneous graph, as well as a couple of additional concepts. We largely adopt the commonly used graph notation from Wu et al. (2020), and use a heterogeneous graph definition which is a slight modification to those in Yang et al. (2020) and Wang et al. (2019) in order to allow for multiple edges of different types between the same two nodes. **Definition 1**.: (Heterogeneous graph) A heterogeneous graph is represented as \(G=(V,E,\mathbf{X},\mathbf{R},Q^{V},Q^{E},\phi)\) where each node \(v\in V\) and each edge \(e\in E\) has a type, and \(Q^{V}\) and \(Q^{E}\) denote finite sets of predefined node types and edge types, respectively. Each node \(v\in V\) has a node type \(\phi(v)=\nu\in Q^{V}\), where \(\phi(\cdot)\) is a node type mapping function. Further, for \(\phi(v)=\nu\), \(v\) has features \(\mathbf{x}_{v}^{\nu}\in\mathbf{X}^{\nu}\), where \(\mathbf{X}^{\nu}=\{\mathbf{x}_{v}^{\nu}\mid v\in V,\,\phi(v)=\nu\}\) and \(\mathbf{X}=\{\mathbf{X}^{\nu}\mid\nu\in Q^{V}\}\). The dimension and specifications of the node feature \(\mathbf{x}_{v}^{\nu}\) may be different for different node types \(\nu\). Further, let us denote by \(e_{uv}^{\varepsilon}\) an edge of type \(\varepsilon\in Q^{E}\) pointing from node \(u\) to \(v\). Each edge \(e_{uv}^{\varepsilon}\) has features \(\mathbf{r}_{uv}^{\varepsilon}\in\mathbf{R}^{\varepsilon}\), where \(\mathbf{R}^{\varepsilon}=\{\mathbf{r}_{uv}^{\varepsilon}\mid u,v\in V\}\) and \(\mathbf{R}=\{\mathbf{R}^{\varepsilon}\mid\varepsilon\in Q^{E}\}\). Just like for nodes, the edge features may have different dimensions for different edge types. To formulate heterogeneous message passing, we will use the concept of _meta-paths_. Meta-paths are commonly used to extend methods from a homogeneous to a heterogeneous graph. For example, Dong et al. (2017) use meta-paths when introducing _metapath2vec_, which extends _DeepWalk_(Perozzi et al., 2014) and the closely related _node2vec_(Grover and Leskovec, 2016) to a method applicable on heterogeneous graphs. Wang et al. (2019) use meta-paths to generalize the approach of graph attention networks (Velickovic et al., 2017) to that of heterogeneous graphs when formulating _heterogeneous graph attention network_ (HAN). In addition to meta-path, the below definition introduces our own term, _meta-steps_, which we use when formulating our model. **Definition 2**.: (Meta-path, meta-step) A _Meta-path_ belonging to a heterogeneous graph \(G\) is a sequence of specific edge types between specific node types, \[(\nu_{0},\varepsilon_{1},\nu_{1},\varepsilon_{2},\ldots,\nu_{k-1},\varepsilon_ {k},\nu_{k}),\quad\nu_{i}\in Q^{V},\varepsilon_{i}\in Q^{E}.\] Here, \(k\) is the length of the meta-path. 
Let \(S\) be the set of meta-paths of length 1, \[S=\{s=(\mu,\varepsilon,\nu)\mid\mu,\nu\in Q^{V},\varepsilon\in Q^{E}\},\] and refer to the elements \(s\in S\) as _meta-steps_. Finally, we introduce a definition of node neighborhood over a specific meta-step:

**Definition 3**.: (Meta-step specific node neighborhood) Let \(N_{\mu}^{\varepsilon}(v)\) be the set of nodes of type \(\mu\) which are connected to node \(v\) by an edge of type \(\varepsilon\) pointing to \(v\): \[N_{\mu}^{\varepsilon}(v)=\{u\in V\mid\phi(u)=\mu,e_{uv}^{\varepsilon}\in E\}.\] We call \(N_{\mu}^{\varepsilon}(v)\) the (incoming) node neighborhood of node \(v\) with respect to the meta-step \(s=(\mu,\varepsilon,\phi(v))\in S\).

### Heterogeneous MPNN

We are now ready to formulate our heterogeneous version of the MPNN method (HMPNN). The complete algorithm is provided in Algorithm 1. Our approach for extending a homogeneous GNN to a heterogeneous one is essentially the same as that used by Schlichtkrull et al. (2018), where they generalize GCN to the method _Relational Graph Convolutional Network_ (RGCN), applicable on graphs with multiple edge types. The algorithm performs (at each iteration) multiple MPNN message passing operations, one for each meta-step \(s\in S\) in the graph. Each of these has its separate learned functions \(M_{k}^{s}(\cdot)=M_{k}^{(\mu,\varepsilon,\nu)}(\cdot)\) and \(U_{k}^{s}(\cdot)=U_{k}^{(\mu,\varepsilon,\nu)}(\cdot)\), which allows the method to learn the context of each meta-step, and also allows message passing between nodes and across edges with varying numbers of features. The intermediate output of this process is multiple representation vectors for each node. To reduce these to a single vector, they are aggregated by a learned aggregation function \(A^{\nu}(\cdot)\) which is specific to each node type. In line with the generic formulation of MPNN, we do not specify a specific form of the aggregation function in Algorithm 1. During the experiments, we have assessed two alternative options, which we will discuss shortly.
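The meta-step specific neighborhoods \(N_{\mu}^{\varepsilon}(v)\) of Definition 3, which drive the inner loop of Algorithm 1 below, are cheap to precompute from a typed edge list. A toy sketch (all names hypothetical):

```python
from collections import defaultdict

# Toy typed edge list: (source, edge_type, target); node types given by phi.
phi = {"a": "individual", "b": "individual", "c": "organization"}
edges = [("a", "txn", "b"), ("b", "txn", "a"), ("a", "role", "c")]

# N[(mu, eps, v)] = incoming neighbors of v of type mu over edge type eps (Definition 3).
N = defaultdict(set)
for u, eps, v in edges:
    N[(phi[u], eps, v)].add(u)

print(N[("individual", "txn", "a")])   # {'b'}
print(N[("individual", "role", "c")])  # {'a'}
```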
Source code is available here: [https://github.com/fredjo89/heterogeneous-mpnn](https://github.com/fredjo89/heterogeneous-mpnn)

```
Initialize the representation of each node as its feature vector,
    h_v^(0) = x_v^(nu), for all v in V of type nu = phi(v).
for k = 1, ..., K do                                # for each iteration
  for v in V do                                     # for each node
    for mu in Q^V, eps in Q^E such that N_mu^eps(v) is nonempty do   # for each meta-step ending at phi(v)
      Compute the new representation for node v of type nu = phi(v), for the specific meta-step:
        m_v^(mu,eps,k) = sum over u in N_mu^eps(v) of M_k^(mu,eps,nu)(h_v^(k-1), h_u^(k-1), r_uv^eps)
        h_v^(mu,eps,k) = U_k^(mu,eps,nu)(h_v^(k-1), m_v^(mu,eps,k))
    end for
    Aggregate the representations from the multiple meta-steps into a single new representation for that node:
        h_v^(k) = A_k^(nu)({h_v^(mu,eps,k) | mu in Q^V, eps in Q^E})
  end for
end for
```
**Algorithm 1** Heterogeneous MPNN

### Choosing specific forms of the functions

As our setup is very general, Algorithm 1 gives rise to a whole range of new heterogeneous GNN methods. By defining specific forms of \(U_{k}^{(\mu,\varepsilon,\nu)}(\cdot)\), \(M_{k}^{(\mu,\varepsilon,\nu)}(\cdot)\) and \(A_{k}^{(\nu)}(\cdot)\), a distinct GNN method is formulated. Note that there is nothing that prevents us from choosing different forms of these functions for different meta-steps or iterations. However, for the AML use case in Section 4, we have limited the scope to a single form for each of the three functions, respectively. These are described in the following.

As message function, we use the same as Gilmer et al. (2017): \[M_{k}^{(\mu,\varepsilon,\nu)}(\mathbf{h}_{v}^{(k-1)},\mathbf{h}_{u}^{(k-1)},\mathbf{r}_{uv}^{\varepsilon})=g_{k}^{(\mu,\varepsilon,\nu)}(\mathbf{r}_{uv}^{\varepsilon})\mathbf{h}_{u}^{(k-1)}.\] Here, \(g_{k}^{(\mu,\varepsilon,\nu)}(\cdot)\) is a single-layer neural network which maps the edge feature vector \(\mathbf{r}_{uv}^{\varepsilon}\) to a \(d^{v}\times d^{u}\) matrix, where \(d^{u}\) and \(d^{v}\) are the number of features for the sending and receiving node type, respectively. As update function we use \[U_{k}^{(\mu,\varepsilon,\nu)}\big(\mathbf{h}_{v}^{(k-1)},\mathbf{m}_{v}^{(\mu,\varepsilon,k)}\big)=\sigma\Big(\mathbf{m}_{v}^{(\mu,\varepsilon,k)}+\mathbf{B}_{k}^{(\mu,\varepsilon,\nu)}\mathbf{h}_{v}^{(k-1)}\Big),\] where \(\mathbf{B}_{k}^{(\mu,\varepsilon,\nu)}\) is a matrix. Note that for a homogeneous graph, our choices of \(M(\cdot)\) and \(U(\cdot)\) are similar to those in Hamilton et al. (2017), except that the matrix applied in the message function is conditioned on the edge features rather than being the same across all edges. For the aggregation function, we consider two alternatives. The first is to take the sum of the vectors from each meta-step before performing a nonlinear transformation, in the same fashion as Schlichtkrull et al. (2018): \[A_{k}^{(\nu)}\big(\{\mathbf{h}_{v}^{(\mu,\varepsilon,k)}\mid\mu\in Q^{V},\varepsilon\in Q^{E}\}\big)=\sigma\bigg(\sum_{\mu\in Q^{V},\varepsilon\in Q^{E}}\mathbf{h}_{v}^{(\mu,\varepsilon,k)}\bigg). \tag{2}\] Here, \(\sigma(\cdot)\) is the sigmoid function.
In the second aggregation method, the vectors \(\mathbf{h}_{v}^{(\mu,\varepsilon,k)}\) are concatenated into a single vector. A single-layer perceptron (neural network) is then applied to it, and outputs the new representation: \[A_{k}^{(\nu)}\big(\{\mathbf{h}_{v}^{(\mu,\varepsilon,k)}\mid\mu\in Q^{V},\varepsilon\in Q^{E}\}\big)=\sigma\Big(W_{k}^{(\nu)}\underset{\mu\in Q^{V},\varepsilon\in Q^{E}}{||}\sigma\big(\mathbf{h}_{v}^{(\mu,\varepsilon,k)}\big)\Big). \tag{3}\] We denote the two resulting models _HMPNN-sum_ and _HMPNN-ct_, respectively. Figure 2 illustrates the architectural extension of the homogeneous MPNN model to our HMPNN model (HMPNN-ct) as messages are passed to one of the node types, where (3) is used as aggregation function.

Figure 2: Illustration of the architectural extension of the homogeneous MPNN model to our HMPNN model (HMPNN-ct) as messages are passed to one of the node types (A), from three node types (A, B, and C) and with three edge types (1, 2 and 3). SLP is short for Single Layer Perceptron (neural network). Analogous architectures are used for messages passed to the other node types.

## 4 AML use case

In this section, we first describe the heterogeneous graph data. Then we lay out the setup of the experiments we have performed on this dataset before we provide the results.

### Data

The graph is established based on customer and transaction data from Norway's largest bank, DNB, in the period from February 1, 2022, to January 31, 2023. The nodes in the graph represent entities that are senders and/or recipients of financial transactions. If two entities participate in a transaction with each other, this is represented by an edge (of type transaction) between the two, where the direction of the edge points from the sender to the recipient. There are in total 5 million nodes and 9 million such edges in the graph. There are three types of nodes in the graph. The first one is called _individual_ and represents a human individual's customer relationship in the bank. It includes all of the individual's accounts in the bank3. The second type of node is called _organization_, and represents an organization's or company's customer relationship in the bank in the same manner as a node representing an individual. The third type of node is called _external_ and represents a sender or recipient of a transaction that is outside of the bank.

Footnote 3: Transactions made to/from any of the accounts in the bank belonging to the customer will result in an edge to/from the node representing the individual.

The majority of the edges in the graph represent the presence of a financial transaction between different individuals/organizations/external entities in the edge direction. In addition to this edge type, the graph includes role as a second edge type. That edge points from an individual to an organization if the individual occupies a position on the board, is the CEO, or holds ownership in the organization. The resulting graph is directed and heterogeneous with respect to both nodes and edges. Figure 3 shows the schema of the graph, including the nine possible meta-steps. As shown in the schema, there are no edges between different external nodes, since the bank does not have access to transactions not involving their customers.

Figure 3: The schema of the graph that has been the subject of our experiments, with the nodes representing customer relationships in DNB outlined by the grey ellipse. Here, the transaction edges are abbreviated as _Txn_.

The nodes that represent individuals are assigned a binary class (0 for regular individuals, and 1 for individuals known to conduct suspicious behavior). As mentioned in the introduction, the data only contains labels for individual nodes.
Suspicious individuals are defined as those that have been the subject of an AML case (stage 2 in Figure 1) during a certain time window. Note that customers implicated in cases that were not reported to the FIU are still defined as suspicious. This decision was made because our objective is to model suspicious activity, which these customers certainly have conducted, even though the suspiciousness was diminished by a close manual inspection. Less than 0.5% of the individuals belong to class 1 (suspicious). Due to the sensitive nature of these data, containing both personal and possibly competition-sensitive information for the bank, the data are not shareable. We are neither allowed to reveal the exact details of the graphs nor the features associated with the different nodes/edges in our model. Below, we give a broad overview of the characteristics and features of the graph, within our permission restrictions.

To get a feel for the local characteristics of the graph, Figure 4 shows egonets of four random nodes, with the starting node enlarged. The upper and lower panels show, respectively, the 3-hop and 9-hop egonets of the (undirected) transaction and role edges. The number of shown hops was chosen to balance presentability and amount of detail. Moreover, Figure 5 shows histograms of the degree centrality for the three different node types. Some nodes have a large number of neighbors, while others have few. While the degree distributions of individuals and organizations are similar, the distribution for organizations has a thicker tail, indicating that it is more common for organizations to exhibit a higher degree. As we do not have knowledge of edges between external nodes, this node type typically has much fewer neighbors. The three node types have separate sets of features that contain basic information about the entity. There are eleven, eight, and two node features for, respectively, individuals, organizations, and external nodes. The transaction edges have as features the number and monetary amount of transactions made within the one-year period. The role edges have two features, the first denoting the role type and the second the ownership percentage (provided the role represents ownership in the organization).

### Experiment setup

The goal of this use case and experiment is to see to what extent our HMPNN method is able to predict the label (suspicious/regular) on nodes where the label is unknown to the model. As mentioned in the Introduction, we concentrate on building models for detecting fraudulent _individual_ customers. This means that even if we use the entire graph for message passing, only individual nodes (in the training set) are assigned a label to learn from, and only individual nodes (in the test set) are subsequently predicted and evaluated. To evaluate the performance of our methods, we benchmark and compare their performance to a set of alternative models/methods. To mimic a scenario with unknown labels, we split the nodes into a training set and a test set. The whole graph is available for each of the models when training them, but the labels are only available for the nodes in the training set. At testing time, each method attempts to predict the (unknown) nodes in the test set, and their performance is compared using different performance measures.
We used a 70-30 train-test split, where the splitting was performed using stratified random sampling with allocation proportional to the original class balance, such that the class balance is preserved in the two sets. The same split was used for all methods.

Figure 4: Homogeneous egonets for four random nodes in the graph. The upper two panels show 3-hop egonets of the (undirected) transaction edges in the graph. The lower two panels show 9-hop egonets for the (undirected) role edges in the graph. The starting node is enlarged.

We compare the results of four model frameworks: two non-graph methods supplied with additional node features, and two GNN methods. The models are applied with multiple model complexity configurations. To enhance the non-graph methods, we generated 83 additional node features that accompany the original node features. These include 11 summaries of the node-neighborhoods, i.e. the number of neighbors of different types, both incoming and outgoing. We also computed 8 weighted summaries of node-neighborhoods, specifically for transaction edges and the monetary amount node feature. Additionally, we added 64 features generated using metapath2vec. These were assembled from embeddings of dimension 8 generated for each meta-path of length 2 starting and ending at a node of type individual. Together with the 11 intrinsic node features, this results in a total of 94 features. For more detailed information on these additional node features, please refer to Appendix A. The four methods are briefly described below:

**Logistic regression**: This classic method serves as a very basic benchmark, not directly utilizing the network information and with a basic and inflexible parametric form. The method has access to the additional network-generated node features.

**Regular Neural Network**: This method, applied with both 1 and 2 hidden layers, is much more flexible than the logistic regression model, but it does not utilize the network directly either. The method has access to the additional network-generated node features.

**HGraphSage**: GraphSage (Hamilton et al., 2017) is a well-known homogeneous GNN method and is applied to our heterogeneous graph in the same fashion as HMPNN, as described below. In contrast to MPNN, GraphSage does not utilize edge features. We therefore test its performance both with and without additional node features that hold the weighted in/out degrees for the transaction edges, where the weight is the transaction amount on the edges. This results in 6 additional node features for nodes of type _individual_ and _organization_, and 4 on nodes of type _external_. We test the method with the sum aggregation function in (2). We applied the method with one, two, and three hidden layers.

**HMPNN**: This GNN method, described in Section 3.3, is our extension of MPNN to heterogeneous graphs. We test the method with the two aggregation functions in (2) (HMPNN-sum) and (3) (HMPNN-ct). We applied the method with one, two, and three hidden layers for each of the aggregation methods.

In total, our experiment contains 15 different models/method variants. The number of parameters involved in each of these is listed in Table 4 in the Appendix. The logistic regression and Regular Neural Network models were trained using the open source deep learning library PyTorch (Paszke et al., 2019). HMPNN and HGraphSage were trained using the open source library PyTorch Geometric (PyG), which expands PyTorch with utilities for representing and training GNNs.
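To make the HMPNN-ct architecture concrete, the following self-contained pure-PyTorch sketch shows one layer combining the edge-conditioned message function \(g_{k}^{(\mu,\varepsilon,\nu)}(\mathbf{r}_{uv}^{\varepsilon})\mathbf{h}_{u}\) of Section 3.4 with the concatenation aggregation of Eq. (3). The class name, toy dimensions, and random edges are our assumptions for illustration; the actual implementation is the one in the linked repository.

```python
import torch
import torch.nn as nn

class HMPNNctLayer(nn.Module):
    """One HMPNN-ct iteration: an edge-conditioned message function per meta-step
    (mu, eps, nu), followed by the concatenation aggregation of Eq. (3)."""
    def __init__(self, dims, meta_steps, edge_dims):
        super().__init__()
        self.dims, self.meta_steps = dims, meta_steps
        self.g = nn.ModuleDict()   # maps edge features -> (d_nu x d_mu) message matrix
        self.B = nn.ModuleDict()   # self-transform in the update function
        for s in meta_steps:
            mu, eps, nu = s
            key = "__".join(s)
            self.g[key] = nn.Linear(edge_dims[eps], dims[nu] * dims[mu])
            self.B[key] = nn.Linear(dims[nu], dims[nu], bias=False)
        # Eq. (3): concatenate per-meta-step embeddings, then a single-layer NN per node type.
        self.W = nn.ModuleDict()
        for nu in dims:
            n_in = sum(dims[nu] for s in meta_steps if s[2] == nu)
            if n_in:
                self.W[nu] = nn.Linear(n_in, dims[nu])

    def forward(self, h, edges):
        # h: {node_type: (N_type, d_type)}; edges: {meta_step: (edge_index, edge_attr)}
        per_step = {nu: [] for nu in self.dims}
        for s in self.meta_steps:
            mu, eps, nu = s
            key = "__".join(s)
            (u, v), r = edges[s]
            Wuv = self.g[key](r).view(-1, self.dims[nu], self.dims[mu])  # one matrix per edge
            msg = torch.bmm(Wuv, h[mu][u].unsqueeze(-1)).squeeze(-1)     # g(r_uv) h_u
            agg = torch.zeros(h[nu].shape[0], self.dims[nu]).index_add_(0, v, msg)
            per_step[nu].append(torch.sigmoid(agg + self.B[key](h[nu])))
        return {nu: torch.sigmoid(self.W[nu](torch.cat(hs, dim=-1)))
                for nu, hs in per_step.items() if hs}

dims = {"individual": 8, "organization": 6, "external": 4}
edge_dims = {"txn": 2, "role": 2}
steps = [("individual", "txn", "individual"), ("individual", "role", "organization")]
h = {k: torch.randn(10, d) for k, d in dims.items()}
edges = {s: (torch.randint(0, 10, (2, 15)), torch.randn(15, 2)) for s in steps}
out = HMPNNctLayer(dims, steps, edge_dims)(h, edges)
print({k: v.shape for k, v in out.items()})
```

Replacing the concatenation and final linear map with a plain sum over meta-steps would give the HMPNN-sum variant of Eq. (2) instead.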
All models were trained with the Adam optimiser (Kingma & Ba, 2014), using the Binary Cross Entropy loss function: \[\text{Loss}(\hat{y}_{v},y_{v})=-\big(y_{v}\cdot\log(\hat{y}_{v})+(1-y_{v})\cdot\log(1-\hat{y}_{v})\big).\]

Figure 5: Histogram showing the distribution of the degree centrality for the three different node types, completely ignoring the edge type.

The hyperparameters for each of the methods were tuned using 5-fold cross-validation on the training set. Here, three hyperparameters were determined: (1) the regularization strength, (2) the learning rate, and (3) the number of training iterations, i.e. early stopping. For all methods, the learning rate was in the range \([10^{-4},10^{-1}]\). As for regularization, the \(L_{2}\)-constraint was used, and was in the range \([10^{-8},10^{-1}]\). The optimal value for the \(L_{2}\) constraint was highly dependent on the complexity of the model to be trained. The experiments were carried out using _python 3.7.0_, with PyTorch 1.12.1 and PyG 2.2.0. The computer used to run the experiments had 8 CPUs of the type _High-frequency Intel Xeon E5-2686 v4 (Broadwell) processors_, with 61GB shared memory, and one GPU of type _NVIDIA Tesla V100_ with 16GB memory. This GPU has 5,120 CUDA Cores and 640 Tensor Cores.
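For reference, the two performance measures used in the next section can be computed as follows. This is a sketch on synthetic scores (the ~0.5% positive rate mimics the class imbalance in the data) using scikit-learn, which is an assumption for illustration and not part of the paper's toolchain.

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.005, size=100_000)                      # ~0.5% positives
y_score = np.clip(rng.normal(0.1, 0.1, 100_000) + 0.3 * y_true, 0.0, 1.0)

precision, recall, _ = precision_recall_curve(y_true, y_score)
print("PR AUC :", auc(recall, precision))    # area under the precision/recall curve
print("ROC AUC:", roc_auc_score(y_true, y_score))  # area under the ROC curve
```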
Figure 6: ROC AUC and PR AUC on the test set for all the different methods and numbers of neural network layers used by the methods. The Regular Neural Network with one layer corresponds to Logistic Regression.

Thus, we believe that the lack of performance for the other network models is related to an inappropriate and inefficient architecture compared to that of HMPNN-ct. The fact that, overall, HMPNN-sum does not perform on par with HMPNN-ct further indicates that the performance boost in HMPNN-ct is mainly due to the architectural trick of the last single-layer neural network. Considering the limitations in resources faced by banks, conducting thorough examinations of a substantial volume of suspicious cases is typically infeasible. Therefore, the primary purpose of the model is to generate a limited set of high-quality predictions where money laundering is likely to occur, meaning that the precision at small to medium-sized recall levels is more relevant than at higher ones. Table 1 shows the precision corresponding to recall levels of 1%, 5%, 10% and 50%, and allows studying the performance of the methods in greater depth and over a wider range. Focusing on HMPNN-ct, we see that when the classification threshold is set such that we identify 1% of the suspicious customers (recall = 1%), two-thirds of those classified as suspicious _are_ actually suspicious (precision \(\approx 67\%\)). Increasing the recall level to 5% or 10% gives precisions of about 58% and 51%. Moreover, if we decrease the threshold such that half of the suspicious customers (recall = 50%) are identified, 90% of the customers classified as suspicious are not really suspicious. These rates may not seem impressive at first glance. Considering the severe class imbalance in the data (less than 0.5% of the total number of observations are suspicious), and the fact that detection of money laundering is a notoriously difficult problem, these performance scores are, actually, very promising. Finally, note that even though the 3-layer HMPNN-sum model performs worse than HMPNN-ct overall and for the larger recalls, it obtains a significantly better precision (84% vs 67%) at recall 1%. In essence, this model is better at detecting the most evident instances of money laundering. This aspect is crucial to consider when selecting a model, particularly if resource limitations restrict the investigation to a small number of customers for potential money laundering.

## 5 Summary and concluding remarks

The present paper proposed and applied a heterogeneous extension of the homogeneous GNN model, MPNN (Gilmer et al., 2017), to detect money laundering in a large-scale real-world heterogeneous graph. The graph is derived from data originating from Norway's largest bank, encompassing 5 million nodes and close to 10 million edges, and comprises customer data, transaction data, and business role data. Our heterogeneous MPNN model (HMPNN) incorporates distinct message-passing operators for each combination of node and edge types to account for the graph's heterogeneity. Two versions of the model are proposed: HMPNN-sum and HMPNN-ct.
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
\multirow{2}{*}{Model} & Number & \multicolumn{4}{c}{Recall(\%)} & PR & ROC \\
\cline{3-6}
 & of Layers & 1 & 5 & 10 & 50 & AUC & AUC \\
\hline
\multirow{3}{*}{Regular Neural Network} & 1 & 61.54 & 36.28 & 28.55 & 5.53 & 0.1075 & 0.8547 \\
 & 2 & 64.00 & 43.82 & 30.51 & 5.89 & 0.1173 & 0.8613 \\
 & 3 & 61.54 & 47.56 & 34.68 & 5.88 & 0.1237 & 0.8612 \\
\hline
\multirow{3}{*}{HGraphSage} & 1 & 59.26 & 33.62 & 20.58 & 4.01 & 0.0836 & 0.8329 \\
 & 2 & 61.54 & 48.15 & 38.56 & 6.51 & 0.1280 & 0.8879 \\
 & 3 & 59.26 & 60.47 & 42.47 & 7.19 & 0.1452 & 0.8960 \\
\hline
\multirow{3}{*}{HGraphSage (extra features)} & 1 & 48.48 & 34.21 & 21.29 & 4.24 & 0.0858 & 0.8405 \\
 & 2 & 64.00 & 50.65 & 35.39 & 7.41 & 0.1368 & 0.8882 \\
 & 3 & 69.57 & 50.32 & 38.27 & 7.68 & 0.1424 & 0.8915 \\
\hline
\multirow{3}{*}{HMPNN-sum} & 1 & 64.00 & 38.24 & 29.75 & 5.34 & 0.1090 & 0.8401 \\
 & 2 & 69.57 & 42.62 & 36.38 & 7.76 & 0.1359 & 0.8863 \\
 & 3 & 84.21 & 53.79 & 38.27 & 8.67 & 0.1532 & 0.8955 \\
\hline
\multirow{3}{*}{HMPNN-ct} & 1 & 76.19 & 48.75 & 38.18 & 6.92 & 0.1418 & 0.8801 \\
 & 2 & 61.54 & 50.65 & 43.66 & 8.96 & 0.1555 & 0.8989 \\
 & 3 & 66.67 & 58.21 & 50.99 & 10.25 & 0.1800 & 0.9083 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Precision at specific values of Recall, in addition to PR AUC and ROC AUC for the different models.

Notably, HMPNN-ct used a novel strategy for constructing the final node embeddings, as the embeddings from each node-edge operator were concatenated and fed into a final single-layer neural network. Overall, this version outperformed HMPNN-sum as well as all alternative models by a significant margin. HMPNN-sum was, however, the best model at recall = 1%, i.e. it was most accurate for the customers assigned the largest probabilities of being suspicious. As we saw from the overall measures in Figure 6, all models performed best when fitted using 3 hidden layers. We might have gotten even better performance by increasing the number of layers further. However, 3 layers is where we hit the memory limit on our GPU, so we were unable to explore this in practice. It is worth noting that the HMPNN-ct architecture is also clearly the most successful when the number of layers is restricted to 2 or 1. This is relevant as larger networks, and/or less computational resources or training time available, may in other situations demand a reduction in the number of layers. From Table 4 in the Appendix, we also see that, apart from the regular neural network model, HMPNN-ct has the fewest model parameters of the 3-layer models, indicating that it has an efficient architecture. Customers labeled as "regular" may, in fact, be suspicious and potentially involved in money laundering - they just haven't been controlled in the existing AML system and are therefore labeled as "regular". This scenario holds true for practically all instances of money laundering modeling. Consequently, if a customer labeled as "regular" is assigned a high probability of being suspicious by the model, it is possible that the customer has been mislabeled. As a result, modeling test phases with a labeled test set, as outlined here, can also serve as a means to generate suggestions for customers who warrant further investigation into their past behaviors. In other words, the modeling approach allows for the identification of customers who may require re-examination based on the model's predictions, even if they were initially labeled as "regular".
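The threshold-based metrics discussed above are straightforward to derive from the model scores. The following is a minimal sketch using scikit-learn, not the evaluation code used in our experiments; the function name `evaluate` and the default recall targets are chosen for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

def evaluate(y_true, scores, recall_levels=(0.01, 0.05, 0.10, 0.50)):
    # ROC AUC: recall (TPR) as a function of the false positive rate.
    roc = roc_auc_score(y_true, scores)
    # PR AUC: precision as a function of recall.
    precision, recall, _ = precision_recall_curve(y_true, scores)
    pr = auc(recall, precision)
    # Precision at fixed recall levels, as in Table 1: the best precision
    # attainable while keeping recall at or above the target level.
    prec_at = {r: precision[recall >= r].max() for r in recall_levels}
    return roc, pr, prec_at
```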
When implementing a predictive model for suspicious transactions within a real AML system, several crucial decisions need to be made. One vital consideration involves determining the optimal stage in the process (see Figure 1) for applying the predictions: either preceding the alert inspection or preceding the case investigation. If the predictions are used prior to the alert inspection, it might be preferable to set a classification threshold with a higher recall. On the other hand, if the predictions are applied before the case investigation, it may be more appropriate to select a more stringent threshold, i.e. that has a lower recall. This would help minimize the allocation of investigation resources towards false positives, leading to greater efficiency. In order to increase the performance of our money laundering modeling approach, some aspects become readily apparent. While our data set is rich in terms of the number of nodes and edges, contains network data from both financial transactions and professional roles, and has a large number of edge and node features, it can always be richer. In our setup, the transaction edges contain the number and total monetary amount made in the one-year period. This could be refined by also including the standard deviation, median, and other summary statistics like in Jullum et al. (2020). Moreover, provided high-quality data can be obtained, it would be valuable to include customer links using connections from social media platforms, geographical information such as shared address or phone numbers, or even family relations. As mentioned, organizational customers are fewer in number and exhibit less homogeneity compared to individuals, rendering them less suitable for modeling compared to individuals. It still presents a natural candidate for further work to develop models that make predictions on the bank's organization customers. However, to truly see the potential of such a model, we believe that it is essential to expand the dataset in time to include more fraudulent organizations to learn from and enrich the data with more organization-specific features. A significant limitation that applies to our work, as well as most endeavors related to money laundering detection, is the restricted nature of the data, which is confined to customers within a single bank. Although our graph includes "external" customers from other banks, the transactions and professional role links between customers in external banks are unavailable. This is primarily due to banks being either unwilling or prohibited from merging their customer information with other banks, due to security regulations or competitive considerations. Surmounting these data-sharing challenges among prominent financial institutions would enable the modeling of a more comprehensive network of transactions and relationships, leaving fewer hiding places for money launderers. Nevertheless, the administration of such collaborative, analytical, and modeling systems demands substantial resources and investments, a process that may take years to accumulate. In any case, to the best of our knowledge, no scientific work has been published regarding the utilization of heterogeneous graph neural networks in the context of detecting money laundering on a large-scale real-world graph. This paper should be viewed as a first attempt to leverage heterogeneous GNN architecture within AML and has showcased promising outcomes. 
We envision that our paper will provide invaluable perspectives and directions for scholars and professionals engaged in money laundering modeling. Ultimately, we hope this contribution will aid in the continuous endeavors to combat money laundering. ## Code availability The implementation of our HMPNN model is available here: [https://github.com/fredjo89/heterogeneous-mpnn](https://github.com/fredjo89/heterogeneous-mpnn) ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements Funding: This work was supported by the Norwegian Research Council [grant number 237718]. ## Appendix A Network features This appendix provides details of how we created additional node features that capture information about a node's role in the network. These were utilized in our entity-based models (and HGraphSage) for benchmarking in Section 4. We generated a total of 83 additional features, which when combined with the original 11 intrinsic node features, resulted in a total of 94 features. These additional features are categorized into three distinct types: 1) Unweighted Neighborhood summary consisting of 11 features, 2) Weighted Neighborhood summary consisting of 8 features, and 3) Metapath2vec embeddings with a total of 64 features. _Unweighted Neighborhood Summary._ The unweighted neighborhood summary features encapsulate the (unweighted) in/out degree for each meta-step that nodes of type individual are part of, see Figure 3. This amounts to seven features, six from transaction edges and one from role edges. Further, four summary features are added: 1) The total in-degree, which is the count of all incoming edges, irrespective of the meta-step, 2) The total out-degree, which is the count of all outgoing edges, irrespective of the meta-step, 3) The total degree, which is the count of all edges, irrespective of the meta-step or the direction, 4) The total count of meta-steps in which the node is involved. In total, this gives 11 additional node features. _Weighted Neighborhood Summary._ The weighted neighborhood summary features contain the weighted in/out degree for each meta-step involving transaction edge. Here the edge feature representing the monetary amount was used as edge weight. This provides six features for a node of type individual. Further, two summary features are added: 1) The total weighted in-degree, 2) The total weighted out-degree, In total, this gives 8 additional node features. _Metapath2vec Embeddings._ Metapath2Vec was used to generate embeddings for each of the four meta-paths shown in Table 2. These are all meta-paths of length 2 that start and end at a node of type individual. The dimension of the embedding for each meta-path was set to 8. The embeddings were added as features on both the node at the start and end of the meta-path. This amounts to 64 additional node features. We used the implementation of MetaPath2Vec in Pytorch Geometric. Table 3 lists the parameters used in the creation of the embeddings. ## Appendix B Network parameters Table 4 shows the number of parameters for each of the models discussed in Section 4. 
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
\multirow{2}{*}{Model} & \multicolumn{3}{c}{Number of Layers} \\
\cline{2-4}
 & 1 & 2 & 3 \\
\hline
Regular Neural Network & 95 & 9,025 & 17,955 \\
HGraphSage & 189 & 3,999 & 7,809 \\
HGraphSage (extra features) & 329 & 13,045 & 25,761 \\
HMPNN-sum & 296 & 6,536 & 12,776 \\
HMPNN-ct & 3,071 & 4,487 & 6,303 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Number of model parameters for the different models.
2307.01937
A Neural Network-Based Enrichment of Reproducing Kernel Approximation for Modeling Brittle Fracture
Numerical modeling of localizations is a challenging task due to the evolving rough solution in which the localization paths are not predefined. Despite decades of efforts, there is a need for innovative discretization-independent computational methods to predict the evolution of localizations. In this work, an improved version of the neural network-enhanced Reproducing Kernel Particle Method (NN-RKPM) is proposed for modeling brittle fracture. In the proposed method, a background reproducing kernel (RK) approximation defined on a coarse and uniform discretization is enriched by a neural network (NN) approximation under a Partition of Unity framework. In the NN approximation, the deep neural network automatically locates and inserts regularized discontinuities in the function space. The NN-based enrichment functions are then patched together with RK approximation functions using RK as a Partition of Unity patching function. The optimum NN parameters defining the location, orientation, and displacement distribution across location together with RK approximation coefficients are obtained via the energy-based loss function minimization. To regularize the NN-RK approximation, a constraint on the spatial gradient of the parametric coordinates is imposed in the loss function. Analysis of the convergence properties shows that the solution convergence of the proposed method is guaranteed. The effectiveness of the proposed method is demonstrated by a series of numerical examples involving damage propagation and branching.
Jonghyuk Baek, Jiun-Shyan Chen
2023-07-04T21:52:09Z
http://arxiv.org/abs/2307.01937v1
**A Neural Network-Based Enrichment of Reproducing Kernel Approximation for Modeling Brittle Fracture** ###### Abstract Numerical modeling of localizations is a challenging task due to the evolving rough solution in which the localization paths are not predefined. Despite decades of efforts, there is a need for innovative discretization-independent computational methods to predict the evolution of localizations. In this work, an improved version of the neural network-enhanced Reproducing Kernel Particle Method (NN-RKPM) is proposed for modeling brittle fracture. In the proposed method, a background reproducing kernel (RK) approximation defined on a coarse and uniform discretization is enriched by a neural network (NN) approximation under a Partition of Unity framework. In the NN approximation, the deep neural network automatically locates and inserts regularized discontinuities in the function space. The NN-based enrichment functions are then patched together with RK approximation functions using RK as a Partition of Unity patching function. The optimum NN parameters defining the location, orientation, and displacement distribution across location together with RK approximation coefficients are obtained via the energy-based loss function minimization. To regularize the NN-RK approximation, a constraint on the spatial gradient of the parametric coordinates is imposed in the loss function. Analysis of the convergence properties shows that the solution convergence of the proposed method is guaranteed. The effectiveness of the proposed method is demonstrated by a series of numerical examples involving damage propagation and branching. _Keywords: neural network, enrichment, reproducing kernel, fracture, damage_ ## 1 Introduction Neural networks (NNs) have been shown to have powerful approximation ability [1, 2]. The strong adaptivity and hidden information extraction capability have made deep neural networks a core element of machine learning in various applications. This feature also makes NNs appealing for solving challenging problems in computational mechanics. For example, data-driven computations for path-dependent material modeling [3, 4, 5, 6, 7, 8], reduced order modeling [9, 10], and parameter identification [11, 12, 13]. Additionally, the flexible adaptivity in NN allows an approximation space to be goal-specifically optimized. Utilizing this flexibility in the approximation space, NNs can be considered an alternative to traditional mesh-based methods in solving challenging problems involving localizations, such as fracture, for which special treatment is needed near the localizations. Traditional approaches for fracture modeling can be divided into two broad categories: discrete crack approaches and diffuse crack approaches. The former category includes extended or generalized FEMs [14, 15, 16], partition of unity-based enrichment [17, 18], and meshfree method with near-tip enrichment [19, 20]. In these methods, strong discontinuities are directly inserted into the approximation, necessitating the detection and tracking of crack surfaces, significantly increasing the complexity of the computation for multidimensional problems. Nonlocal averaging [21], high order gradient models [22, 23, 24], and phase field methods [25, 26, 27, 28] have been employed in the diffuse crack approaches. In this family of methods, nonlocal effects are typically introduced in the approximation or in the energy function, yielding diffused, regularized representation of cracks. 
This property enables traditional mesh-based or meshfree methods to approximate localizations without enrichment and without the need for localization tracking. However, for sufficient accuracy, intense mesh refinement is required in the regions of localizations. For example, Geelen et al. (2019) [28] used an element size as small as one-tenth the width of the diffuse crack. With their adaptive nature as an approximation, NNs provide a new paradigm in searching for solutions of mathematical models. Recently, NNs have been successfully applied as solvers of partial differential equations [11, 12, 29, 30, 31, 32, 33]. In the physics-informed neural network (PINN) by Raissi et al. [11, 12], the solution of a PDE is approximated by densely-connected deep neural networks with residual-based loss function minimization. Haghighat and Juanes (2021) [34] developed the Python package SciANN for scientific computing using PINN and demonstrated its ability to capture strain and stress localization in a perfectly plastic material. More recently, PINNs have been extended to multi-physics problems [35, 36]. However, one drawback of utilizing a deep neural network combined with a residual-based and collocated loss function is its computational cost, e.g., in [34], where 100 million unknown weights and biases were used. Samaniego et al. (2020) [29] demonstrated that potential-based loss functions produced superior results with significantly fewer unknowns than the residual-based loss function commonly used in PINN. Zhang et al. (2021) [30] proposed a deep neural network that reproduces standard approximations along with automatic refinement, enabled by treating nodal positions as unknown network parameters, which, however, introduces sparsity into the neural network. Lu et al. (2021) [31], based on the universal approximation theorem [37], designed a new deep neural network architecture, in which the output of one deep neural network is multiplied by the output of another deep neural network, resulting in effective approximations of nonlinear operators in partial differential equations. Despite the growing interest in PINNs, there has been limited research on developing effective and computationally efficient NN-based approximations for modeling localizations. Baek et al. (2022) [33] proposed a neural network-enhanced reproducing kernel particle method (NN-RKPM) for modeling localizations. In this work, the approximation is constructed as the superposition of the NN approximation and the reproducing kernel (RK) approximation. For computational efficiency, NNs are limited to approximating localizations, while the RK approximation on a coarse and uniform discretization is employed to approximate the smooth solutions. In this approach, the NN approximation control parameters play the role of automatically capturing the location, orientation, and solution profile at the localizations. These NN parameters are determined by the optimization of an energy-based loss function. In this work, we propose an improved version of NN-RKPM in which the NN approximation and the background RK approximation are patched together with Partition of Unity for ensured convergence. This approach is derived through an NN-based correction of standard RK shape functions. In the modified NN-RK approximation, the deep neural network automatically locates and inserts regularized discontinuities in the function space, and the NN-enriched RK coefficient function provides a varying magnitude of the discontinuity along the localization path.
Additionally, convergence properties of the proposed method are analyzed. The paper is organized as follows. In Section 2, the basic equations are provided, including the minimization problem for brittle fracture and the reproducing kernel particle method. In Section 3, a neural network-enriched Partition of Unity reproducing kernel approximation is proposed, along with convergence analysis and regularization technique. In Section 4, the implementation details including the neural network architecture and solution procedure are provided. This is followed by numerical examples in Section 5 and concluding remarks in Section 6. ## 2 Background ### Minimization Problem for Fracture For a domain \(\Omega\in\mathbb{R}^{d}\) with the space dimension \(d\) and its boundary \(\partial\Omega=\partial\Omega_{g}\cup\partial\Omega_{h}\) that consists of the Dirichlet boundary \(\partial\Omega_{g}\) and the Neumann boundary \(\partial\Omega_{h}\), let us consider the following minimization problem: for \(\mathbf{u}\in H^{1}\), \(\mathbf{u}=\mathbf{g}\) on \(\partial\Omega_{g}\), \[\min_{\mathbf{u}}\Pi(\mathbf{u})=\int_{\Omega}\!\!\psi(\mathbf{u})\ d\Omega- \int_{\Omega}\mathbf{u}\cdot\mathbf{b}\ d\Omega-\int_{\partial\Omega_{\text{ h}}}\mathbf{u}\cdot\mathbf{h}\ d\Gamma, \tag{1}\] where \(\mathbf{u}\), \(\psi(\mathbf{u})\), \(\mathbf{b}\), and \(\mathbf{h}\) are the displacement, energy density functional, body force, and traction, respectively. The energy density functional \(\psi(\mathbf{u})\) has the following form: \[\psi(\mathbf{u})=g\left(\eta\big{(}\boldsymbol{\varepsilon}(\mathbf{u})\big{)} \right)\psi_{0}^{+}(\mathbf{u})+\psi_{0}^{-}(\mathbf{u})+\bar{\psi}\left(\eta \big{(}\boldsymbol{\varepsilon}(\mathbf{u})\big{)}\right). \tag{2}\] Herein, \(\boldsymbol{\varepsilon}=\frac{1}{2}(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T})\), \(\eta\), and \(g\) are the strain tensor, the (strain dependent) damage variable, and the degradation function, respectively. Three energy density components \(\psi_{0}^{+}\), \(\psi_{0}^{-}\), and \(\bar{\psi}\) denote non-degraded tensile strain energy, compressive strain energy, and dissipation functional, respectively. The tensile and compressive strain energies are defined as \[\begin{gathered}\psi_{0}=\mu\bar{\varepsilon}_{i}\bar{ \varepsilon}_{i}+\frac{\lambda}{2}tr(\bar{\boldsymbol{\varepsilon}})^{2},\\ \psi_{0}^{+}=\mu(\bar{\varepsilon}_{i})_{+}(\bar{\varepsilon}_{i })_{+}+\frac{\lambda}{2}\langle tr(\bar{\boldsymbol{\varepsilon}})\rangle_{+ }^{2},\\ \psi_{0}^{-}=\psi_{0}-\psi_{0}^{+},\end{gathered} \tag{3}\] where the summation notation is adopted. In (3), \(\mathbf{\varepsilon}\), \(\lambda\), and \(\mu\) are principal strain, Lame's first and second parameters, respectively. \(\langle\cdot\rangle_{+}=\max(\cdot\,,0)\) and \(\langle\cdot\rangle_{-}=\min(\cdot\,,0)\) are additionally used. The stress is defined as \[\mathbf{\sigma}=g\big{(}\eta(\mathbf{\varepsilon})\big{)}\frac{\partial\psi_{0}^{+}}{ \partial\mathbf{\varepsilon}}+\frac{\partial\psi_{0}^{-}}{\partial\mathbf{\varepsilon }}. \tag{4}\] In this work, the damage variable, dissipation functional, and degradation function are defined as follows: \[\eta=\frac{\psi_{0}^{+}}{\psi_{0}^{+}+p} \tag{5}\] \[\bar{\psi}=p\eta^{2},\] (6) \[g=(1-\eta)^{2}, \tag{7}\] where \(\mathbf{p}\) is a fracture energy-dependent material property. The adopted dissipation functional and degradation function in Eqs. (6) and (7) are the same as what is used in Miehe et al. 
(2010) [25], except for the absence of the higher order term \(\mathcal{O}(\nabla\eta^{2})\) in the dissipation functional in (6). Therefore, it is straightforward to show that the damage model in Eqs. (5)-(7) is variationally consistent, i.e., for \(\mathbf{u}\in H^{1}\), \(\mathbf{u}=\mathbf{g}\) on \(\partial\Omega_{g}\), for all \(\delta\mathbf{u}\in H^{1}\), \(\delta\mathbf{u}=\mathbf{0}\) on \(\partial\Omega_{g}\), \[\delta\Pi=\int_{\Omega}\delta\boldsymbol{\varepsilon}(\mathbf{u})\colon\boldsymbol{\sigma}(\boldsymbol{\varepsilon})\;d\Omega-\int_{\Omega}\delta\mathbf{u}\cdot\mathbf{b}\;d\Omega-\int_{\partial\Omega_{h}}\delta\mathbf{u}\cdot\mathbf{h}\;d\Gamma=0, \tag{8}\] which leads to the following balance equation: \[\nabla\cdot\boldsymbol{\sigma}+\mathbf{b}=0\;\;\text{in}\;\;\Omega, \tag{9}\] with the boundary conditions \[\mathbf{u}=\mathbf{g}\;\;\text{on}\;\;\partial\Omega_{g}, \tag{10}\] \[\boldsymbol{\sigma}\cdot\mathbf{n}=\mathbf{h}\;\;\text{on}\;\;\partial\Omega_{h}, \tag{11}\] where \(\mathbf{n}\) denotes the surface normal vector. To achieve the irreversibility of the damage, a history variable \[\mathcal{H}=\max\Big(\max_{t\in[0,T]}\{\psi_{0}^{+}(\boldsymbol{\varepsilon})-\psi_{c}\},0\Big) \tag{12}\] is employed to describe the damage variable: \[\eta=\frac{\mathcal{H}}{\mathcal{H}+p}. \tag{13}\] For Eq. (12), the critical fracture energy \(\psi_{c}\) is defined as \[\psi_{c}=\frac{f_{t}^{2}}{2E} \tag{14}\] with the tensile strength of the material \(f_{t}\) and Young's modulus \(E\). The model parameter \(p\) takes the following form \[p=\frac{\mathcal{G}_{c}}{\ell}, \tag{15}\] with critical energy release rate \(\mathcal{G}_{c}\) and length scale parameter \(\ell\). To take mixed-mode fracture into account, we adopt the \(\mathcal{F}\)-criterion [38], with the mode I critical energy release rate \(\mathcal{G}_{cI}\) and the mode II critical energy release rate \(\mathcal{G}_{cII}\): \[\mathcal{F}\equiv\frac{\psi_{0}^{+}}{\mathcal{G}_{c}}\approx\frac{\psi_{I}^{+}}{\mathcal{G}_{cI}}+\frac{\psi_{II}^{+}}{\mathcal{G}_{cII}}, \tag{16}\] with \[\psi_{I}^{+}=\frac{\lambda}{2}\Big\langle\sum_{i}\bar{\varepsilon}_{i}\Big\rangle_{+}^{2}, \tag{17}\] \[\psi_{II}^{+}=\mu\langle\bar{\varepsilon}_{i}\rangle_{+}\langle\bar{\varepsilon}_{i}\rangle_{+}. \tag{18}\] Eq. (16) leads to the following critical energy release rate: \[\mathcal{G}_{c}=\frac{\psi_{0}^{+}}{\psi_{I}^{+}/\mathcal{G}_{cI}+\psi_{II}^{+}/\mathcal{G}_{cII}}. \tag{19}\] Note that Eq. (19) implies \(\mathcal{G}_{c}=\mathcal{G}_{cI}\) for pure mode I fracture when \(\psi_{0}^{+}=\psi_{I}^{+}\) and \(\mathcal{G}_{c}=\mathcal{G}_{cII}\) for pure mode II fracture when \(\psi_{0}^{+}=\psi_{II}^{+}\).

_Remark 1.1_.: With \(\mathcal{G}_{c}\) defined in (19), which is a function of strain, the functional \(\Pi\) defined in (1) is not a minimization functional for the Euler-Lagrange equation (9). Therefore, in this work, we solve the minimization problem in (1) and the \(\mathcal{G}_{c}\) calculation in (19) in a staggered manner.

_Remark 1.2_.: Different from the phase field fracture methods, the damage model described in this section is a local model in the absence of the higher order term in the dissipation functional. Therefore, there is a possibility of loss of ellipticity and discretization-dependence of the numerical solution. This issue will be addressed in Section 3.3.
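For illustration, the damage update in Eqs. (12)-(19) can be written compactly at a single material point. The following NumPy sketch is a minimal reading of those equations, assuming the principal strains are available; the staggered coupling with the displacement solve is omitted, and the small constant guarding the division is an implementation detail, not part of the model.

```python
import numpy as np

def damage_update(eps_prin, H_prev, lam, mu, f_t, E, Gc_I, Gc_II, ell):
    """One staggered damage update following Eqs. (12)-(19).
    eps_prin : principal strains at a material point, shape (d,)
    H_prev   : history variable from the previous step, Eq. (12)
    """
    tr_eps = eps_prin.sum()
    eps_pos = np.maximum(eps_prin, 0.0)
    # Mode I / mode II tensile energies, Eqs. (17)-(18)
    psi_I = 0.5 * lam * max(tr_eps, 0.0) ** 2
    psi_II = mu * (eps_pos @ eps_pos)
    psi0_pos = psi_I + psi_II                      # tensile split, Eq. (3)
    # Mixed-mode critical energy release rate, Eq. (19)
    Gc = psi0_pos / (psi_I / Gc_I + psi_II / Gc_II + 1e-30)
    p = Gc / ell                                   # Eq. (15)
    psi_c = f_t ** 2 / (2.0 * E)                   # Eq. (14)
    H = max(H_prev, psi0_pos - psi_c, 0.0)         # Eq. (12), irreversibility
    eta = H / (H + p)                              # Eq. (13)
    g = (1.0 - eta) ** 2                           # Eq. (7)
    return H, eta, g
```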
### Reproducing kernel particle method for background approximation

Here we review the standard reproducing kernel particle method (RKPM) that is used to approximate the smooth part of the solution in the proposed approach (see Section 3).

#### 2.2.1 Reproducing kernel approximation

Let \(\Omega\) be a domain discretized by \(NP\) nodes with nodal coordinates \(\{\mathbf{x}_{I}\}_{I\in\mathcal{S}}\) with a node set \(\mathcal{S}=\{1,\cdots,NP\}\). The reproducing kernel (RK) approximation, \(u^{RK}(\mathbf{x})\), of a function \(u(\mathbf{x})\) is \[u^{RK}(\mathbf{x})=\sum_{I\in\mathcal{S}}\Psi_{I}(\mathbf{x})d_{I}, \tag{20}\] with an RK shape function \(\Psi_{I}(\mathbf{x})\) and a generalized nodal coefficient \(d_{I}\). The RK shape function is a correction of a kernel function, \(\Phi_{a}(\mathbf{x}-\mathbf{x}_{I})\), defined on the compact support of node \(I\) with a support size of \(a\): \[\Psi_{I}(\mathbf{x})=C_{I}(\mathbf{x})\Phi_{a}(\mathbf{x}-\mathbf{x}_{I}), \tag{21}\] where the kernel correction function \(C_{I}(\mathbf{x})\) is defined as \[C_{I}(\mathbf{x})\equiv\sum_{|\alpha|\leq n}(\mathbf{x}-\mathbf{x}_{I})^{\alpha}b_{\alpha}(\mathbf{x}), \tag{22}\] where \((\mathbf{x}-\mathbf{x}_{I})^{\alpha}\) is a basis function, \(\alpha=(\alpha_{1},\alpha_{2},...,\alpha_{d})\) is a multi-dimensional index, and \(|\alpha|\equiv\sum_{i=1}^{d}\alpha_{i}\). \(\mathbf{x}^{\alpha}\) is defined as \[\mathbf{x}^{\alpha}\equiv x_{1}^{\alpha_{1}}\cdot x_{2}^{\alpha_{2}}\cdot...\cdot x_{d}^{\alpha_{d}}. \tag{23}\] The coefficients \(b_{\alpha}(\mathbf{x})\) are obtained by solving the following set of reproducing conditions: \[\sum_{I\in\mathcal{S}}\Psi_{I}(\mathbf{x})\mathbf{x}_{I}^{\alpha}=\mathbf{x}^{\alpha},\qquad|\alpha|\leq n. \tag{24}\] The resulting RK shape function takes the following explicit form: \[\Psi_{I}(\mathbf{x})=\mathbf{H}^{T}(\mathbf{0})\mathbf{M}^{-1}(\mathbf{x})\mathbf{H}(\mathbf{x}-\mathbf{x}_{I})\Phi_{a}(\mathbf{x}-\mathbf{x}_{I}), \tag{25}\] where the moment matrix \(\mathbf{M}(\mathbf{x})\) and the basis vector \(\mathbf{H}(\mathbf{x}-\mathbf{x}_{I})\) are defined as \[\mathbf{M}(\mathbf{x})=\sum_{I\in\mathcal{S}}\mathbf{H}(\mathbf{x}-\mathbf{x}_{I})\mathbf{H}^{T}(\mathbf{x}-\mathbf{x}_{I})\Phi_{a}(\mathbf{x}-\mathbf{x}_{I}), \tag{26}\] \[\mathbf{H}(\mathbf{x}-\mathbf{x}_{I})=[1,(x_{1}-x_{1I}),(x_{2}-x_{2I}),(x_{3}-x_{3I}),\cdots,(x_{3}-x_{3I})^{n}]^{T}. \tag{27}\] The kernel function \(\Phi_{a}(\mathbf{x}-\mathbf{x}_{I})\) determines the order of continuity, while the basis vector \(\mathbf{H}(\mathbf{x}-\mathbf{x}_{I})\) determines the polynomial completeness. Thus, it is straightforward to introduce high order continuity into the approximation space, independent of the basis order, which makes the RK approximation more appealing for approximating the smooth part of the solution than the C\({}^{0}\) interpolation-type approximations used in finite element methods. Figure 1 shows a smooth RK shape function constructed on the linear basis. For a quasi-uniform RK point distribution, the following global error estimate of the standard RK approximation \(u^{RK}\) holds for \(u\in H^{r}\) [41]: \[\|u^{RK}-u\|_{l,\Omega}\leq Cka^{\gamma}|u|_{p+1,\Omega}, \tag{28}\] where \(a\), \(C\), \(k\), \(p\), and \(\gamma=\min(p+1-l,\ r-l)\) are the support size, a generic constant, the number of overlapping points, the order of the RK basis, and the convergence rate, respectively.
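As a concreteness check of Eqs. (20)-(27), the following is a minimal 1D NumPy sketch of the RK shape function construction. The cubic B-spline kernel and all parameter values are illustrative choices, not those used in this paper; the final assertion verifies the reproducing conditions (24) for a linear basis.

```python
import numpy as np

def rk_shape_functions(x, nodes, a, n=1):
    """1D reproducing kernel shape functions, Eqs. (20)-(27).
    x: evaluation point; nodes: nodal coordinates; a: support size; n: basis order."""
    def kernel(z):  # cubic B-spline kernel with support |z| <= 1 (assumed choice)
        z = np.abs(z)
        return np.where(z <= 0.5, 2/3 - 4*z**2 + 4*z**3,
               np.where(z <= 1.0, 4/3 - 4*z + 4*z**2 - (4/3)*z**3, 0.0))
    H = lambda d: np.array([d**k for k in range(n + 1)])  # basis vector H(x - x_I)
    phi = kernel((x - nodes) / a)
    # Moment matrix M(x), Eq. (26)
    M = sum(np.outer(H(x - xI), H(x - xI)) * p for xI, p in zip(nodes, phi))
    # Psi_I(x) = H(0)^T M^{-1}(x) H(x - x_I) Phi_a(x - x_I), Eq. (25)
    H0 = np.zeros(n + 1); H0[0] = 1.0
    c = np.linalg.solve(M, H0)
    return np.array([c @ H(x - xI) * p for xI, p in zip(nodes, phi)])

# Reproducing conditions, Eq. (24), with a linear basis: partition of unity
# and exact reproduction of x.
nodes = np.linspace(0.0, 1.0, 11)
Psi = rk_shape_functions(0.37, nodes, a=0.25, n=1)
assert np.isclose(Psi.sum(), 1.0) and np.isclose(Psi @ nodes, 0.37)
```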
#### 2.2.2 Stabilized conforming nodal integration

When Gauss integration (GI) is used for RKPM, a significantly high-order rule is required to yield optimal solution convergence, due to the rational shape function given in Eq. (25). This, in turn, leads to a significant increase in computational cost. To address this issue, the stabilized conforming nodal integration (SCNI) was proposed in [39]. SCNI enables optimal solution convergence for RKPM with a linear basis by satisfying the linear integration constraint. Compared to high-order GI, SCNI is computationally much more efficient, as it eliminates the need to evaluate direct derivatives of RK shape functions at a large number of integration points. Additionally, Wei and Chen (2018) [40] show that the strain smoothing employed in SCNI helps to suppress the spurious stress oscillation that can arise in localization problems. For this reason, SCNI is utilized to perform the domain integration required in Eq. (1).

Figure 1: Illustration of RK discretization and shape function

In SCNI, the domain is partitioned into \(N_{IC}\) conforming smoothing cells, such as Voronoi cells, as illustrated in Figure 2, where \(N_{IC}\) denotes the number of smoothing cells. Note that, while \(N_{IC}\) coincides with the number of particles for standard meshfree methods, the smoothing cells can be further refined to improve accuracy. The integration of the loss function by SCNI is performed as follows: \[\int_{\Omega}\psi(\mathbf{u}^{h},\nabla\mathbf{u}^{h})\ d\Omega\approx\sum_{L}^{N_{IC}}\psi\left(\mathbf{u}^{h}(\mathbf{x}_{L}),\widehat{\nabla}\mathbf{u}^{h}(\mathbf{x}_{L})\right)V_{L}, \tag{29}\] where \(\widehat{\nabla}\mathbf{u}^{h}\) is the smoothed gradient of \(\mathbf{u}^{h}\) defined as \[\widehat{\nabla}\mathbf{u}^{h}(\mathbf{x}_{L})\equiv\frac{1}{V_{L}}\int_{\Omega_{L}}\nabla\mathbf{u}^{h}\ d\Omega=\frac{1}{V_{L}}\int_{\Gamma_{L}}\mathbf{u}^{h}\otimes\mathbf{n}\ d\Gamma, \tag{30}\] where \(V_{L}\), \(\Omega_{L}\), \(\Gamma_{L}\), and \(\mathbf{n}\) denote the volume, domain, boundary, and outward unit normal of the \(L\)-th smoothing cell, respectively.

## 3 Neural Network-Enriched Partition of Unity Reproducing Kernel Approximation

In problems involving evolving localizations, the true solution would be rough near localizations and smooth in the remaining part of the domain. As discussed in Section 2.2.1, the RK approximation is intended to capture the smooth part of the solution. With the enrichment function (to be constructed) near the evolving localizations, the total solution is constructed by superposing a background RK approximation \(u^{RK}(\mathbf{x})\) and a neural network (NN) approximation \(u^{NN}(\mathbf{x})\) as follows: for \(\mathbf{x}\in\Omega\), \[u^{h}(\mathbf{x})=u^{RK}(\mathbf{x})+u^{NN}(\mathbf{x}), \tag{31}\] where \(u^{h}(\mathbf{x})\) is an NN-enhanced RK (NN-RK) approximation.
With this construction, a uniform RK discretization is considered as the background discretization, and the localized solution will be represented by the NN approximation. The NN-RK approximation utilizes the RK approximation's flexibility in selecting the order of continuity and the order of monomial bases.

#### 3.1.1 A neural network-based correction of RK approximation

In this section, we derive the NN-RK approximation through a neural network-based correction (NN-correction) of an RK approximation. Let \(\Omega\) be a domain discretized by \(NP\) background RK nodes with nodal coordinates \(\{\mathbf{x}_{I}\}_{I\in\mathcal{S}}\) in a node set \(\mathcal{S}=\{1,\cdots,NP\}\). In addition, define a node subset \(\bar{\mathcal{S}}\) that contains the nodes with the associated RK shape functions to be corrected near localization. In this work, \(\bar{\mathcal{S}}=\{J\mid\exists\mathbf{x}\in supp(\Psi_{J}),\psi_{0}^{+}(\mathbf{x})\geq\kappa\psi_{c}\}\) with \(\kappa=0.5\) is applied. We start with the following NN-corrected RK approximation: \[u^{h}(\mathbf{x})=\sum_{I\in\mathcal{S}}\overline{\Psi}_{I}(\mathbf{x})\bar{d}_{I}, \tag{32}\] where the NN-corrected RK shape function \(\overline{\Psi}_{I}(\mathbf{x})\) is defined as follows:

Figure 3: Schematic illustration of the NN-RK approximation: quasi-uniform background RK node distribution (blue dots) for smooth solution approximation and NN enrichment of the solution space for capturing localizations (black solid curves).

\[\overline{\Psi}_{I}(\mathbf{x})=\begin{cases}\bar{C}_{I}(\mathbf{x})\Psi_{I}(\mathbf{x}),&I\in\bar{\mathcal{S}}\\ \Psi_{I}(\mathbf{x}),&I\in\mathcal{S}\backslash\bar{\mathcal{S}},\end{cases} \tag{33}\] where \(\Psi_{I}(\mathbf{x})\) and \(\bar{C}_{I}(\mathbf{x})\) denote the original RK shape function defined in Section 2.2.1 and an NN-correction function, respectively. The NN-correction function takes the following form of a neural network with \(n\) neurons possessed by the last hidden layer: \[\bar{C}_{I}(\mathbf{x})\equiv\bar{b}_{I}+\sum_{K=1}^{n}\bar{w}_{IK}\zeta_{IK}(\mathbf{x}), \tag{34}\] where \(\bar{b}_{I}\), \(\bar{w}_{IK}\), and \(\zeta_{IK}(\mathbf{x})\) denote the bias, weights, and last hidden layer's outputs. By substituting (33) and (34) into (32) and defining \(d_{I}=\bar{b}_{I}\bar{d}_{I}\) and \(w_{IK}^{C}=\bar{w}_{IK}\bar{d}_{I}\), we have a general expression of the NN-RK approximation as follows: \[u^{h}(\mathbf{x})=u^{RK}(\mathbf{x})+u^{NN}(\mathbf{x}), \tag{35}\] \[u^{RK}=\sum_{I\in\mathcal{S}}\Psi_{I}(\mathbf{x})d_{I}, \tag{36}\] \[u^{NN}=\sum_{I\in\bar{\mathcal{S}}}\sum_{K=1}^{n}\Psi_{I}(\mathbf{x})\zeta_{IK}(\mathbf{x})w_{IK}^{C}. \tag{37}\]

_Remark 3.1_.: The background RK approximation \(u^{RK}(\mathbf{x})\) in (36) is a standard RK approximation based on a polynomial RK basis. Meanwhile, the NN approximation \(u^{NN}(\mathbf{x})\) in (37) contains nonstandard _adaptive_ basis functions, which enables it to capture localized material responses with a coarse background RK discretization.

_Remark 3.2_.: As the RK shape functions possess the property of partition of unity, the NN-RK approximation \[\begin{split}u^{h}(\mathbf{x})=u^{RK}(\mathbf{x})+u^{NN}(\mathbf{x})=\sum_{I\in\mathcal{S}}\Psi_{I}(\mathbf{x})\Bigg(d_{I}+\sum_{K=1}^{n}\zeta_{IK}(\mathbf{x})w_{IK}^{C}\Bigg),\\ w_{IK}^{C}=0,\qquad\forall I\in\mathcal{S}\backslash\bar{\mathcal{S}}\end{split} \tag{38}\] can be viewed as patching the RK and NN approximations under the Partition of Unity framework.
_Remark 3.3_.: In (37), \(\zeta_{IK}(\mathbf{x})\) is the activated output of \(K\)-th neuron in the last hidden layer of a neural network associated with node \(I\). By having \(\zeta_{IK}(\mathbf{x})\equiv\zeta_{K}(\mathbf{x})\) for all \(I\in\bar{\mathcal{S}}\), \(\zeta_{K}(\mathbf{x})\) is detached from a specific background node and becomes a flexible _foreground_ quantity. Then, the NN approximation in (37) can be rewritten as follows: \[u^{NN}=\sum_{K=1}^{n}\zeta_{K}(\mathbf{x})\upsilon_{K}(\mathbf{x}), \tag{39}\] \[\upsilon_{K}(\mathbf{x})\equiv\sum_{I\in\mathcal{S}}\Psi_{I}( \mathbf{x})w_{IK}^{C}. \tag{40}\] _Remark 3.4_.: The neural network to generate \(\zeta_{IK}(\mathbf{x})\) can be either a traditional or a nonstandard neural network. In section 3.1.2, we present a modified deep neural network designed to effectively capture localizations. #### 3.1.2 Block-level neural network approximation In this work, we introduce a modified deep neural network to increase the sparsity of the network architecture, improve the interpretability, and capture localizations effectively. In this regard, the following block-level NN approximation is introduced. \[u^{NN}=\sum_{J=1}^{n_{B}}u_{J}^{B}(\mathbf{x}), \tag{41}\] where \(n_{B}\) is the number of NN blocks, and the block-level NN approximation \(u_{J}^{B}(\mathbf{x})\) is defined as follows: \[u_{J}^{B}(\mathbf{x})=\sum_{K=1}^{n_{NK}}\hat{\phi}_{JK}(\mathbf{ x})\hat{\upsilon}_{JK}(\mathbf{x}), \tag{42}\] \[\hat{\upsilon}_{JK}(\mathbf{x})=\sum_{I\in\mathcal{S}}\Psi_{I}( \mathbf{x})\hat{\upsilon}_{IJK}^{C}, \tag{43}\] where \(\hat{\phi}_{JK}(\mathbf{x})\) and \(n_{NK}\) are \(K\)-th NN kernel function in \(J\)-th NN block and the number of NN kernel functions per NN block, respectively. Note that (41)-(43) are shown to be equivalent to (39) and (40) by flattening the indices \(JK\) in (42) and (43) into \(K\). Figure 4 illustrates the modified network architecture of \(J\)-th NN block, for which the construction is made so that the neural network approximation can capture complicated localization topologies effectively. Also, the construction of the neural network at the block level significantly increases the sparsity of the weight matrices, compared to the densely connected standard deep neural networks utilized in many previous studies in literature [29, 34]. As shown in Figure 4, three sets of unknown parameters are involved in the NN approximation: the location-control weight set \(\mathbf{W}_{I}^{L}\), the shape-control weight set \(\mathbf{W}_{J}^{S}\) as well as the NN-correction weight set \(\mathbf{W}_{J}^{C}=\left\{\left\{\hat{\upsilon}_{IJK}^{C}\right\}_{I\in \mathcal{S}}\right\}_{K=1}^{n_{NK}}\) in (43). These parameters are to be automatically determined by solving the minimization problem (1). Details on the sub-blocks described in Figure 4 and their associated unknown parameters are explained in the following subsections. #### 3.1.3 Parametrization sub-block As shown in Figure 4, the parametric coordinate \(\mathbf{y}_{j}\) in Layer PC is the output of the parametrization sub-block, which is an intermediate variable of a densely connected deep neural network \(\mathcal{N}\colon\mathbf{x}\rightarrow\mathbf{y}_{j}\) that takes \(\mathbf{x}\in\mathbb{R}^{d}\) and \(\mathbf{y}_{j}\equiv\mathbf{y}\big{(}\mathbf{x};\mathbf{W}_{j}^{L}\big{)} \in\mathbb{R}^{d}\) as its input and output, respectively. 
The parametrization projects complicated localization patterns onto a parametric space, so that complicated localizations can be captured with NN kernel functions in a simple mathematical form. With \(n_{HL}\) hidden layers, the function \(\mathbf{y}\big(\mathbf{x};\mathbf{W}_{J}^{L}\big)\) is defined as \[\mathbf{y}\big(\mathbf{x};\mathbf{W}_{J}^{L}\big)=\mathbf{f}\big(\cdot;\big\{\mathbf{w}_{J(n_{HL}+1)}^{L},b_{J(n_{HL}+1)}^{L}\big\}\big)\circ\mathbf{h}\big(\cdot;\big\{\mathbf{w}_{Jn_{HL}}^{L},b_{Jn_{HL}}^{L}\big\}\big)\circ\cdots\circ\mathbf{h}\big(\mathbf{x};\big\{\mathbf{w}_{J1}^{L},b_{J1}^{L}\big\}\big) \tag{44}\] with \[\mathbf{h}\big(\hat{\mathbf{x}};\big\{\mathbf{w}_{Jl}^{L},b_{Jl}^{L}\big\}\big)=\alpha\left(\mathbf{f}\big(\hat{\mathbf{x}};\big\{\mathbf{w}_{Jl}^{L},b_{Jl}^{L}\big\}\big)\right), \tag{45}\] \[\mathbf{f}\big(\hat{\mathbf{x}};\big\{\mathbf{w}_{Jl}^{L},b_{Jl}^{L}\big\}\big)=\mathbf{w}_{Jl}^{L}\hat{\mathbf{x}}+b_{Jl}^{L}. \tag{46}\] In (44), \(\mathbf{w}_{Jl}^{L}\) and \(b_{Jl}^{L}\) denote the weight and bias of layer \(l\), respectively, and the location-control parameter set \(\mathbf{W}_{J}^{L}\) in Figure 4 is defined as \(\mathbf{W}_{J}^{L}=\big\{\mathbf{w}_{Jl}^{L},b_{Jl}^{L}\big\}_{l=1}^{n_{HL}+1}\). In (45), \(\alpha(\cdot)\) denotes an activation function. In this work, the hyperbolic tangent activation function is used.

Figure 4: Modified neural network architecture of the \(J\)-th NN block. The unknown parameters introduced in each part are denoted in red color.

#### 3.1.4 NN kernel function

As shown in Figure 4, the NN kernel functions \(\hat{\phi}_{JK}(\mathbf{x})\) in Layer NNK are the outcome of the normalization of unnormalized NN kernel functions \(\phi_{JK}(\mathbf{x})\). The normalization is defined as \[\hat{\phi}_{JK}(\mathbf{x})=\frac{\phi_{JK}(\mathbf{x})}{\sum_{I=1}^{n_{B}}\sum_{L=1}^{n_{NK}}\phi_{IL}(\mathbf{x})}, \tag{47}\] and the NN kernel function \(\phi_{JK}(\mathbf{x})\) is defined as \[\phi_{JK}(\mathbf{x})=\prod_{\alpha=1}^{d}\prod_{i=1}^{2}\bar{\phi}_{i}\big(y_{J\alpha};\{\bar{y}_{\alpha i}^{JK},c_{\alpha i}^{JK},\beta_{\alpha i}^{JK}\}\big), \tag{48}\] where \(\bar{\phi}_{i}\) and \(\{\bar{y}_{\alpha i}^{JK},c_{\alpha i}^{JK},\beta_{\alpha i}^{JK}\}\) denote a regularized step function and shape-control parameters, respectively. The shape-control weight set \(\mathbf{W}_{J}^{S}\) in Figure 4 is defined as \(\mathbf{W}_{J}^{S}=\Big\{\big\{\{\bar{y}_{\alpha i}^{JK},c_{\alpha i}^{JK},\beta_{\alpha i}^{JK}\}_{\alpha=1}^{d}\big\}_{i=1}^{2}\Big\}_{K=1}^{n_{NK}}\). In this work, the regularized step function is constructed based on the parametric softplus activation function \(S\) defined as follows: \[\bar{\phi}_{i}(y;\{\bar{y}_{i},c_{i},\beta_{i}\})=S\Big(z_{i}(y)+\frac{1}{2};\beta_{i}\Big)-S\Big(z_{i}(y)-\frac{1}{2};\beta_{i}\Big), \tag{49}\] \[z_{i}(y)=(-1)^{i}(y-\bar{y}_{i})/c_{i}\,,\ \ i=1,2, \tag{50}\] \[S(z;\beta)=\frac{1}{\beta}\log\big(1+e^{\beta z}\big). \tag{51}\] In (49)-(51), \(\beta_{i}\) controls the sharpness in the transition of the derivative, as shown in Figure 5 (a-b), and \(c_{i}\) controls the sharpness of the solution transition, as shown in Figure 5 (c). In addition, \(\bar{y}_{i}\) influences the support of \(\bar{\phi}_{i}\).
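To make the shape of Eqs. (49)-(51) concrete, here is a small NumPy sketch of the regularized step function and a 1D kernel built as the product of two opposing steps, as in (48); all parameter values below are illustrative only.

```python
import numpy as np

def softplus(z, beta):
    # Parametric softplus, Eq. (51), written stably for large beta*z.
    return np.logaddexp(0.0, beta * z) / beta

def phi_bar(y, y_bar, c, beta, i):
    # Regularized step function, Eqs. (49)-(50); i = 1 steps down, i = 2 steps up.
    z = (-1.0) ** i * (y - y_bar) / c
    return softplus(z + 0.5, beta) - softplus(z - 0.5, beta)

def nn_kernel_1d(y, y_bar1, y_bar2, c, beta):
    # Product of the two opposing steps: with y_bar2 < y_bar1 this produces a
    # plateau between the two transitions, sharpened by beta (cf. Figure 6).
    return phi_bar(y, y_bar1, c, beta, i=1) * phi_bar(y, y_bar2, c, beta, i=2)

y = np.linspace(-1.0, 1.0, 401)
k = nn_kernel_1d(y, y_bar1=0.3, y_bar2=-0.3, c=0.05, beta=200.0)
```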
Note that \(\bar{\phi}_{i}\) is the output of Layer RSF in Figure 4, and \((1/c_{i})\) and \((-\bar{y}_{i}/c_{i})\) are respectively the weight and the bias of Layer RSF. Figure 6 shows a schematic illustration of a two-dimensional NN kernel which possesses a sharp transition in direction \(y\). Interested readers may refer to [33] for more details on the NN kernel functions.

Figure 5: The influence of the control parameters on solution transition: (a) the influence of \(\beta\) on \(\bar{\phi}\), (b) the influence of \(\beta\) on \(\partial\bar{\phi}/\partial z\), and (c) the influence of \(c\) on \(\bar{\phi}\) with \(\beta=200\)

Figure 6: Schematic illustration of an NN kernel function: (left) two-dimensional NN kernel function \(\phi\) and (right) its cross-sectional value across \(y\).

### Convergence Properties

An error bound of the proposed NN-RK approximation is estimated. Let \(\widehat{\Omega}\) be the transition zone near the localization domain. Then, we have \[\left\|u^{h}-u\right\|_{0,\Omega}\leq\left\|u^{h}-u\right\|_{0,\Omega\setminus\widehat{\Omega}}+\left\|u^{h}-u\right\|_{0,\widehat{\Omega}}. \tag{52}\] As shown in Figure 7, we consider an arbitrary \(u\) with a sharp transition occurring in \(\widehat{\Omega}=[-\ell/2,+\ell/2]\) and its approximation \(u^{h}\) with a transition occurring in \(\widehat{\Omega}_{2}\). For both \(u\) and \(u^{h}\), it is assumed that there are weak discontinuities on the boundaries of the transition zones. For brevity, let us introduce the following function \(w\): \[w(\chi;\xi)\equiv\frac{[\![\xi]\!]}{\ell}\chi+\langle\!\langle\xi\rangle\!\rangle, \tag{53}\] where \([\![\xi]\!]\equiv\xi^{+}-\xi^{-}\) and \(\langle\!\langle\xi\rangle\!\rangle\equiv(\xi^{+}+\xi^{-})/2\) are a difference operator and an average operator, respectively, with \(\xi^{+}\equiv\xi(x=+\ell/2)\) and \(\xi^{-}\equiv\xi(x=-\ell/2)\). Using (53), the true solution \(u\) in the transition domain \(\widehat{\Omega}\) can be written in a parametric coordinate \(y^{u}\) as \[u(x)=w(y^{u}(x);u^{\Gamma}), \tag{54}\] where \(u^{\Gamma}\) is the value of \(u\) on the boundary of \(\widehat{\Omega}\), and, from (53) and (54), \(y^{u}(x)\) is obtained as \[y^{u}(x)=\frac{\ell}{[\![u^{\Gamma}]\!]}\big(u(x)-\langle\!\langle u^{\Gamma}\rangle\!\rangle\big). \tag{55}\] Similarly, the approximated solution \(u^{h}(x)\) in the transition domain \(\widehat{\Omega}\) is written in an approximated parametric coordinate \(Y\) as \[u^{h}(x)=w\left(Y(x);u^{h^{\Gamma}}\right), \tag{56}\] with \[Y(x)=\begin{cases}-\ell/2,&x\in\widehat{\Omega}_{1}\\ y(x),&x\in\widehat{\Omega}_{2},\\ \ell/2,&x\in\widehat{\Omega}_{3}\end{cases} \tag{57}\] where \(y(x)\) is the neural network-based parametrization defined in (44), and \(u^{h^{\Gamma}}\) is the value of \(u^{h}\) on the boundary of \(\widehat{\Omega}\). In (57), the subdomains are defined as \(\widehat{\Omega}_{1}=\{x\mid y(-\ell/2)\leq y(x)\leq-\ell/2\}\), \(\widehat{\Omega}_{2}=\{x\mid-\ell/2<y(x)\leq\ell/2\}\), and \(\widehat{\Omega}_{3}=\{x\mid\ell/2<y(x)\leq y(\ell/2)\}\). Note that, with \(\beta\to\infty\), the NN kernel function defined in (48)-(50) introduces weak discontinuities on \(y(x)=\pm\ell/2\).
With (54) and (56), the last term in (52) becomes \[\begin{split}\left\|u^{h}-u\right\|_{0,\widehat{\Omega}}&=\left\|w\left(Y(x);u^{h^{\Gamma}}\right)-w(y^{u}(x);u^{\Gamma})\right\|_{0,\widehat{\Omega}}\\ &=\left\|w\left(Y(x);u^{h^{\Gamma}}\right)-w(Y(x);u^{\Gamma})+w\big(Y(x);u^{\Gamma}\big)-w\big(y^{u}(x);u^{\Gamma}\big)\right\|_{0,\widehat{\Omega}}\\ &=\left\|w\left(Y(x);u^{h^{\Gamma}}-u^{\Gamma}\right)+w\big(Y(x);u^{\Gamma}\big)-w\big(y^{u}(x);u^{\Gamma}\big)\right\|_{0,\widehat{\Omega}}\\ &\leq\left\|w\left(Y(x);u^{h^{\Gamma}}-u^{\Gamma}\right)\right\|_{0,\widehat{\Omega}}+\left\|w\big(Y(x);u^{\Gamma}\big)-w\big(y^{u}(x);u^{\Gamma}\big)\right\|_{0,\widehat{\Omega}}.\end{split} \tag{58}\] The first term on the right-hand side of (58) is bounded as follows: \[\begin{split}\left\|w\left(Y(x);u^{h^{\Gamma}}-u^{\Gamma}\right)\right\|_{0,\widehat{\Omega}}&=\left\|\left([\![u^{h^{\Gamma}}-u^{\Gamma}]\!]/\ell\right)Y(x)+\left\langle\!\left\langle u^{h^{\Gamma}}-u^{\Gamma}\right\rangle\!\right\rangle\right\|_{0,\widehat{\Omega}}\\ &\leq\left\|\left|u^{h^{\Gamma-}}-u^{\Gamma-}\right|+\left|u^{h^{\Gamma+}}-u^{\Gamma+}\right|\right\|_{0,\widehat{\Omega}}\\ &\leq\left\|u^{h^{\Gamma-}}-u^{\Gamma-}\right\|_{0,\widehat{\Omega}}+\left\|u^{h^{\Gamma+}}-u^{\Gamma+}\right\|_{0,\widehat{\Omega}}\\ &=\ell^{1/2}\left(\left|u^{h^{\Gamma-}}-u^{\Gamma-}\right|+\left|u^{h^{\Gamma+}}-u^{\Gamma+}\right|\right).\end{split} \tag{59}\] The second term on the right-hand side of (58) is bounded as follows: \[\begin{split}\left\|w\big(Y(x);u^{\Gamma}\big)-w\big(y^{u}(x);u^{\Gamma}\big)\right\|_{0,\widehat{\Omega}}&=\left\|\frac{[\![u^{\Gamma}]\!]}{\ell}\big(Y(x)-y^{u}(x)\big)\right\|_{0,\widehat{\Omega}}\\ &=\frac{\left|[\![u^{\Gamma}]\!]\right|}{\ell}\left\|Y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}\\ &\leq\frac{\left|[\![u^{\Gamma}]\!]\right|}{\ell}\left\|y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}.\end{split} \tag{60}\] Therefore, for \(\widehat{\Omega}\), the following error bound is obtained: \[\left\|u^{h}-u\right\|_{0,\widehat{\Omega}}\leq\ell^{1/2}\left(\left|u^{h^{\Gamma-}}-u^{\Gamma-}\right|+\left|u^{h^{\Gamma+}}-u^{\Gamma+}\right|\right)+\frac{\left|[\![u^{\Gamma}]\!]\right|}{\ell}\left\|y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}. \tag{61}\] For multi-dimensions, we have \[\left\|u^{h}-u\right\|_{0,\widehat{\Omega}}\leq\ell^{1/2}\left\|u^{h}-u\right\|_{0,\hat{\Gamma}}+\frac{\left|[\![u^{\Gamma}]\!]\right|}{\ell}\left\|y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}, \tag{62}\] where \(\hat{\Gamma}\equiv\partial\widehat{\Omega}\backslash\partial\Omega\) denotes the interface of weak discontinuity. Using the Sobolev trace inequality and (28), the first term on the right-hand side of (62) is bounded as follows: with generic constants \(\hat{C}\) and \(\check{C}\), \[\begin{split}\left\|u^{h}-u\right\|_{0,\hat{\Gamma}}&\leq\left\|u^{h}-u\right\|_{0,\partial(\Omega\backslash\widehat{\Omega})}\leq\check{C}\|u^{h}-u\|_{0,\Omega\backslash\widehat{\Omega}}^{1/2}\|u^{h}-u\|_{1,\Omega\backslash\widehat{\Omega}}^{1/2}\\ &\leq\hat{C}ka^{\hat{\gamma}}|u|_{p+1,\Omega\backslash\widehat{\Omega}},\end{split} \tag{63}\] where \(\hat{\gamma}=\min(p+0.5,\tilde{r})\), with \(\tilde{r}\) and \(p\) denoting the regularity of \(u\) in \(\Omega\backslash\widehat{\Omega}\) and the order of the basis of the background RK discretization, respectively.
With (28), (62), and (63), the global error (52) has the following error bound: \[\left\|u^{h}-u\right\|_{0,\Omega}\leq\left(Ca^{\gamma}+\hat{C}a^{\hat{\gamma}}\right)k\left|u\right|_{p+1,\Omega\backslash\widehat{\Omega}}+\frac{\left|[\![u^{\Gamma}]\!]\right|}{\ell}\left\|y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}, \tag{64}\] where \(\gamma=\min(p+1,\tilde{r})\). For smooth \(u\) in \(\Omega\backslash\widehat{\Omega}\), \(\hat{\gamma}=\gamma-0.5\) holds, which means that \(\hat{\gamma}\) dominates the first term on the right-hand side of (64), leading to \[\left\|u^{h}-u\right\|_{0,\Omega}\leq\left(C+\hat{C}\right)a^{\hat{\gamma}}k|u|_{p+1,\Omega\setminus\widehat{\Omega}}+\frac{\left|[\![u^{\Gamma}]\!]\right|}{\ell}\left\|y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}. \tag{65}\] In the last term of (65), \(\left\|y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}\) denotes the parametrization error. (65) implies that, when the parametrization error is relatively large, the solution convergence will be governed by the convergence of the parametrization. Conversely, for \(\left\|y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}\to 0\), the convergence will be governed by the background RK discretization with a rate of \(\hat{\gamma}\), e.g., 1.5 when a linear RK basis is used. The error bound of \(\left\|y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}\) follows the universal approximation theorem [1, 37] when a neural network is used for parametrization. For example, for a neural network with a single hidden layer of \(n_{NR}\) neurons, the error bound is estimated as follows [1]: with a generic constant \(C_{y}<\infty\), \[\left\|y(x)-y^{u}(x)\right\|_{0,\widehat{\Omega}}\leq C_{y}n_{NR}^{-1/2}, \tag{66}\] which leads to the following error estimate of the NN-RK approximation: \[\left\|u^{h}-u\right\|_{0,\Omega}\leq\left(C+\hat{C}\right)a^{\hat{\gamma}}k|u|_{p+1,\Omega\setminus\widehat{\Omega}}+C_{y}\,\frac{\left|[\![u^{\Gamma}]\!]\right|}{\ell}n_{NR}^{-1/2}. \tag{67}\]

### Regularization

To avoid the potential loss of ellipticity of the problem and the resulting discretization sensitivity in the numerical solution of the local problem defined in Section 2, a regularization treatment is needed. A straightforward remedy is to impose a proper constraint such that the physical bandwidth of the damage does not become narrower than a certain limit. To analyze the localization width possessed by the NN-RK approximation, we start with a Taylor expansion of the parametric coordinate as follows: \[y(\mathbf{x})\approx\bar{y}+(\mathbf{x}-\bar{\mathbf{x}})\cdot\boldsymbol{\nabla}^{\mathbf{x}}y(\bar{\mathbf{x}}), \tag{68}\] where \(\bar{y}=y(\bar{\mathbf{x}})\) is the shape-control parameter defined in Section 3.1.4, for which the superscripts and subscripts are omitted for brevity. With (68), \(z\) defined in (50) is written as \[z(y(\mathbf{x});\{\bar{y},c\})=\frac{y(\mathbf{x})-\bar{y}}{c}\approx\frac{(\mathbf{x}-\bar{\mathbf{x}})\cdot\boldsymbol{\nabla}^{\mathbf{x}}y(\bar{\mathbf{x}})}{c}\equiv\frac{\bar{\xi}(\mathbf{x};\bar{\mathbf{x}})}{c}, \tag{69}\] with \(\bar{\xi}(\mathbf{x};\bar{\mathbf{x}})\equiv(\mathbf{x}-\bar{\mathbf{x}})\cdot\boldsymbol{\nabla}^{\mathbf{x}}y(\bar{\mathbf{x}})\). When \(\|\boldsymbol{\nabla}^{\mathbf{x}}y(\bar{\mathbf{x}})\|=1\), \(\bar{\xi}(\mathbf{x};\bar{\mathbf{x}})\) in (69) is a projection of the physical coordinate onto the direction normal to the localization.
Therefore, by satisfying the conditions \[\begin{split}\|\mathbf{\nabla}^{\mathbf{x}}\mathbf{y}(\bar{\mathbf{x}})\|\leq 1,\\ c\geq\ell,\end{split} \tag{70}\] the transition width of \(\bar{\phi}\) in (49) has a lower bound of \(\ell\), and thus the localization width in the NN-RK approximation has the same lower bound. In this work, a constraint \(\|\mathbf{\nabla}^{\mathbf{x}}\mathbf{y}\|\leq 1\) is imposed in the loss function (1), and the lower bound of the sharpness control parameter \(c\) in (50) is set to an NN length scale parameter \(\ell\). The modified loss function with regularization reads: \[\begin{split}\min_{\mathbf{u},\mathbf{y}}\overline{\Pi}(\mathbf{u},\mathbf{y}) =\Pi(\mathbf{u})+\Pi^{\text{Reg}}(\mathbf{y}),\\ \Pi^{\text{Reg}}(\mathbf{y})=\frac{\kappa\mu}{2}\sum_{\alpha,J}\int_{ \Omega}\langle\|\mathbf{\nabla}^{\mathbf{x}}y_{J\alpha}(\mathbf{x})\|-1\rangle_{+ }^{2}\;d\Omega,\end{split} \tag{71}\] where \(\Pi\) is the potential function defined in (1), and \(\kappa\) is the normalized penalty parameter. In this work, \(\kappa=10^{4}\) is used. Note that this approach is different from the \(\widehat{H}\)-regularization introduced by Baek et al. (2022) [33], in which the parametric coordinates are directly scaled by \(\widehat{H}\) as follows: \[z=\frac{(\mathbf{y}-\overline{y})\widehat{H}}{c},\qquad\text{where }\widehat{H}\equiv 1/\max(\|\mathbf{\nabla}^{\mathbf{x}}\mathbf{y}\|,1). \tag{72}\] An advantage of the regularization designed in this work over the \(\widehat{H}\)-regularization is that it avoids computing the second-order gradient of \(\mathbf{y}\) when evaluating the strain energy in the loss function.
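At the level of a quadrature rule, the penalty \(\Pi^{\text{Reg}}\) in (71) reduces to a weighted sum of Macaulay-bracketed gradient-norm excesses. The following is a minimal Python sketch for one parametric coordinate, assuming its spatial gradient is already available at the integration points; the array names, quadrature layout, and argument list are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def regularization_penalty(grad_y, weights, kappa, mu):
    """Minimal sketch of the penalty in Eq. (71) for one parametric
    coordinate: (kappa*mu/2) * sum_q w_q * <||grad y(x_q)|| - 1>_+^2.
    grad_y: (n_q, dim) spatial gradient of y at the integration points;
    weights: (n_q,) quadrature weights. Names are illustrative."""
    norms = np.linalg.norm(grad_y, axis=1)   # ||grad^x y|| at each point
    excess = np.maximum(norms - 1.0, 0.0)    # Macaulay bracket <.>_+
    return 0.5 * kappa * mu * np.sum(weights * excess**2)
```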
## 4 Numerical implementation

The minimization problem is rewritten as follows: \[\min_{\mathbf{d},\mathbf{W}}\left[\Pi\Big(\mathbf{u}^{h}(\mathbf{d},\mathbf{W})\Big)+\Pi^{\text{Reg}}\big(\mathbf{y}(\mathbf{x};\mathbf{W}^{L})\big)\right], \tag{73}\] where \(\mathbf{u}^{h}(\mathbf{d},\mathbf{W})=\mathbf{u}^{RK}(\mathbf{d})+\mathbf{u}^{NN}( \mathbf{W})\) is the NN-RK approximation with the RK coefficient set, \(\mathbf{d}\), and the neural network weight set, \(\mathbf{W}=\{\mathbf{W}^{L},\mathbf{W}^{S},\mathbf{W}^{C}\}\) with \(\mathbf{W}^{L}=\big\{\mathbf{W}^{L}_{J}\big\}_{J=1}^{n_{B}}\), \(\mathbf{W}^{S}=\big\{\mathbf{W}^{S}_{J}\big\}_{J=1}^{n_{B}}\), and \(\mathbf{W}^{C}=\big\{\mathbf{W}^{C}_{J}\big\}_{J=1}^{n_{B}}\). Here, \(\mathbf{\psi}\) and \(F\) denote the energy density and the external work defined in (1), respectively. Figure 8 shows the flowchart of the solution procedure. In the flowchart, \(n\) and \(n_{Max}\) denote the loading step and the maximum loading step, respectively. At loading step \(n+1\), the solution procedure mainly consists of two parts: an RK precomputation stage and an NN-RKPM optimization stage.

### RK precomputation stage

To obtain the initial guesses \(\mathbf{\bar{d}}^{(n+1)}\) and \(\mathbf{\bar{W}}^{c(n+1)}\) to be used in the NN-RKPM optimization stage, the minimization problem (73) is first solved only for \(\mathbf{d}^{(n+1)}\) and \(\mathbf{W}^{c(n+1)}\):
\[\begin{split}\bar{\mathbf{d}}^{(n+1)},\bar{\mathbf{W}}^{c(n+1)}=\operatorname*{ argmin}_{\mathbf{d},\mathbf{W}^{c}}\left[\Pi\left(\mathbf{u}^{h}\left(\mathbf{d}, \left\{\mathbf{W}^{L(n)},\mathbf{W}^{S(n)},\mathbf{W}^{c}\right\}\right)\right)+\Pi^{\text{Reg}}\left(\mathbf{y}\left(\mathbf{x};\mathbf{W}^{L(n)} \right)\right)\right]\\ \text{subjected to }\mathbf{u}(\mathbf{x})=\mathbf{g}^{(n+1)}\text{ \ on \ }\partial\Omega_{g}.\end{split} \tag{74}\]

Figure 8: Flowchart of the solution procedure

In this stage, the weight sets \(\{\mathbf{W}^{L},\mathbf{W}^{S}\}\) and the damage \(\eta\) from the previous loading step are used. Also, the damage is not updated. This is equivalent to the standard Galerkin-based RKPM problem and can be solved by a standard matrix solver.

### NN-RKPM optimization stage

In the second stage, the minimization problem (73) is solved for the entire set of unknown parameters \(\mathbf{d}\) and \(\mathbf{W}\):
\[\begin{split}\mathbf{d}^{(n+1)},\mathbf{W}^{(n+1)}= \operatorname*{argmin}_{\mathbf{d},\mathbf{W}}\left[\Pi\Big(\mathbf{u}^{h}( \mathbf{d},\mathbf{W})\Big)+\Pi^{\text{Reg}}\big(\mathbf{y}(\mathbf{x}; \mathbf{W}^{L})\big)\right]\\ \text{subjected to }\mathbf{u}(\mathbf{x})=\mathbf{g}^{(n+1)} \text{ \ on \ }\partial\Omega_{g}.\end{split} \tag{75}\]
In this stage, the damage is updated as well. The minimization problem can be solved iteratively by a suitable optimizer. In this work, _Adam_ [42], a first-order optimizer with an adaptive learning rate, is used for the first several epochs. Then, the optimizer is switched to the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS) [43], a second-order optimizer, for the remaining optimization. For the domain integration involved in (73), SCNI introduced in Section 2.2.2 is used with refined smoothing cells near the localization. As discussed in Section 2.2.2, the advantage of using SCNI for the proposed method is twofold: 1) it eliminates the computationally expensive direct differentiation of \(\mathbf{u}^{NN}\) by automatic differentiation when evaluating strain and stress, and 2) it suppresses stress oscillations. For a computationally efficient implementation of the strain smoothing operation in SCNI, precomputed sparse smoothing matrices \(\mathbf{P}_{\alpha}\) with \(\alpha=1\cdots d\) can be considered to perform the following global smoothing: \[\mathbf{U}_{\alpha}^{\nabla}=\mathbf{P}_{\alpha}\mathbf{U}^{surf}, \tag{76}\] by which the strain smoothing in all the smoothing cells as discussed in Section 2 is conducted simultaneously. In (76), \(\mathbf{U}_{\alpha}^{\nabla}=\left[\widetilde{u}_{,\alpha}^{h}(\mathbf{x}_{1}),\cdots,\widetilde{u}_{,\alpha}^{h}(\mathbf{x}_{L}),\cdots,\widetilde{u}_{,\alpha}^{h}\big(\mathbf{x}_{N_{IC}}\big)\right]^{T}\) is a column vector containing the smoothed gradients of \(u^{h}\) with respect to \(x_{\alpha}\) for all the smoothing cells in the domain, i.e., \(L=1\cdots N_{IC}\). \(\mathbf{U}^{surf}=\left[u^{h}\big(\mathbf{x}_{1}^{surf}\big),\cdots,u^{h}\big(\mathbf{x}_{e}^{surf}\big),\cdots,u^{h}\big(\mathbf{x}_{N_{surf}}^{surf}\big)\right]^{T}\) is a column vector containing \(u^{h}\) evaluated at a smoothing cell surface evaluation point \(\mathbf{x}_{e}^{surf}\) for \(e=1\cdots N_{surf}\), where \(N_{surf}\) denotes the total number of smoothing cell surface evaluation points in the domain. The \((L,e)\) component of the smoothing operator \(\mathbf{P}_{\alpha}\) is \[P_{\alpha Le}=\begin{cases}\dfrac{1}{V_{L}}A_{e}n_{\alpha}^{e},&\text{if $\Gamma_{e}\subset\partial\Omega_{L}$,}\\ 0,&\text{otherwise}\end{cases} \tag{77}\] where \(\Gamma_{e}\), \(\Omega_{L}\), \(n_{\alpha}^{e}\), and \(A_{e}\) denote the \(e\)-th smoothing cell surface segment, the \(L\)-th smoothing cell domain, the \(\alpha\)-th component of the unit normal to \(\Gamma_{e}\), and the area of the \(e\)-th smoothing cell surface segment, respectively. The same procedure can be used to compute \(\widetilde{\nabla}y_{i}\) for Eq. (71).
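The assembly of \(\mathbf{P}_{\alpha}\) in (77) is a sparse scatter over smoothing-cell surface segments. A minimal sketch, assuming each segment is stored with its owning cell index, area, and unit normal (a simplified data layout, not the paper's code; a face shared by two cells would appear once per cell, with opposite normals):

```python
import numpy as np
from scipy.sparse import coo_matrix

def assemble_smoothing_operator(segments, cell_volumes, alpha, n_cells):
    """Sketch of Eq. (77). `segments` is a list of tuples (L, A_e, n_e):
    owning cell L, segment area A_e, and outward unit normal n_e
    (length-d array). The data layout is an illustrative assumption."""
    rows, cols, vals = [], [], []
    for e, (L, A_e, n_e) in enumerate(segments):
        rows.append(L)
        cols.append(e)
        vals.append(A_e * n_e[alpha] / cell_volumes[L])  # (1/V_L) A_e n_alpha
    return coo_matrix((vals, (rows, cols)),
                      shape=(n_cells, len(segments))).tocsr()

# Smoothed gradients for all cells at once, as in Eq. (76):
#   U_grad_alpha = P_alpha @ U_surf
```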
## 5 Numerical Examples

Several numerical examples are presented to demonstrate the proposed method's accuracy, regularization ability, and capability to capture complicated localization patterns. Unless otherwise specified, for the RK approximation, the linear basis with cubic B-spline kernel function of normalized support size 2.0 is used, and, for the NN approximation, a single 4-kernel NN block is used along with a densely connected neural network with the hyperbolic tangent activation function for the parametrization sub-block. For the domain integration, SCNI is used with refined smoothing cells in the zone along the expected damage path.

### Elasticity with pre-existing damaged zone

Consider a domain \([-L/2,\ L/2]\times[-H/2,\ H/2]\) with a degraded zone of width \(w\). We consider two different cases of pre-existing damaged zone geometry, as shown in Figure 9(a) and (b). For both cases, \(L=2\) mm and \(H=0.5\) mm are used. For Case I, the degraded zone is vertically aligned at the center of the domain. For Case II, the anti-symmetric degraded zone is centered at the origin with \(\mathbf{x}_{c1}=(-0.1,-0.5)\), \(R_{1}=0.35\), \(\mathbf{x}_{c2}=(-0.1,0)\), and \(R_{2}=0.1\) in units of mm. For both cases, Dirichlet boundary conditions are applied to the left and right surfaces with \(g=1\times 10^{-2}\) mm, and zero traction boundary conditions are applied to the top and bottom surfaces. For Case I, \(w=H/100\), \(E=210\) GPa, and \(\nu=0\) are used, and for Case II, \(w=H/1000\), \(E=210\) GPa, and \(\nu=0.3\) are used. The Young's modulus within the degraded zones is \(kE\) with \(k=10^{-2}\) for Case I and \(k=10^{-3}\) for Case II.

Figure 9: Geometry and boundary conditions for problem of elasticity with pre-existing damaged zone: (a) Case I and (b) Case II

For Case I, the exact solution is as follows: \[u_{1}(\mathbf{x})=\begin{cases}b(x_{1}+L)-g,&x_{1}\leq-w/2\\ (b/k)x_{1},&-w/2<x_{1}\leq w/2\\ b(x_{1}-L)+g,&x_{1}>w/2\end{cases}\qquad u_{2}(\mathbf{x})=0, \tag{78}\] where \(b=2g/\big{(}(1/k-1)w+2L\big{)}\). For the numerical solution, the domain is uniformly discretized by \(21\times 6\) RK nodes (see Figure 10 (a)), and a single 10-neuron hidden layer is used for the parametrization sub-block. Figure 11 shows the displacement predicted by the proposed method. The numerical solution captures the sharp transition in the horizontal displacement very well, along with the zero vertical displacement due to the zero Poisson's ratio.
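For reference, the piecewise-linear exact solution (78) is straightforward to evaluate. The sketch below follows Eq. (78) literally with the Case I parameters quoted above; it is a hedged illustration, so any ambiguity in the paper's geometry conventions carries over unchanged:

```python
import numpy as np

def u1_exact(x1, L=2.0, w=0.5 / 100, k=1e-2, g=1e-2):
    """Sketch of the piecewise-linear exact solution u_1 in Eq. (78) for
    Case I; default values follow the text (L, H in mm, w = H/100)."""
    b = 2 * g / ((1 / k - 1) * w + 2 * L)
    return np.where(x1 <= -w / 2, b * (x1 + L) - g,
           np.where(x1 <= w / 2, (b / k) * x1, b * (x1 - L) + g))

# A relative L2 error of a sampled numerical solution could then be
# estimated as np.linalg.norm(u_num - u1_exact(x)) / np.linalg.norm(u1_exact(x)).
```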
As shown in the figures in the 2nd row of Figure 11, the NN approximation appears near the localization, capturing the sharp transition of \(u_{1}\), while the RK approximation captures the solution in the other area, with a smooth transition between the two approximations. Figure 12 shows the horizontal displacement and normal strain along \(y=0\), in which the numerical solution is shown to be highly accurate compared to the exact solution. The computed \(L_{2}\) norm and \(H^{1}\) semi-norm of the solution error are \(2.921\times 10^{-4}\) and \(2.437\times 10^{-6}\), respectively.

Figure 10: Background RK discretizations used for the elasticity with pre-damaged material: (a) \(21\times 6\) RK nodes with \(h=H/5\), (b) \(41\times 11\) RK nodes with \(h=H/10\), and (c) \(81\times 21\) RK nodes with \(h=H/20\)

Figure 11: Predicted displacement (Case I)

For Case II, the background RK discretizations employed in this section are plotted in Figure 10 (a-c), and a 1,070,298-node, body-fitted Q8-FEM solution with a minimum nodal spacing of \(H/2000\) near the localization (see Figure 13 for discretization) is used as a reference solution. Figure 14 shows the numerical solution for Case II, using \(41\times 11\) uniformly distributed background RK nodes (Figure 10 (b)) and a single 40-neuron hidden layer. Although the background RK discretizations shown in Figure 10 are relatively coarse compared to the width of the degraded zone, the displacements predicted by the proposed method match the reference solution very well. The convergence curve for varying background RK nodal spacing (\(h\)) and the convergence curve for the varying number of neurons (\(n_{NR}\)) are plotted in Figure 15 (a) and (b), respectively. For the convergence study shown in Figure 15 (a), a fixed value of \(n_{NR}=160\) is used, and for the study shown in Figure 15 (b), a fixed value of \(h=H/40\) is used. Both results show convergence behaviors consistent with the error analysis result presented in Section 3.2.
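The average convergence rates quoted in the legend of Figure 15 can be recovered from (nodal spacing, error) pairs as log-log slopes. A small sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

def average_convergence_rate(h, err):
    """Slope of log(err) vs. log(h), averaged over successive
    refinements -- one plausible way to compute the rates in Figure 15."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    rates = np.log(err[1:] / err[:-1]) / np.log(h[1:] / h[:-1])
    return rates.mean()

# Illustrative values only:
print(average_convergence_rate([1/5, 1/10, 1/20], [4e-3, 1e-3, 2.6e-4]))
```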
Figure 14: Displacement field (Case II): reference solution and NN-RK solution (41\(\times\)11)

### Pre-notched specimen subjected to simple shear

A benchmark problem of a pre-notched specimen under simple shear is considered. As shown in Figure 16, a specimen with domain \(\Omega=[-L,L]\times[-L,L]\) and a pre-existing crack of length \(L\) is subjected to Dirichlet boundary conditions on the top and bottom surfaces. The specimen dimension \(L=0.5\) mm is used in this problem. The horizontal boundary value \(g\) applied to the top surface is increased up to \(15\times 10^{-3}\) mm with an increment of \(1\times 10^{-4}\) mm. The material properties \(E=210\) GPa, \(\nu=0.3\), and \(\mathcal{G}_{c}=2.7\) N/mm are used. As shown in Figure 17, three levels of RK discretizations are used to study the regularization capability of the proposed method. For verification, a reference solution based on the reproducing kernel strain regularization method [44] is employed, using 160,801 uniformly distributed RK nodes with a nodal spacing of \(h=L/200\). Figure 18 (a-c) shows the damage propagation predicted by the proposed method. The damage is initiated with an orientation of approximately \(65^{\circ}\) and gradually changes direction toward the lower right corner during the propagation. The predicted damage paths plotted in Figure 18 (d) are not sensitive to the background RK discretization and agree very well with the reference solution. In addition, as shown in Figure 19, the load-displacement curves also demonstrate the good regularization capability of the proposed method and present reasonable agreement with the reference solution.

Figure 15: \(L_{2}\) convergence rates: (a) for varying background RK nodal spacing with a fixed width of hidden layer (\(n_{NR}=160\)) and (b) for varying \(n_{NR}\) with a fixed RK discretization (\(h=H/40\)). The values enclosed by the parentheses in the legend denote the average convergence rates.

Figure 16: A pre-notched specimen for simple shear problem

Figure 17: Background RK discretizations employed for simple shear problem. (a) M1: \(h=L/4\), (b) M2: \(h=L/8\), (c) M3: \(h=L/16\)

Figure 18: Damage evolution in simple shear problem (M2) for (a) \(g=9\times 10^{-3}\), (b) \(g=10\times 10^{-3}\), (c) \(g=11.5\times 10^{-3}\), and (d) comparison of the predicted damage paths and the reference solution

### Quasi-static crack branching problem

In this section, the proposed method's ability to capture branching is demonstrated through a numerical example inspired by the problem proposed by Muixi et al. [45, 46]. Consider a square domain \(\Omega=[-L,L]\times[-L,L]\) with a pre-existing notch of length \(L\), as shown in Figure 20. The specimen is subjected to vertical displacement boundary conditions \(g(x)=g_{D}(1-x^{2})/8\) on the top and bottom surfaces, while the right surface is fixed in both directions. Herein, \(L=1\) mm is considered, and \(g_{D}\) is applied up to \(0.08\) mm with \(\Delta g_{D}=4\times 10^{-3}\) mm. The material properties \(E=20\) GPa, \(\nu=0.3\), and \(\mathcal{G}_{c}=8.9\times 10^{-5}\) kN/mm are used. In Figure 21, the progressive damage field is plotted, in which the fracture initially propagates horizontally and branches near the fixed boundary as the accumulated strain energy associated with the vertical strain decreases due to the displacement constraint, which prevents further propagation of the fracture toward the fixed boundary. The branching is predicted to occur abruptly, after which the propagation rate slows down.
In the late stage of the simulation, the two branches switch direction to the left. The overall trend of the damage propagation agrees with the reference PF-XFEM solution [46] superimposed in Figure 21 (d).

Figure 19: Load-displacement curve in simple shear problem

Figure 20: A pre-notched specimen for static branching problem: (a) geometry and boundary conditions and (b) background RK discretization

Figure 21: Predicted damage propagation and branching: \(g_{D}\) of (a) 0.02 mm, (b) 0.036 mm, (c) 0.04 mm, and (d) 0.08 mm with a reference solution [46] superimposed in orange color

### Mixed-mode fracture of a doubly notched rock-like specimen subjected to uniaxial compression

A uniaxial compression of a rock-like specimen with double pre-existing cracks [47] is simulated. As shown in Figure 22, a rectangular specimen with \(H=152.4\) mm contains two 1-mm-thick pre-existing cracks with \(L=c=w=12.7\) mm and \(\alpha=45^{\circ}\). The Dirichlet boundary condition on the top surface is prescribed up to \(g=-0.65\) mm with the increment \(\Delta g=-1\times 10^{-2}\) mm. The material parameters are a Young's modulus of \(E=5.96\) GPa, a Poisson's ratio of \(\nu=0.24\), a mode-I fracture energy of \(\mathcal{G}_{I}=5\) N/m, and a mode-II fracture energy of \(\mathcal{G}_{II}=20\mathcal{G}_{I}\). The domain is uniformly discretized by \(16\times 31\) RK particles. For the NN approximation, the parametrization sub-block consists of a neural network with two 40-neuron hidden layers along with the hyperbolic tangent activation function, which involves 1,842 unknown weights and biases. An NN length scale of 1 mm is employed. Figure 23 shows the predicted damage propagation in the rock specimen. At the initial stage, four wing cracks are initiated from the four corners of the pre-existing notches and propagate with curved paths. Then, secondary shear cracks start to develop at approximately \(g=-0.65\) mm, consistent with the experimental observation [47].

Figure 22: A rock specimen with double preexisting cracks: (a) geometry and boundary conditions, (b) details of preexisting notch, and (c) background RK discretization

Figure 23: Progressive damage in rock-like specimen induced by uniaxial compression: \(g=\) (a) -0.4 mm, (b) -0.5 mm, (c) -0.6 mm, and (d) -0.65 mm

Figure 24: Comparison of (a) numerical results and (b) experimental observation [47]

## 6 Conclusion

An improved neural network-enhanced reproducing kernel particle method has been proposed for modeling brittle fracture. Derived through an NN-based correction of standard RK shape functions, the proposed method enriches a background reproducing kernel (RK) approximation with a coarse and uniform discretization by a neural network (NN) approximation equipped with a Partition of Unity property. The NN approximation is constructed by a deep neural network designed to capture localization, and the NN-based enrichment functions are then patched together with the RK approximation functions using RK as a Partition of Unity patching function. In the NN approximation, the deep neural network locates and inserts regularized discontinuities in the approximation function automatically, and the resulting NN-enriched RK coefficient function provides a varying magnitude of the discontinuities along the localization path.
To automatically capture the location, orientation, and solution transition across and along the localization, the optimum values of the control parameters contained in the deep neural network, as well as the RK coefficients, are obtained via minimization of the energy-based loss function. A regularization, introducing a constraint on the spatial gradient of the parametric coordinates into the loss function, is employed to ensure a discretization-independent solution. An error analysis of the proposed NN-RK approximation is performed, and its verification against the numerical results shows good agreement in the convergence rates. The numerical examples demonstrate the effectiveness of the proposed method in modeling damage evolution and branching with a fixed background discretization, without conventional adaptive refinement.

**Acknowledgments**

The support from the National Science Foundation under award #1826221 to University of California, San Diego, is greatly acknowledged.
2306.00900
Suppression of chaos in a partially driven recurrent neural network
The dynamics of recurrent neural networks (RNNs), and particularly their response to inputs, play a critical role in information processing. In many applications of RNNs, only a specific subset of the neurons generally receive inputs. However, it remains to be theoretically clarified how the restriction of the input to a specific subset of neurons affects the network dynamics. Considering RNNs with such restricted input, we investigate how the proportion, $p$, of the neurons receiving inputs (the "input neurons") and the strength of the input signals affect the dynamics by analytically deriving the conditional maximum Lyapunov exponent. Our results show that for sufficiently large $p$, the maximum Lyapunov exponent decreases monotonically as a function of the input strength, indicating the suppression of chaos, but if $p$ is smaller than a critical threshold, $p_c$, even significantly amplified inputs cannot suppress spontaneous chaotic dynamics. Furthermore, although the value of $p_c$ is seemingly dependent on several model parameters, such as the sparseness and strength of recurrent connections, it is proved to be intrinsically determined solely by the strength of chaos in spontaneous activity of the RNN. That is to say, despite changes in these model parameters, it is possible to represent the value of $p_c$ as a common invariant function by appropriately scaling these parameters to yield the same strength of spontaneous chaos. Our study suggests that if $p$ is above $p_c$, we can bring the neural network to the edge of chaos, thereby maximizing its information processing capacity, by amplifying inputs.
Shotaro Takasu, Toshio Aoyagi
2023-06-01T16:56:29Z
http://arxiv.org/abs/2306.00900v4
# Suppression of chaos in a partially driven recurrent neural network ###### Abstract The dynamics of recurrent neural networks (RNNs), and particularly their response to inputs, play a critical role in information processing. In many applications of RNNs, only a specific subset of the neurons generally receive inputs. However, it remains to be theoretically clarified how the restriction of the input to a specific subset of neurons affects the network dynamics. Considering recurrent neural networks with such restricted input, we investigate how the proportion, \(p\), of the neurons receiving inputs (the "input neurons") and a quantity, \(\xi\), representing the strength of the input signals affect the dynamics by analytically deriving the conditional maximum Lyapunov exponent. Our results show that for sufficiently large \(p\), the maximum Lyapunov exponent decreases monotonically as a function of \(\xi\), indicating the suppression of chaos, but if \(p\) is smaller than a critical threshold, \(p_{c}\), even significantly amplified inputs cannot suppress spontaneous chaotic dynamics. Furthermore, although the value of \(p_{c}\) is seemingly dependent on several model parameters, such as the sparseness and strength of recurrent connections, it is proved to be intrinsically determined solely by the strength of chaos in spontaneous activity of the RNN. That is to say, despite changes in these model parameters, it is possible to represent the value of \(p_{c}\) as a common invariant function by appropriately scaling these parameters to yield the same strength of spontaneous chaos. Our study suggests that if \(p\) is above \(p_{c}\), we can bring the neural network to the edge of chaos, thereby maximizing its information processing capacity, by adjusting \(\xi\).

Large-scale recurrent neural networks (RNNs) exhibit various dynamical patterns, including limit cycles and chaos. They have been used to model brain functions such as working memory [1], motor control [2], and context-based learning [3]. The rich dynamics exhibited by an RNN can also be used for information processing. Reservoir computing (RC) [4; 5] is a machine learning framework that utilizes large RNNs, called "reservoirs", to reproduce the time series data of interest. RC enables quicker learning in RNNs than conventional backpropagation-based methods, because with this method, only the output weights are updated, with the other recurrent weights left unchanged. RC is not restricted to RNNs, and indeed a wide range of dynamical systems can serve as the reservoir under appropriate conditions. Physical reservoir computing, in which a real physical system is used as the reservoir, has been an area of active research in recent years [6]. In general, RNNs must possess rich dynamics in order to assimilate a diverse set of signals to be learned. For this reason, it is advantageous for RNNs to exhibit chaotic spontaneous activity. On the other hand, for an RNN to successfully reproduce a target time series, it must converge to the same state each time it receives a particular set of input signals, regardless of its initial internal state. This property is known as the "echo state property (ESP)" [4] in the context of RC. Hence, it is hypothesized that an RNN that displays varied spontaneous activity while maintaining consistency in response to inputs will exhibit superior computational performance.
Such an RNN is commonly referred to as being at the "edge of chaos", and it is known empirically that reservoirs in this regime have the highest computational capacity [7]. In fact, there is experimental evidence suggesting that mammalian neuronal networks operate in this critical regime [8; 9]. Lyapunov spectrum analysis [10] allows us to study the dynamics of RNNs in a quantitative manner. The maximum Lyapunov exponent (MLE) characterizes the exponential rate of separation of infinitesimally close trajectories. A dynamical system with a positive MLE exhibits chaotic behavior. In the case that a dynamical system is driven by given input signals, the MLE is called the maximum conditional Lyapunov exponent (MCLE). Two identical RNNs with slightly different initial states will converge to the same state under the same inputs if and only if the MCLE is negative. Therefore, a negative MCLE is necessary for an RNN to exhibit consistency with respect to inputs. The MLE and MCLE of an RNN with random weights can be analytically computed in the limit of a large network size, using the dynamic mean-field approach [11]. It is known that the MCLE of a random RNN decreases with the strength of the input signal and eventually becomes negative in both the cases that the signal is white noise [12; 13; 14; 15] and deterministic [16]. In other words, sufficiently amplified driving input signals can suppress chaotic dynamics. These findings suggest that it is possible to shift the state of an RNN exhibiting chaotic spontaneous dynamics toward the edge of chaos by appropriately amplifying the input, and then use its shifted state as an efficient reservoir. It is known that FORCE learning can efficiently train a chaotic RNN with feedback connections, with the feedback input suppressing chaos [17]. There is also experimental evidence indicating similar suppression of chaos in the brain. Specifically, extensive neural recordings taken with monkeys and cats show that visual stimuli can attenuate spontaneous fluctuations in cortical neurons [18]. Previous studies of the Lyapunov exponents of large random RNNs assume a model in which every unit in the RNN connects to the input layer and receives driving signals. Hereafter, we refer to RNNs of this type as "full-input RNNs". However, such a model is biologically implausible because biological synapses are sparse [19]. Additionally, in physical reservoir computing, it is often unfeasible to connect the input to all reservoir units. Therefore, it is important to investigate the dynamics of RNNs in the case that only a subset of the neurons receive input signals. We refer to RNNs of this type as "partial-input RNNs." It remains to be elucidated whether sufficiently amplified inputs always suppress the chaotic activity in a partial-input RNN, as in the case of a full-input RNN. In this work, we address this question by analytically calculating the MCLE of a partial-input random RNN. We investigate the discrete-time dynamics of a random sparse RNN with \(N\gg 1\) neurons of which only \(pN\) receive inputs (Fig.1). We define the parameter \(p\in[0,1]\), called the "input partiality," which determines the fraction of neurons coupling to the input unit. We consider the case in which the input signal \(s(t)\) at time \(t\) is a scalar for simplicity, although our theory can be straightforwardly extended to multi-dimensional inputs.
The state of a neuron that receives inputs is represented by a dynamical variable \(x_{i}(t)\in\mathbb{R}\) (\(i=1,\cdots,pN\)) whose evolution obeys the equation \[x_{i}(t+1)=\sum_{j=1}^{pN}J_{ij}\phi(x_{j}(t))+\sum_{k=pN+1}^{N} J_{ik}\phi(y_{k}(t))+u_{i}s(t), \tag{1}\] while the state of a neuron that does not receive inputs is represented by a dynamical variable \(y_{i}(t)\in\mathbb{R}\) (\(i=pN+1,\cdots,N\)) whose evolution obeys the equation \[y_{i}(t+1)=\sum_{j=1}^{pN}J_{ij}\phi(x_{j}(t))+\sum_{k=pN+1}^{N}J_{ik}\phi(y_{k}(t)). \tag{2}\] Here, \(u_{i}\) (\(i=1,\cdots,pN\)) is the coupling weight connecting the input signal to the \(i\)th neuron, \(J_{ij}\) is a recurrent weight matrix determining the coupling from the \(j\)th to the \(i\)th neuron, and \(\phi\) is the activation function. The value of \(u_{i}\) is drawn randomly from a Gaussian distribution with zero mean and unit variance. We define \(J_{ij}\) to be a random sparse matrix whose elements are chosen as follows. The value of each element is assigned independently. For a given element, first its value is randomly chosen to be zero or non-zero with respective probabilities \(1-\alpha\) and \(\alpha\) (where \(\alpha\in(0,1]\)), and then, in the latter case, it is assigned a value drawn from a Gaussian distribution with zero mean and variance \(g^{2}/N\). The gain parameter \(g\) represents the recurrent coupling strength. The activation function is chosen as \(\phi(x)=\mathrm{erf}(\frac{\sqrt{\pi}}{2}x)\) for analytic tractability. It has been found that in the absence of inputs, this RNN exhibits a transition from fixed-point dynamics to chaotic dynamics at \(\sqrt{\alpha}g=1\) in the limit of a large network [11]. In previous studies, zero-mean white noise has typically been used for the input signal \(s(t)\) [12; 13; 14; 15], but we do not limit the choice of \(s(t)\) in this way, and instead regard it as an arbitrary time series, except when stated otherwise. In the investigation of the RNN defined by the above equations, the model parameters to be varied are the input partiality, \(p\), the recurrent connection sparsity, \(\alpha\), and the recurrent coupling strength, \(g\). We can obtain the statistical properties of \(x_{i}(t)\) and \(y_{i}(t)\) using a mean-field approach [13; 14] in the limit \(N\to\infty\). As seen from Eqs.(1) and (2), \(x_{i}(t+1)\) and \(y_{i}(t+1)\) are obtained as sums of large numbers of identically distributed independent variables, \(\{J_{ij}\phi(x_{j}(t))\}_{j=1}^{pN}\) and \(\{J_{ik}\phi(y_{k}(t))\}_{k=pN+1}^{N}\). Thus, according to the central limit theorem, we can consider \(x_{i}(t)\) and \(y_{i}(t)\) to follow Gaussian distributions. It thus suffices to determine their averages, \(\langle x_{i}(t)\rangle\) and \(\langle y_{i}(t)\rangle\), and variances, \(\langle x_{i}^{2}(t)\rangle\) and \(\langle y_{i}^{2}(t)\rangle\), where \(\langle\cdots\rangle\) denotes the average over realizations of the quenched weights \(J_{ij}\) and \(u_{i}\). Introducing the notation \(K(t):=\langle y_{i}(t)^{2}\rangle\), the Gaussian distributions exhibited by \(x_{i}(t)\) and \(y_{i}(t)\) can be expressed as follows (see Supplemental Material for derivation [20]): \[x_{i}(t+1) \sim\mathcal{N}(0,K(t+1)+s(t)^{2}), \tag{3a}\] \[y_{i}(t+1) \sim\mathcal{N}(0,K(t+1)). \tag{3b}\]
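A direct simulation of Eqs. (1) and (2) makes the setup concrete and provides a baseline against which the mean-field statistics in Eq. (3) can be checked numerically. The following is a minimal sketch; the parameter values and the white-noise input are illustrative choices:

```python
import numpy as np
from scipy.special import erf

def simulate(N=1000, p=0.5, alpha=1.0, g=3.0, T=200, sigma=1.0, seed=0):
    """Minimal sketch of Eqs. (1)-(2) with phi(x) = erf(sqrt(pi)/2 * x)
    and Gaussian white-noise input; parameters are illustrative."""
    rng = np.random.default_rng(seed)
    nin = int(p * N)                                 # number of input neurons
    mask = rng.random((N, N)) < alpha                # sparsity pattern
    J = mask * rng.normal(0.0, g / np.sqrt(N), (N, N))
    u = rng.normal(0.0, 1.0, nin)
    h = rng.normal(0.0, 1.0, N)                      # state: x (first nin), then y
    traj = np.empty((T, N))
    for t in range(T):
        s = sigma * rng.normal()                     # input signal s(t)
        h = J @ erf(0.5 * np.sqrt(np.pi) * h)        # recurrent drive
        h[:nin] += u * s                             # only input neurons are driven
        traj[t] = h
    return traj
```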
The value of \(K(t)\) obeys the recurrence relation, \[K(t+1)=-\alpha g^{2}+\frac{4}{\pi}\alpha g^{2}\bigg((1-p)\arctan\sqrt{1+\pi K(t)}+p\arctan\sqrt{1+\pi(K(t)+s(t-1)^{2})}\bigg). \tag{4}\] Sequentially substituting the input time series \(\{s(t)\}_{t}\) into Eq.(4), we can obtain \(\{K(t)\}_{t}\).

Figure 1: A schematic depiction of the partial-input RNN studied in this work. The shaded region represents the neurons that receive input signals through input connectivity.

The time series \(\{K(t)\}_{t}\) allows us to derive the MCLE, \(\lambda\), of the RNN [12]. The MCLE is defined as the asymptotic growth rate of the distance between two replicated RNNs, \[\lambda:=\lim_{T\rightarrow\infty,\mathbf{\delta}(0)\to 0}\frac{1}{T}\log \frac{\|\mathbf{\delta}(T)\|}{\|\mathbf{\delta}(0)\|}, \tag{5}\] where \(\|\mathbf{\delta}(t)\|\) denotes the distance at time \(t\) between two replicas receiving the same input signals. We can calculate \(\lambda\) with random matrix theory [21] (see Supplemental Material for derivation [20]), obtaining \[\lambda=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=0}^{T}\frac{1}{2} \log\alpha g^{2}\bigg(\frac{1-p}{\sqrt{1+\pi K(t)}}+\frac{p}{\sqrt{1+\pi(K(t)+s(t-1)^{2})}}\bigg). \tag{6}\] Because \(\{K(t)\}_{t}\) has been determined by Eq.(4), we can obtain \(\lambda\) by substituting \(\{K(t)\}_{t}\) and \(\{s(t)\}_{t}\) into Eq.(6). Figure 2(a) displays the MCLE, \(\lambda\), as a function of the input intensity \(\sigma\) defined below, for \(p=0.4\) and \(p=0.6\), with \(g=3.0\) and \(\alpha=1.0\). Here we assume the input to be Gaussian white noise with zero mean and standard deviation \(\sigma\). The analytic results obtained from Eq.(6) are consistent with the results of the numerical simulations. For both values of \(p\), \(\lambda\) decreases monotonically as a function of \(\sigma\). However, while the MCLE for \(p=0.6\) falls below \(0\) around \(\sigma=20\), the MCLE for \(p=0.4\) remains positive throughout the range of \(\sigma\) shown in the figure. In this work, our main interest is to determine whether the MCLE falls below \(0\) with sufficiently amplified inputs. However, this question cannot be answered by observing the numerical simulations shown in Figure 2(a), even if simulations are performed for quite large \(\sigma\), which motivates us to derive the analytic value of the MCLE conditioned by infinitely large inputs, \(\lambda_{\infty}\). To answer the above question, it is sufficient to determine the sign of \(\lambda_{\infty}\). We introduce a scaling parameter of the input signals, \(\xi\in\mathbb{R}\), and then represent the amplified input signals by \(\{\xi s(t)\}_{t}\). The MCLE conditioned by infinitely large inputs is denoted by \(\lambda_{\infty}:=\lim_{|\xi|\rightarrow\infty}\lambda\). The quantity \(\lambda_{\infty}\) can be derived analytically as follows. As the value of \(\xi\) becomes sufficiently large, \(K(t)\) approaches a limiting constant value \(K_{\infty}\) given by \[K_{\infty}=-\alpha g^{2}+\frac{4}{\pi}\alpha g^{2}\bigg(\frac {\pi}{2}p+(1-p)\arctan\sqrt{1+\pi K_{\infty}}\bigg). \tag{7}\] This expression for \(K_{\infty}\) is obtained by replacing \(s(t)\) with \(\xi s(t)\) in Eq.(4) and considering the limit \(|\xi|\rightarrow\infty\). Taking this limit and substituting \(K_{\infty}\) for \(K(t)\) in Eq.(6), we obtain \[\lambda_{\infty}=\frac{1}{2}\log\alpha g^{2}\bigg(\frac{1-p}{ \sqrt{1+\pi K_{\infty}}}\bigg). \tag{8}\]
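Given an input series, Eqs. (4) and (6) can be evaluated in a few lines of code. The sketch below is one possible implementation, with the indexing convention that `s[t-1]` plays the role of \(s(t-1)\); the initial value \(K_{0}\) is arbitrary since the iteration forgets it quickly:

```python
import numpy as np

def mcle(s, alpha=1.0, g=3.0, p=0.5, K0=1.0):
    """Sketch of Eqs. (4) and (6): iterate the mean-field variance K(t)
    driven by the input series s and accumulate the MCLE."""
    T = len(s)
    K, lam = K0, 0.0
    for t in range(1, T):
        K = -alpha * g**2 + (4 / np.pi) * alpha * g**2 * (
            (1 - p) * np.arctan(np.sqrt(1 + np.pi * K))
            + p * np.arctan(np.sqrt(1 + np.pi * (K + s[t - 1]**2))))
        lam += 0.5 * np.log(alpha * g**2 * (
            (1 - p) / np.sqrt(1 + np.pi * K)
            + p / np.sqrt(1 + np.pi * (K + s[t - 1]**2))))
    return lam / (T - 1)
```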
We have confirmed that the value of \(\lambda_{\infty}\) obtained from Eqs.(7) and (8) is consistent with the results of numerical simulations (Fig.2(b)). Figure 2(b) depicts the relationship between \(p\) and \(\lambda_{\infty}\) for various values of the recurrent weight intensity \(g\). We define the value of \(p\) at \(\lambda_{\infty}=0\) as the "critical input partiality", \(p_{c}\). Because the condition \(\lambda_{\infty}>0\) always holds for \(p<p_{c}\), we conclude that even sufficiently amplified input signals cannot suppress the chaotic activity of the RNN if \(p<p_{c}\). Figure 3(a) depicts the dependence of \(p_{c}\) on \(g\) with fixed \(\alpha\). We obtain the curve by solving Eq.(8) for \(K_{\infty}\) with \(\lambda_{\infty}=0\) and substituting this \(K_{\infty}\) into Eq.(7), yielding \[p_{c}=1-\frac{1}{\alpha g^{2}}\biggl[1-\pi\alpha g^{2}+4\alpha g^{2}\biggl(\frac{\pi}{2}p_{c}+(1-p_{c})\arctan\left((1-p_{c})\alpha g^{2}\right)\biggr)\biggr]^{1/2}. \tag{9}\]

Figure 2: (a) The maximum conditional Lyapunov exponent (MCLE), \(\lambda\), calculated for various values of the input partiality, \(p\). The theoretical form given in Eq.(6) is plotted with a solid curve for \(p=0.4\) and a dashed curve for \(p=0.6\), where the time average in Eq.(6) is computed up to \(T=10^{5}\). Error bars represent \(\pm\)std of direct numerical simulations based on the definition of \(\lambda\) in Eq.(5). For each plot, the coupling strength is set to \(g=3.0\), and the sparsity is set to \(\alpha=1.0\). The input signal \(s(t)\) is Gaussian white noise with mean \(0\) and variance \(\sigma^{2}\). For the numerical simulations, the value of the MCLE was obtained by directly calculating Eq.(5) for a sufficiently long time, i.e., \(T=10^{4}\). (b) Analytic results (solid, dashed, dot-dashed curves) and numerical results (error bars indicating \(\pm\)std) for \(\lambda_{\infty}\) with sparsity \(\alpha=1.0\). In numerical simulations, the value of \(\lambda_{\infty}\) was obtained by directly calculating Eq.(5) for a sufficiently large input magnitude (\(\sigma=10^{3}\)). In the cases of both (a) and (b), the values of the MCLE (\(\lambda\) and \(\lambda_{\infty}\)) were calculated for \(10\) different network realizations with a network size of \(N=1000\).

Below the curve, \(\lambda_{\infty}\) is positive, and thus, in this region, chaos is not suppressed no matter how strong the input, because \(\lambda\) is a decreasing function of the input intensity, as shown in Figure 2(a). As seen in Figure 3(a), \(p_{c}\) is an increasing function of \(g\). This implies that as the value of the coupling strength increases, larger values of \(p\) are needed to suppress chaos. We next investigate the effect of the sparsity \(\alpha\) on \(p_{c}\). Plotting the \(g\)-\(p_{c}\) curves with several values of \(\alpha\), we find that a sparser RNN results in a smaller value of \(p_{c}\) (Fig.3(a)). This is intuitively understandable, because the dynamics of a sparser RNN are less chaotic, and thus a smaller value of the input partiality is sufficient to control the chaos. To take account of this relationship, we introduce the MLE of the RNN with no input, denoted by \(\lambda_{0}\).
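Since Eq. (9) defines \(p_{c}\) only implicitly, it can be solved numerically by bracketing root finding. A sketch (assuming the radicand remains positive on \([0,1]\), which holds for the chaotic regimes considered here):

```python
import numpy as np
from scipy.optimize import brentq

def critical_partiality(alpha, g):
    """Sketch: solve the implicit Eq. (9) for p_c by root finding."""
    ag2 = alpha * g**2
    def residual(p):
        radicand = (1 - np.pi * ag2
                    + 4 * ag2 * (np.pi / 2 * p
                                 + (1 - p) * np.arctan((1 - p) * ag2)))
        return p - 1 + np.sqrt(radicand) / ag2
    return brentq(residual, 0.0, 1.0)

# For alpha = 1.0, g = 1.5 this gives about 0.074, the value quoted below.
print(critical_partiality(1.0, 1.5))
```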
Clearly, \(\lambda_{0}\) quantifies the strength of chaos in the spontaneous activity of the RNN, and it can be determined analytically by substituting \(s(t)\equiv 0\) into Eq.(4) and Eq.(6), yielding \[K=\alpha g^{2}\left(-1+\frac{4}{\pi}\arctan\sqrt{1+\pi K}\right), \tag{10}\] \[\lambda_{0}=\frac{1}{2}\log\frac{\alpha g^{2}}{\sqrt{1+\pi K}}. \tag{11}\] Interestingly, we find that when \(p_{c}\) is plotted with respect to \(\lambda_{0}\), the resulting curves for all values of \(\alpha\) coincide, as seen in Figure 3(b). The reason for this coincidence is easily understood by considering Eqs.(9)-(11). From Eqs.(10) and (11), we see that \(\lambda_{0}\) is a function of \(\alpha g^{2}\). Writing the corresponding inverse function as \(\alpha g^{2}=f(\lambda_{0})\), and substituting this into Eq.(9), we obtain \(p_{c}\) expressed as a function of \(\lambda_{0}\) alone. This finding implies that \(p_{c}\) depends primarily on the strength of spontaneous chaos, independently of how sparse the recurrent connections are. Finally, we study the information processing capability of a partial-input RNN employed as a reservoir for RC. Memory capacity [22] is a commonly used benchmark for RC. It is a measure of the ability of a reservoir to perform short-term memory tasks through the reconstruction of its past input signals. We define the memory capacity as follows. From the \(N\) reservoir units, \(K\) (\(1\leq K\leq N\)) lead-out units are randomly chosen, and represented by a vector \(\mathbf{\tilde{x}}(t)\in\mathbb{R}^{K}\). The reservoir's output is defined as \(\hat{z}(t):=\mathbf{w}^{\top}\mathbf{\tilde{x}}(t)\), where the vector \(\mathbf{w}\in\mathbb{R}^{K}\) represents the output weights. In a \(\tau\)-delay memory task, the reservoir at time \(t\) is required to output the previous input signal \(s(t-\tau)\), and the output weights are trained to minimize the mean squared error between \(\hat{z}(t)\) and the desired output, \(s(t-\tau)\). This is accomplished with a least-squares method, and the trained output weights are determined as \(\mathbf{\hat{w}}=(\mathbf{XX}^{\top})^{-1}\mathbf{X}\mathbf{s}\), where \(\mathbf{X}:=(\mathbf{\tilde{x}}(1)\cdots\mathbf{\tilde{x}}(T))\) and \(\mathbf{s}:=(s(1)\cdots s(T))^{\top}\) (\(T\) being the length of the simulation). After training, we evaluate the task performance \(M_{\tau}\) defined as \[M_{\tau}:=1-\frac{\langle(\hat{z}(t)-s(t-\tau))^{2}\rangle}{\langle s(t)^{2} \rangle}, \tag{12}\] where the brackets represent the time average. Because the numerator of the second term in Eq.(12) is the mean squared error, \(M_{\tau}\) approaches \(1\) as the reservoir learns to accurately reconstruct its past input \(s(t-\tau)\). The memory capacity \(MC\) is defined as the sum of the \(M_{\tau}\), \(MC:=\sum_{\tau=1}^{\infty}M_{\tau}\). It has been mathematically proved that \(MC\) satisfies the inequality \(0\leq MC\leq K\) [22; 23].
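The memory-capacity evaluation just described amounts to a sequence of least-squares fits. A minimal sketch (the matrix layout is an assumption for illustration, and a truncated delay sum stands in for the infinite sum; it also assumes \(T-\tau\geq K\) so the Gram matrix is invertible):

```python
import numpy as np

def memory_capacity(X, s, tau_max=500):
    """Sketch of the MC evaluation: least-squares readout
    w = (X X^T)^{-1} X s for each delay tau, then MC = sum_tau M_tau
    per Eq. (12). X: (K, T) lead-out states; s: (T,) input series."""
    K, T = X.shape
    mc = 0.0
    for tau in range(1, tau_max + 1):
        Xt, st = X[:, tau:], s[:T - tau]         # align x(t) with s(t - tau)
        w = np.linalg.solve(Xt @ Xt.T, Xt @ st)  # trained output weights
        err = Xt.T @ w - st
        mc += 1.0 - np.mean(err**2) / np.mean(s**2)
    return mc
```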
Assuming that the input signal \(s(t)\) is Gaussian white noise with zero mean and variance \(\sigma^{2}\), we calculated both the MCLE, \(\lambda\), and the memory capacity, \(MC\). The results are plotted as functions of \(\sigma\) in Figure 4(a) and (b). As previously noted, an increase in the input magnitude leads to a decrease in the MCLE (as seen in Fig.4(a)), with the result that the plot in Fig.4(c) shifts leftward as \(\sigma\) increases.

Figure 3: (a) Relation between the coupling strength, \(g\), and the critical input partiality, \(p_{c}\), given by Eq.(9) for several values of the sparsity, \(\alpha\). (b) The same data as in (a) plotted with respect to \(\lambda_{0}\) rather than \(g\). It is seen that when plotted in this manner, the data for \(p_{c}\) fall along the same curve for each value of \(\alpha\) considered.

From Eq.(9), \(p_{c}\) is found to be approximately \(0.074\) under the conditions employed in Fig.4. When \(p=0.15\) and \(p=0.50\), the plots intersect the vertical line \(\lambda=0\) in Fig.4(c), as our theory predicts, and the memory capacity reaches its maximum value near \(\lambda=0\). Contrastingly, the plots with \(p=0.05\) and \(p=0.07\) remain in the chaotic domain (\(\lambda>0\)), and the memory capacity remains relatively low. It is thus seen that once input connections have been built such that \(p\) exceeds \(p_{c}\), optimal computational capability can be realized simply by amplifying the input signals appropriately. This finding should be helpful for the physical reservoir computing paradigm, because amplifying input signals is generally easier and more cost-effective than adding new input connections. In the present work, we have examined a partial-input RNN with rate neurons and have analytically shown the existence of a critical input partiality \(p_{c}\) that determines whether the chaotic activity can be suppressed by input signals. Our theory can be applied to realistic situations in which an RNN receives input signals with non-Gaussian statistics or temporal correlations, because we do not assume any particular statistics of the input signals, in contrast to previous theoretical works [12; 13; 14; 15]. In a future work, we will investigate whether there exists a critical input partiality in other types of RNNs, such as those with heavy-tailed recurrent weights [24] or unsaturated activation functions [25]. In addition, we have confirmed that the memory capacity is maximized near the critical value of the MCLE, \(\lambda=0\), which corresponds to the "edge of chaos." Our theory suggests that we can readily construct a partial-input RNN at the edge of chaos by tuning the input signals, as long as the input partiality exceeds \(p_{c}\). The present study provides a possible novel approach to designing reservoir computing. S.T. was supported by JSPS KAKENHI Grant No. JP22J21559. T.A. was supported by JSPS KAKENHI Grant No. JP20K21810, No. JP20H04144, and No. JP20K20520.

## References

* Rajan _et al._ [2016] K. Rajan, C. Harvey, and D. Tank, Neuron **90**, 128 (2016).
* Laje and Buonomano [2013] R. Laje and D. Buonomano, Nat. Neurosci. **16**, 925 (2013).
* Enel _et al._ [2016] P. Enel, E. Procyk, R. Quilodran, and P. F. Dominey, PLOS Comput. Biol. **12**, 1 (2016).
* Jaeger [2001] H. Jaeger, _The "echo state" approach to analysing and training recurrent neural networks_, Tech. Rep. GMD Report 148 (German National Research Center for Information Technology, 2001).
* Maass _et al._ [2002] W. Maass, T. Natschlager, and H. Markram, Neural Comput. **14**, 2531 (2002).
* Nakajima and Fischer [2021] K. Nakajima and I. Fischer, _Reservoir Computing: Theory, Physical Implementations, and Applications_ (Springer, 2021).
* Bertschinger and Natschlager [2004] N. Bertschinger and T. Natschlager, Neural Comput. **16**, 1413 (2004).
* Morales _et al._ [2023] G. B. Morales, S. di Santo, and M. A. Munoz, Proceedings of the National Academy of Sciences **120**, e2208998120 (2023).
* Dahmen _et al._ [2019] D. Dahmen, S. Grun, M. Diesmann, and M. Helias, Proceedings of the National Academy of Sciences **116**, 13051 (2019).
* Pikovsky and Politi [2016] A. Pikovsky and A. Politi, _Lyapunov Exponents: A Tool to Explore Complex Dynamics_ (Cambridge University Press, 2016).
* Sompolinsky _et al._ [1988] H. Sompolinsky, A. Crisanti, and H. J. Sommers, Phys. Rev. Lett. **61**, 259 (1988).
* Molgedey _et al._ [1992] L. Molgedey, J. Schuchhardt, and H. G. Schuster, Phys. Rev. Lett. **69**, 3717 (1992).
* Massar and Massar [2013] M. Massar and S. Massar, Phys. Rev. E **87**, 042809 (2013).
* Haruna and Nakajima [2019] T. Haruna and K. Nakajima, Phys. Rev. E **100**, 062312 (2019).
* Schuecker _et al._ [2018] J. Schuecker, S. Goedeke, and M. Helias, Phys. Rev. X **8**, 041029 (2018).
* Rajan _et al._ [2010] K. Rajan, L. F. Abbott, and H. Sompolinsky, Phys. Rev. E **82**, 011903 (2010).
* Sussillo and Abbott [2009] D. Sussillo and L. F. Abbott, Neuron **63**, 544 (2009).

Figure 4: Relation between the memory capacity, \(MC\), and the maximum conditional Lyapunov exponent (MCLE), \(\lambda\), for network size \(N=1000\), coupling strength \(g=1.5\), sparsity \(\alpha=1.0\), and number of lead-out nodes \(K=10\). The values of (a) \(\lambda\) and (b) \(MC\) are respectively plotted as functions of the standard deviation of the input signals, \(\sigma\). (c) Each plot represents \((\lambda,MC/K)\) with various values of \(\sigma\) (\(0.01\leq\sigma\leq 20\)). The sum of the \(M_{\tau}\) is calculated up to \(\tau=500\).

* Churchland _et al._ [2010] M. M. Churchland _et al._, Nat. Neurosci. **13**, 369 (2010).
* Wildenberg _et al._ [2021] G. A. Wildenberg, M. R. Rosen, J. Lundell, D. Paukner, D. J. Freedman, and N. Kasthuri, Cell Rep. **36**, 109709 (2021).
* [20] Supplemental Material.
* Ahmadian _et al._ [2015] Y. Ahmadian, F. Fumarola, and K. D. Miller, Phys. Rev. E **91**, 012820 (2015).
* Jaeger [2001] H. Jaeger, _Short term memory in echo state networks_, Tech. Rep. GMD Report 152 (German National Research Center for Information Technology, 2001).
* Dambre _et al._ [2012] J. Dambre, D. Verstraeten, B. Schrauwen, and S. Massar, Sci. Rep. **2**, 514 (2012).
* Kusmierz _et al._ [2020] L. Kusmierz, S. Ogawa, and T. Toyoizumi, Phys. Rev. Lett. **125**, 028101 (2020).
* Ahmadian _et al._ [2013] Y. Ahmadian, D. B. Rubin, and K. D. Miller, Neural Computation **25**, 1994 (2013).

**Supplemental Material: Suppression of chaos in a partially driven recurrent neural network**

Shotaro Takasu and Toshio Aoyagi

_Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan_

Footnote 1: [email protected]

## I Mean-field theory for a partially driven recurrent neural network

Here we derive the statistical properties of a partially driven recurrent neural network (Eqs.(3)-(4) in the main text) by applying a mean-field approach [1; 2]. As mentioned in the main text, the values of \(x_{i}(t)\) and \(y_{k}(t)\) obey Gaussian distributions in the limit of a large network size \(N\rightarrow\infty\) because of the central limit theorem. Thus, it is sufficient to determine their means and variances.
Taking the average of the evolution equations of \(x_{i}(t)\) and \(x_{i}(t)^{2}\) over realizations of the weights \(J_{ij}\) and \(u_{i}\), we obtain \[\langle x_{i}(t+1)\rangle=\sum_{j=1}^{pN}\langle J_{ij}\rangle\langle\phi(x_{j}(t))\rangle+\sum_{k=pN+1}^{N}\langle J_{ik}\rangle\langle\phi(y_{k}(t))\rangle+\langle u_{i}\rangle s(t)=0,\] (S.1) \[\begin{split}\langle x_{i}(t+1)^{2}\rangle&=\sum_{j,j^{\prime}=1}^{pN}\langle J_{ij}J_{ij^{\prime}}\rangle\langle\phi(x_{j}(t))\phi(x_{j^{\prime}}(t))\rangle+\sum_{k,k^{\prime}=pN+1}^{N}\langle J_{ik}J_{ik^{\prime}}\rangle\langle\phi(y_{k}(t))\phi(y_{k^{\prime}}(t))\rangle+\langle u_{i}^{2}\rangle s(t)^{2}\\ &=p\alpha g^{2}\big\langle\phi(x_{j}(t))^{2}\big\rangle+(1-p)\alpha g^{2}\big\langle\phi(y_{k}(t))^{2}\big\rangle+s(t)^{2}.\end{split}\] (S.2) Similarly, \(\langle y_{i}(t+1)\rangle\) and \(\langle y_{i}(t+1)^{2}\rangle\) are given by \[\langle y_{i}(t+1)\rangle=0,\] (S.3) \[\langle y_{i}(t+1)^{2}\rangle=p\alpha g^{2}\big\langle\phi(x_{j}(t))^{2}\big\rangle+(1-p)\alpha g^{2}\big\langle\phi(y_{k}(t))^{2}\big\rangle.\] (S.4) Note that we have used the assumption that \(\phi(x_{j})\) and \(\phi(y_{k})\) are independent of their incoming weights \(J_{ij}\) and \(J_{ik}\). This assumption is justified in the limit \(N\rightarrow\infty\), as can be shown using the generating-function formalism [3; 4]. Introducing the notation \(K(t):=\langle y_{i}(t)^{2}\rangle\), the Gaussian distributions exhibited by \(x_{i}(t)\) and \(y_{i}(t)\) can be expressed as follows: \[x_{i}(t+1)\sim\mathcal{N}(0,K(t+1)+s(t)^{2}),\] (S.5a) \[y_{i}(t+1)\sim\mathcal{N}(0,K(t+1)).\] (S.5b) From Eqs.(S.4) and (S.5), we can directly calculate \(K(t+1)\), obtaining \[\begin{split}K(t+1)&=p\alpha g^{2}\int Dx\,\phi^{2}\left(\sqrt{K(t)+s(t-1)^{2}}x\right)+(1-p)\alpha g^{2}\int Dy\,\phi^{2}\left(\sqrt{K(t)}y\right)\\ &=-\alpha g^{2}+\frac{4}{\pi}\alpha g^{2}\left((1-p)\arctan\sqrt{1+\pi K(t)}+p\arctan\sqrt{1+\pi(K(t)+s(t-1)^{2})}\right),\end{split}\] (S.6) where \(\int Dx:=\int dx\frac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}}\).
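The closed form in (S.6) rests on the Gaussian integral of \(\phi^{2}\); it can be spot-checked numerically, e.g.:

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def lhs(K):
    """Gaussian integral  int Dx phi^2(sqrt(K) x)  with phi(x) = erf(sqrt(pi)/2 x)."""
    f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi) \
        * erf(0.5 * np.sqrt(np.pi) * np.sqrt(K) * x)**2
    return quad(f, -np.inf, np.inf)[0]

def rhs(K):
    """Closed form used in (S.6): -1 + (4/pi) arctan sqrt(1 + pi K)."""
    return -1 + (4 / np.pi) * np.arctan(np.sqrt(1 + np.pi * K))

for K in (0.1, 1.0, 10.0):
    print(K, lhs(K), rhs(K))   # the two columns should agree
```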
## II Derivation of the maximum conditional Lyapunov exponent

Here we derive the maximum conditional Lyapunov exponent \(\lambda\) analytically. Linearizing the evolution equation of the RNN, we obtain the variational equation describing the evolution of an infinitesimal perturbation \(\mathbf{\delta}(t)\), \[\mathbf{\delta}(t+1)=\mathbf{J}\mathbf{\phi}^{\prime}(t)\mathbf{\delta}(t),\] (S.7) where the matrix \(\mathbf{\phi}^{\prime}(t)\) is the diagonal matrix whose \(i\)th diagonal entry is \(\phi^{\prime}(x_{i}(t))\) for \(1\leq i\leq pN\) or \(\phi^{\prime}(y_{i}(t))\) for \(pN+1\leq i\leq N\). The typical growth rate, \(\|\mathbf{\delta}(t+1)\|/\|\mathbf{\delta}(t)\|\), is determined by the spectral radius of the Jacobian \(\mathbf{J}\mathbf{\phi}^{\prime}(t)\). According to random matrix theory [5], the spectral radius, \(\rho(t)\), is given by \[\rho(t)^{2}=p\alpha g^{2}\left\langle\phi^{\prime}(x(t))^{2}\right\rangle+(1-p)\alpha g^{2}\left\langle\phi^{\prime}(y(t))^{2}\right\rangle,\] (S.8) in the limit of large network size. Applying the results of the mean-field theory (Eq.(S.5)), we can calculate \(\lambda\), yielding \[\begin{split}\lambda&:=\lim_{T\to\infty,\mathbf{\delta}(0)\to 0}\frac{1}{T}\log\frac{\|\mathbf{\delta}(T)\|}{\|\mathbf{\delta}(0)\|}=\lim_{T\to\infty,\mathbf{\delta}(0)\to 0}\frac{1}{2T}\sum_{t=0}^{T-1}\log\frac{\|\mathbf{\delta}(t+1)\|^{2}}{\|\mathbf{\delta}(t)\|^{2}}\\ &\approx\lim_{T\to\infty}\frac{1}{2T}\sum_{t=0}^{T-1}\log\rho(t)^{2}\\ &=\lim_{T\to\infty}\frac{1}{2T}\sum_{t=0}^{T-1}\log\left(p\alpha g^{2}\int Dx\,\phi^{\prime 2}\left(\sqrt{K(t)+s(t-1)^{2}}x\right)+(1-p)\alpha g^{2}\int Dy\,\phi^{\prime 2}\left(\sqrt{K(t)}y\right)\right)\\ &=\lim_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T}\frac{1}{2}\log\alpha g^{2}\left(\frac{p}{\sqrt{1+\pi(K(t)+s(t-1)^{2})}}+\frac{1-p}{\sqrt{1+\pi K(t)}}\right),\end{split}\] (S.9) where we used \(\phi(x)=\text{erf}(\frac{\sqrt{\pi}}{2}x)\) in the last equation.
2303.07669
AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph Neural Networks
AutoML has demonstrated remarkable success in finding an effective neural architecture for a given machine learning task defined by a specific dataset and an evaluation metric. However, most present AutoML techniques consider each task independently from scratch, which requires exploring many architectures, leading to high computational cost. Here we propose AutoTransfer, an AutoML solution that improves search efficiency by transferring the prior architectural design knowledge to the novel task of interest. Our key innovation includes a task-model bank that captures the model performance over a diverse set of GNN architectures and tasks, and a computationally efficient task embedding that can accurately measure the similarity among different tasks. Based on the task-model bank and the task embeddings, we estimate the design priors of desirable models of the novel task, by aggregating a similarity-weighted sum of the top-K design distributions on tasks that are similar to the task of interest. The computed design priors can be used with any AutoML search algorithm. We evaluate AutoTransfer on six datasets in the graph machine learning domain. Experiments demonstrate that (i) our proposed task embedding can be computed efficiently, and that tasks with similar embeddings have similar best-performing architectures; (ii) AutoTransfer significantly improves search efficiency with the transferred design priors, reducing the number of explored architectures by an order of magnitude. Finally, we release GNN-Bank-101, a large-scale dataset of detailed GNN training information of 120,000 task-model combinations to facilitate and inspire future research.
Kaidi Cao, Jiaxuan You, Jiaju Liu, Jure Leskovec
2023-03-14T07:23:16Z
http://arxiv.org/abs/2303.07669v1
# AutoTransfer: AutoML with Knowledge Transfer - An Application to Graph Neural Networks ###### Abstract AutoML has demonstrated remarkable success in finding an effective neural architecture for a given machine learning task defined by a specific dataset and an evaluation metric. However, most present AutoML techniques consider each task independently from scratch, which requires exploring many architectures, leading to high computational costs. Here we propose AutoTransfer, an AutoML solution that improves search efficiency by transferring the prior architectural design knowledge to the novel task of interest. Our key innovation includes a _task-model bank_ that captures the model performance over a diverse set of GNN architectures and tasks, and a computationally efficient _task embedding_ that can accurately measure the similarity among different tasks. Based on the task-model bank and the task embeddings, we estimate the design priors of desirable models of the novel task, by aggregating a similarity-weighted sum of the top-K design distributions on tasks that are similar to the task of interest. The computed design priors can be used with any AutoML search algorithm. We evaluate AutoTransfer on six datasets in the graph machine learning domain. Experiments demonstrate that (i) our proposed task embedding can be computed efficiently, and that tasks with similar embeddings have similar best-performing architectures; (ii) AutoTransfer significantly improves search efficiency with the transferred design priors, reducing the number of explored architectures by _an order of magnitude_. Finally, we release GNN-Bank-101, a large-scale dataset of detailed GNN training information of 120,000 task-model combinations to facilitate and inspire future research. ## 1 Introduction Deep neural networks are highly modular, requiring many design decisions to be made regarding network architecture and hyperparameters. These design decisions form a search space that is nonconvex and costly even for experts to optimize over, especially when the optimization must be repeated from scratch for each new use case. Automated machine learning (AutoML) is an active research area that aims to reduce the human effort required for architecture design that usually covers hyperparameter optimization and neural architecture search. AutoML has demonstrated success (Zoph and Le, 2016; Pham et al., 2018; Zoph et al., 2018; Cai et al., 2018; He et al., 2018; Guo et al., 2020; Erickson et al., 2020; LeDell and Poirier, 2020) in many application domains. Finding a reasonably good model for a new learning _task1_ in a computationally efficient manner is crucial for making deep learning accessible to domain experts with diverse backgrounds. Efficient AutoML is especially important in domains where the best architectures/hyperparameters are highly sensitive to the task. A notable example is the domain of graph learning2. _First_, graph learning methods receive input data composed of _a variety of data types_ and optimize over tasks that span an equally _diverse set of domains and modalities_ such as recommendation (Ying et al., 2018; He et al., 2020), physical simulation (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2020), and bioinformatics (Zitnik et al., 2018). This differs from computer vision and natural language processing where the input data has a predefined, fixed structure that can be shared across different neural architectures. 
_Second_, neural networks that operate on graphs come with _a rich set of design choices_ and _a large set of parameters to explore_. However, unlike other domains where a few pre-trained architectures such as ResNet (He et al., 2016) and GPT-3 (Brown et al., 2020) dominate the benchmarks, it has been shown that the best graph neural network (GNN) design is highly task-dependent (You et al., 2020). Although AutoML as a research domain is evolving fast, existing AutoML solutions have massive computational overhead when finding a good model for a _new_ learning task is the goal. Most present AutoML techniques consider each task independently and in isolation; they therefore require redoing the search from scratch for each new task. This approach ignores the potentially valuable architectural design knowledge obtained from previous tasks, and inevitably leads to a high computational cost. The issue is especially significant in the graph learning domain Gao et al. (2019); Zhou et al. (2019), due to the challenges of diverse task types and the huge design space discussed above. Here we propose AutoTransfer3, an AutoML solution that drastically improves search efficiency by transferring previous architectural design knowledge to the task of interest. Our key innovation is to introduce a _task-model bank_ that stores the performance of a diverse set of GNN architectures and tasks to guide the search algorithm. To enable knowledge transfer, we define a _task embedding_ space such that tasks close in the embedding space have similar corresponding top-performing architectures. The challenge here is that the task embedding needs to capture the performance rankings of different architectures on different datasets while being efficient to compute. Our innovation here is to embed a task by using the condition number of the Fisher Information Matrix of various _randomly initialized_ models, together with a learning scheme that has an empirical generalization guarantee. This way we implicitly capture the properties of the learning task while being orders of magnitude faster (within seconds). We then estimate the design prior of desirable models for the new task by aggregating design distributions on tasks that are close to the task of interest. Finally, we initiate a hyperparameter search algorithm with the computed task-informed design prior. Footnote 3: Source code is available at [https://github.com/snap-stanford/AutoTransfer](https://github.com/snap-stanford/AutoTransfer). We evaluate AutoTransfer on six datasets, including both node classification and graph classification tasks. We show that our proposed task embeddings can be computed efficiently and that the distances measured between tasks correlate highly (0.43 Kendall correlation) with model performance rankings. Furthermore, we show that AutoTransfer significantly improves search efficiency when using the transferred design prior. AutoTransfer reduces the number of explored architectures needed to reach a target accuracy by _an order of magnitude_ compared to SOTA. Finally, we release GNN-Bank-101--the first large-scale database containing detailed performance records for 120,000 task-model combinations which were trained with 16,128 GPU hours--to facilitate future research. ## 2 Related Work In this section, we summarize the related work on AutoML regarding its applications to GNNs, the common search algorithms, and pioneering work on transfer learning and task embeddings. 
**AutoML for GNNs.** Neural architecture search (NAS), a unique and popular form of AutoML for deep learning, can be divided into two categories: multi-trial NAS and one-shot NAS. During multi-trial NAS, each sampled architecture is trained separately. GraphNAS (Gao et al., 2020) and Auto-GNN (Zhou et al., 2019) are typical multi-trial NAS algorithms on GNNs, which adopt an RNN controller that learns to suggest better sets of configurations through reinforcement learning. One-shot NAS (_e.g._, (Liu et al., 2018; Qin et al., 2021; Li et al., 2021)) involves encapsulating the entire model space in one super-model, training the super-model once, and then iteratively sampling sub-models from the super-model to find the best one. In addition, there is work that explicitly studies fine-grained design choices such as data augmentation (You et al., 2021), message passing layer type (Cai et al., 2021; Ding et al., 2021; Zhao et al., 2021), and graph pooling (Wei et al., 2021). Notably, AutoTransfer is the _first_ AutoML solution for GNNs that efficiently transfers design knowledge across tasks. **HPO Algorithms.** Hyperparameter optimization (HPO) algorithms search for the optimal model hyperparameters by iteratively suggesting a set of hyperparameters and evaluating their performance. Random search samples hyperparameters from the search space with equal probability. Despite not learning from previous trials, random search is commonly used for its simplicity and is much more efficient than grid search (Bergstra and Bengio, 2012). The TPE algorithm (Bergstra et al., 2011) builds a probabilistic model of task performance over the hyperparameter space and uses the results of past trials to choose the most promising next configuration to train, which the TPE algorithm defines as maximizing the Expected Improvement value (Jones, 2001). Evolutionary algorithms (Real et al., 2017; Jaderberg et al., 2017) train multiple models in parallel and replace poorly performing models with "mutated" copies of the current best models. AutoTransfer is a general AutoML solution and can be applied in combination with any of these HPO algorithms. **Transfer Learning in AutoML.** Wong et al. (2018) proposed to transfer knowledge across tasks by reloading the controller of reinforcement learning search algorithms. However, this method assumes that the search space on different tasks starts from the same learned prior. Unlike AutoTransfer, it cannot address the core challenge in GNN AutoML: the best GNN design is highly task-specific. GraphGym (You et al., 2020) attempts to transfer the best architecture design directly with a metric space that measures task similarity. GraphGym computes task similarity by training a set of 12 "anchor models" to convergence, which is computationally expensive. In contrast, AutoTransfer designs lightweight task embeddings requiring minimal computational overhead. Additionally, Zhao and Bilen (2021); Li et al. (2021) propose conducting architecture search on a proxy subset of the whole dataset and later transferring the best searched architecture to the full dataset. Jeong et al. (2021) studies a similar setting in the vision domain. **Task Embedding.** There is prior research trying to quantify task embeddings and similarities. Similar to GraphGym, Taskonomy (Zamir et al., 2018) estimates the task affinity matrix by summarizing final losses/evaluation metrics using an Analytic Hierarchy Process (Saaty, 1987). 
From a different perspective, Task2Vec (Achille et al., 2019) generates task embeddings for a given task using the Fisher Information Matrix associated with a pre-trained probe network. This probe network is shared across tasks and allows Task2Vec to estimate the Fisher Information Matrix of different image datasets. Le et al. (2022) extends a similar idea to neural architecture search. The aforementioned task embeddings cannot be directly applied to GNNs as the inputs do not align across datasets. AutoTransfer avoids this bottleneck by using asymptotic statistics of the Fisher Information Matrix with randomly initialized weights. ## 3 Problem Formulation and Preliminaries We first introduce formal definitions of data structures relevant to AutoTransfer. Figure 1: **Overview of AutoTransfer. Left: We introduce GNN-Bank-101, a large database containing a diverse set of GNN architectures and hyperparameters applied to different tasks, along with their training/evaluation statistics. Middle: We introduce a task embedding space, where each point corresponds to a different task. Tasks close in the embedding space have similar corresponding top-performing models. Right: Given a new task of interest, we guide the AutoML search by referencing the design distributions of the most similar tasks in the task embedding space.** **Definition 1** (Task): _We denote a task as \(T=(\mathcal{D},\mathcal{L}(\cdot))\), consisting of a dataset \(\mathcal{D}\) and a loss function \(\mathcal{L}(\cdot)\) related to the evaluation metric._ For each training attempt on a task \(T^{(i)}\), we can record its model architecture \(M_{j}\), hyperparameters \(H_{j}\), and corresponding value of the loss \(l_{j}\), _i.e._, \((M_{j},H_{j},l_{j})\). We propose to maintain a task-model bank to facilitate knowledge transfer to future novel tasks. **Definition 2** (Task-Model Bank): _A task-model bank \(\mathcal{B}\) is defined as a collection of tasks, each with multiple training attempts, in the form of \(\mathcal{B}=\{(T^{(i)},\{(M_{j}^{(i)},H_{j}^{(i)},l_{j}^{(i)})\})\}\)._ **AutoML with Knowledge Transfer.** Suppose we have a task-model bank \(\mathcal{B}\). Given a novel task \(T^{(n)}\) which has not been seen before, our goal is to quickly find a model that works reasonably well on the novel task by utilizing knowledge from the task-model bank. In this paper, we focus on AutoML for graph learning tasks, though our developed technique is general and can be applied to other domains. We define the input graph as \(G=\{V,E\}\), where \(V\) is the node set and \(E\subseteq V\times V\) is the edge set. Furthermore, let \(y\) denote its output labels, which can be node-level, edge-level, or graph-level. A GNN parameterized by weights \(\theta\) outputs a posterior distribution \(\mathcal{P}(G,y,\theta)\) for label predictions. ## 4 Proposed Solution: AutoTransfer In this section, we introduce the proposed AutoTransfer solution. AutoTransfer uses the _task embedding space_ as a tool to understand the relevance of previous architectural designs to the target task. The designed task embedding captures the performance rankings of different architectures on different tasks while also being efficient to compute. We first introduce a theoretically motivated solution to extract a scale-invariant performance representation of each task-model pair. We use these representations to construct task features and further learn task embeddings. These embeddings form the task embedding space that we finally use during the AutoML search. 
### Basics of the Fisher Information Matrix (FIM) Given a GNN defined above, its Fisher Information Matrix (FIM) \(F\) is defined as \[F=\mathbb{E}_{G,y}[\nabla_{\theta}\log\mathcal{P}(G,y,\theta)\;\nabla_{\theta} \log\mathcal{P}(G,y,\theta)^{\top}],\] which formally is the expected covariance of the scores with respect to the model parameters. There are two popular geometric views of the FIM. First, the FIM is an upper bound of the Hessian and coincides with the Hessian if the gradient is 0. Thus, the FIM characterizes the local landscape of the loss function near the global minimum. Second, similar to the Hessian, the FIM models the loss landscape with respect not to the input space, but to the parameter space. Figure 2: **Pipeline for extracting task embeddings. Left: To efficiently embed a task, we first extract task features by concatenating features measured from \(R\) randomly initialized anchor models. Then, we introduce a projection function \(g(\cdot)\) with learned weights to transform the task features into task embeddings. Right: Training objective for optimizing \(g(\cdot)\) with triplet supervision.** In the information geometry view, if we add a small perturbation to the parameter space, we have \[\text{KL}(\mathcal{P}(G,y,\theta)\|\mathcal{P}(G,y,\theta+d\theta))=d\theta^{ \top}F\,d\theta,\] where \(\text{KL}(\cdot,\cdot)\) stands for the Kullback-Leibler divergence. This means that the parameter space of a model forms a Riemannian manifold and the FIM works as its Riemannian metric. The FIM thus allows us to quantify the importance of a model's weights in a way that is applicable to different architectures. ### FIM-based Task Features **Scale-invariant Representation of Task-Model Pairs.** We aim to find a scale-invariant representation for each task-model pair which will form the basis for constructing task features. The major challenge in using the FIM to represent GNN performance is that graph datasets do not have a universal, fixed input structure, so it is infeasible to find a single pre-trained model and extract its FIM. However, training multiple networks poses a problem, as the FIMs computed for different networks are not directly comparable. We choose to use multiple networks but additionally propose to use asymptotic statistics of the FIM associated with randomly initialized weights. The theoretical justification for the relationship between the asymptotic statistics of the FIM and the trainability of neural networks was studied in (Karakida et al., 2019; Pennington and Worah, 2018), to which we refer the reader. We hypothesize that such a measure of trainability encodes loss landscapes and generalization ability and thus correlates with final model performance on the task. Another issue that relates to the input structures of graph datasets is that different models have different numbers of parameters. Aside from some specially designed architectures, _e.g._, (Lee et al., 2019; Ma et al., 2019), most GNN architecture designs can be represented as a sequence of pre-processing layers, message passing layers, and post-processing layers. Pre-processing and post-processing layers are multilayer perceptron (MLP) layers, whose dimensions vary across tasks due to different input/output structures. Message passing layers are commonly regarded as the key design for GNNs, and the number of their weight parameters can remain the same across tasks. 
In this light, we only consider the FIM with respect to the parameters in message passing layers, so that the number of parameters considered stays the same for all datasets. We note that such a formulation has its limitations, in the sense that it cannot cover all the GNN designs in the literature. We leave potential extensions with better coverage for future work. We further approximate the FIM by only considering its diagonal entries, which implicitly neglects the correlations between parameters. We note that this is common practice when analyzing the FIMs of deep neural networks, as the full FIM is massive (quadratic in the number of parameters) and infeasible to compute even on modern hardware. Similar to Pennington and Worah (2018), we consider the first two moments of the FIM, \[m_{1}=\frac{1}{n}\text{tr}[F]\quad\text{and}\quad m_{2}=\frac{1}{n}\text{tr}[F ^{2}], \tag{1}\] and use \(\alpha=m_{2}/m_{1}^{2}\) as the scale-invariant representation. The computed \(\alpha\) is lower bounded by 1 and captures how concentrated the spectrum is. A small \(\alpha\) indicates that the loss landscape is flat, and its corresponding model design enjoys fast first-order optimization and potentially better generalization. To encode label space information into each task, we propose to train only the last linear layer of each model on a given task, which can be done efficiently. The parameters in all other layers are frozen after being randomly initialized. We take the average over \(R\) initializations to estimate the average \(\bar{\alpha}\). **Constructing Task Features.** We denote task features as measures extracted from each task that characterize its important traits. The design of task features should reflect our final objective: to use these features to identify similar tasks and transfer the best design distributions. Thus, we select \(U\) model designs as anchor models and concatenate the scale-invariant representations \(\bar{\alpha}_{u}\) of each design as task features. To retain only the relative ranking among anchor model designs, we normalize the concatenated feature vector to unit norm. We let \(\mathbf{z}_{f}\) denote the normalized task feature. 
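To make this construction concrete, the following is a minimal sketch of how the diagonal-FIM moments of Eq. 1 and \(\alpha=m_{2}/m_{1}^{2}\) could be estimated. It assumes PyTorch, uses a toy MLP classifier in place of a GNN's message passing layers, and all names and dimensions are illustrative; it is not the released implementation.

```python
# A minimal sketch, assuming PyTorch; the toy MLP below stands in for the
# message-passing parameters of a GNN, and all names are illustrative.
import torch
import torch.nn.functional as F

def diag_fim_alpha(model, params, inputs, targets):
    """Estimate alpha = m2 / m1^2 (Eq. 1) from the diagonal of the empirical FIM."""
    sq_grads = [torch.zeros_like(p) for p in params]
    for x, y in zip(inputs, targets):
        log_prob = F.log_softmax(model(x.unsqueeze(0)), dim=-1)[0, y]
        grads = torch.autograd.grad(log_prob, params)
        for s, g in zip(sq_grads, grads):
            s += g.detach() ** 2                 # diagonal of grad grad^T
    diag = torch.cat([s.flatten() for s in sq_grads]) / len(inputs)
    m1 = diag.mean()                             # (1/n) tr[F]
    m2 = (diag ** 2).mean()                      # (1/n) tr[F^2] for diagonal F
    return (m2 / m1 ** 2).item()                 # alpha >= 1 by Cauchy-Schwarz

# Toy usage: average alpha over R random initializations of one anchor model.
R, alphas = 3, []
for _ in range(R):
    model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                                torch.nn.Linear(16, 4))
    hidden = list(model[0].parameters())  # stand-in for message-passing params
    x, y = torch.randn(32, 8), torch.randint(0, 4, (32,))
    alphas.append(diag_fim_alpha(model, hidden, x, y))
print(sum(alphas) / R)
```

Restricting the gradient to one designated block of parameters mirrors the restriction to message passing layers described above, which keeps the representations comparable across tasks.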
### From Task Features to Task Embeddings The task feature \(\mathbf{z}_{f}\) introduced above can be regarded as a means of feature engineering. We construct the feature vector with domain knowledge, but there is no guarantee it functions as anticipated. We thus propose to learn a projection function \(g(\cdot):\mathbb{R}^{U}\rightarrow\mathbb{R}^{D}\) that maps the task feature \(\mathbf{z}_{f}\) to the final task embedding \(\mathbf{z}_{e}=g(\mathbf{z}_{f})\). We do not have any pointwise supervision that can be used as the training objective. Instead, we consider the metric space defined by GraphGym. The distance function in GraphGym - computed using the Kendall rank correlation between performance rankings of anchor models trained on the two compared tasks - correlates nicely with our desired knowledge transfer objective. It is not meaningful to enforce that task embeddings mimic GraphGym's exact metric space, as GraphGym's metric space may still contain noise or may not fully align with the transfer objective. We consider a surrogate loss that enforces only the rank order among tasks. To illustrate, let us consider tasks \(T^{(i)}\), \(T^{(j)}\), \(T^{(k)}\) and their corresponding task embeddings \(\mathbf{z}_{e}^{(i)}\), \(\mathbf{z}_{e}^{(j)}\), \(\mathbf{z}_{e}^{(k)}\). Note that \(\mathbf{z}_{e}\) is normalized to unit norm, so \({\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(j)}\) measures the cosine similarity between tasks \(T^{(i)}\) and \(T^{(j)}\). Let \(d_{g}(\cdot,\cdot)\) denote the distance estimated by GraphGym. We want to enforce \[{\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(j)}>{\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(k)}\quad\text{if}\quad d_{g}(T^{(i)},T^{(j)})<d_{g}(T^{(i)},T^{(k)}).\] To achieve this, we use the margin ranking loss as our surrogate supervised objective function: \[\mathcal{L}_{r}(\mathbf{z}_{e}^{(i)},\mathbf{z}_{e}^{(j)},\mathbf{z}_{e}^{(k)},y)=\max(0, -y\cdot({\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(j)}-{\mathbf{z}_{e}^{(i)}}^{\top} \mathbf{z}_{e}^{(k)})+\text{margin}). \tag{2}\] Here, if \(d_{g}(T^{(i)},T^{(j)})<d_{g}(T^{(i)},T^{(k)})\), then the corresponding label is \(y=1\), and \(y=-1\) otherwise. Our final task embedding space is then a FIM-based metric space with a cosine distance function, where the distance is defined as \(d_{e}(T^{(i)},T^{(j)})=1-{\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(j)}\). Please refer to the detailed training pipeline in Algorithm 2 in the Appendix. 
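The following is a minimal sketch of how the triplet supervision of Eq. 2 could be set up. It assumes PyTorch; the task features, triplet labels, and all dimensions are random stand-ins for real features \(\mathbf{z}_{f}\) and GraphGym distances \(d_{g}\).

```python
# A minimal sketch, assuming PyTorch; features and labels are toy stand-ins.
import torch

g = torch.nn.Sequential(torch.nn.Linear(12, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, 8))   # projection g: R^U -> R^D
opt = torch.optim.Adam(g.parameters(), lr=1e-3)
loss_fn = torch.nn.MarginRankingLoss(margin=0.1)  # implements Eq. 2

def embed(z_f):
    z_e = g(z_f)
    return z_e / z_e.norm(dim=-1, keepdim=True)   # unit-normalize z_e

# Toy triplets (i, j, k); y = +1 when d_g(T_i, T_j) < d_g(T_i, T_k), else -1.
z_i, z_j, z_k = (torch.randn(16, 12) for _ in range(3))
y = (torch.rand(16) > 0.5).float() * 2 - 1

for _ in range(100):
    ei, ej, ek = embed(z_i), embed(z_j), embed(z_k)
    sim_ij = (ei * ej).sum(-1)                    # cosine similarity
    sim_ik = (ei * ek).sum(-1)
    loss = loss_fn(sim_ij, sim_ik, y)  # max(0, -y*(sim_ij - sim_ik) + margin)
    opt.zero_grad(); loss.backward(); opt.step()
```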
### AutoML Search Algorithm with Task Embeddings To transfer knowledge to a novel task, a naive idea would be to directly carry over the best model configuration from the closest task in the bank. However, even a high Kendall rank correlation between the model performance rankings of two tasks \(T^{(i)}\), \(T^{(j)}\) does not guarantee that the best model configuration on task \(T^{(i)}\) will also achieve the best performance on task \(T^{(j)}\). In addition, since task similarities are subject to noise, this naive solution may struggle when there exist multiple reference tasks that are all highly similar. To make the knowledge transfer more robust to such failure cases, we introduce the notion of design distributions that depend on the top-performing model designs, and propose to transfer design distributions rather than the best design configurations. Formally, consider a task \(T^{(i)}\) in the task-model bank \(\mathcal{B}\), associated with its trials \(\{(M^{(i)}_{j},H^{(i)}_{j},l^{(i)}_{j})\}\). We can summarize its designs as a list of configurations \(C=\{c_{1},\ldots,c_{W}\}\), such that all potential combinations of model architectures \(M\) and hyperparameters \(H\) fall under the Cartesian product of the configurations. For example, \(c_{1}\) could be the instantiation of the aggregation layers, and \(c_{2}\) could be the starting learning rate. We then define design distributions as random variables \(\texttt{c}_{1},\texttt{c}_{2},\ldots,\texttt{c}_{W}\), each corresponding to a hyperparameter. Each random variable \(\texttt{c}_{w}\) is defined as the frequency distribution of the design choices used in the top \(K\) trials. We multiply the distributions of the individual configurations \(\{\texttt{c}_{1},\ldots,\texttt{c}_{W}\}\) to approximate the task's overall design distribution, \(\mathcal{P}(C|T^{(i)})=\prod_{w}\mathcal{P}(\texttt{c}_{w}|T^{(i)})\). During inference, given a novel task \(T^{(n)}\), we select a subset \(\mathcal{S}\) of close tasks by thresholding task embedding distances, _i.e._, \(\mathcal{S}=\{T^{(i)}\,|\,d_{e}(T^{(n)},T^{(i)})\leq d_{\text{thres}}\}\). We then derive the transferred design prior \(\mathcal{P}_{t}(C|T^{(n)})\) of the novel task by weighting the design distributions from the close task subset \(\mathcal{S}\): \[\mathcal{P}_{t}(C|T^{(n)})=\frac{\sum_{T^{(i)}\in\mathcal{S}}\frac{1}{d_{e}(T^ {(n)},T^{(i)})}\mathcal{P}(C|T^{(i)})}{\sum_{T^{(i)}\in\mathcal{S}}\frac{1}{d_{ e}(T^{(n)},T^{(i)})}}. \tag{3}\] The inferred design prior for the novel task can then be used to guide various search algorithms. The most natural choice in the few-trial regime is random search. Rather than sampling each design configuration from a uniform distribution, we propose to sample from the task-informed design prior \(\mathcal{P}_{t}(C|T^{(n)})\). Please refer to Appendix A for how we augment other search algorithms. For AutoTransfer, we can preprocess the task-model bank \(\mathcal{B}\) into \(\mathcal{B}_{p}=\{((\mathcal{D}^{(i)},\mathcal{L}^{(i)}(\cdot)),\mathbf{z}_{e}^{(i)},\mathcal{P}(C|T^{(i)}))\}\), as our pipeline only requires the task embedding \(\mathbf{z}_{e}^{(i)}\) and the design distribution \(\mathcal{P}(C|T^{(i)})\) rather than the detailed training trials. The full search pipeline is summarized in Algorithm 1. ``` 0: A processed task-model bank \(\mathcal{B}_{p}=\{((\mathcal{D}^{(i)},\mathcal{L}^{(i)}(\cdot)),\mathbf{z}_{e}^{(i)},\mathcal{P}(C|T^{(i)}))\}\), a novel task \(T^{(n)}=\big{(}\mathcal{D}^{(n)},\mathcal{L}^{(n)}(\cdot)\big{)}\), \(U\) anchor models \(M_{1},\ldots,M_{U}\), and \(R\) the number of repeats. 1:for\(u=1\) to \(U\)do 2:for\(r=1\) to \(R\)do 3: Initialize weights \(\theta\) of anchor model \(M_{u}\) randomly 4: Estimate the FIM \(F\leftarrow\mathbb{E}_{(G,y)\sim\mathcal{D}^{(n)}}[\nabla_{\theta}\log\mathcal{P}(G,y,\theta)\;\nabla_{\theta}\log\mathcal{P}(G,y,\theta)^{\top}]\) 5: Extract the scale-invariant representation \(\alpha_{u}^{(r)}\gets m_{2}/m_{1}^{2}\) following Eq. 1 6:endfor 7:\(\bar{\alpha}_{u}\leftarrow\text{mean}(\alpha_{u}^{(1)},\alpha_{u}^{(2)},\ldots,\alpha_{u}^{(R)})\) 8:endfor 9:\(\mathbf{z}_{f}^{(n)}\leftarrow\text{concat}(\bar{\alpha}_{1},\bar{\alpha}_{2},\ldots,\bar{\alpha}_{U})\), normalized to unit norm 10:\(\mathbf{z}_{e}^{(n)}\gets g(\mathbf{z}_{f}^{(n)})\) 11: Select the close task subset \(\mathcal{S}\leftarrow\{T^{(i)}\,|\,1-{\mathbf{z}_{e}^{(n)}}^{\top}\mathbf{z}_{e}^{(i)}\leq d_{\text{thres}}\}\) 12: Get the design prior \(\mathcal{P}_{t}(C|T^{(n)})\) by aggregating the subset \(\mathcal{S}\) following Eq. 3 13: Start an HPO search algorithm with the task-informed design prior \(\mathcal{P}_{t}(C|T^{(n)})\) ``` **Algorithm 1** Summary of the AutoTransfer search pipeline 
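To illustrate the inference-time transfer step, here is a minimal sketch of the distance-weighted aggregation of Eq. 3 and the prior-guided sampling that replaces uniform random search. The bank entries, hyperparameter names, and embedding values below are toy stand-ins, not data from GNN-Bank-101.

```python
# A minimal sketch of Eq. 3 and prior-guided sampling; all values are toys.
import numpy as np

def transfer_design_prior(bank, z_new, d_thres=0.5):
    """Aggregate per-task design distributions, weighted by 1/distance (Eq. 3)."""
    close = [(1.0 - z_new @ z_i, dist) for z_i, dist in bank
             if 1.0 - z_new @ z_i <= d_thres]      # assumes >= 1 close task
    w = np.array([1.0 / max(d, 1e-8) for d, _ in close])
    w /= w.sum()
    prior = {}
    for hp in close[0][1]:                         # e.g. "agg", "lr"
        prior[hp] = {c: float(sum(wi * dist[hp].get(c, 0.0)
                                  for wi, (_, dist) in zip(w, close)))
                     for c in close[0][1][hp]}
    return prior

def sample_config(prior, rng):
    """Draw one configuration from the factorized task-informed prior."""
    out = {}
    for hp, p in prior.items():
        keys, probs = list(p), np.array(list(p.values()))
        out[hp] = rng.choice(keys, p=probs / probs.sum())
    return out

# Toy usage with two stored tasks (unit embeddings + design distributions).
bank = [(np.array([1.0, 0.0]), {"agg": {"mean": 0.7, "max": 0.3},
                                "lr": {"0.01": 0.6, "0.001": 0.4}}),
        (np.array([0.8, 0.6]), {"agg": {"mean": 0.2, "max": 0.8},
                                "lr": {"0.01": 0.5, "0.001": 0.5}})]
rng = np.random.default_rng(0)
print(sample_config(transfer_design_prior(bank, np.array([0.9, 0.436])), rng))
```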
## 5 Experiments ### Experimental Setup **Task-Model Bank: GNN-Bank-101.** To facilitate AutoML research with knowledge transfer, we collected GNN-Bank-101 as the first large-scale graph database that records reproducible design configurations and detailed training performance on a variety of tasks. Specifically, GNN-Bank-101 currently includes six tasks for node classification (AmazonComputers (Shchur et al., 2018), AmazonPhoto (Shchur et al., 2018), CiteSeer (Yang et al., 2016), CoauthorCS (Shchur et al., 2018), CoauthorPhysics (Shchur et al., 2018), Cora (Yang et al., 2016)) and six tasks for graph classification (PROTEINS (Ivanov et al., 2019), BZR (Ivanov et al., 2019), COX2 (Ivanov et al., 2019), DD (Ivanov et al., 2019), ENZYMES (Ivanov et al., 2019), IMDB-M (Ivanov et al., 2019)). Our design space follows (You et al., 2020), and we extend it to include various commonly adopted graph convolution and activation layers. We extensively run 10,000 different models for each task, leading to 120,000 total task-model combinations, and record all training information, including the train/val/test loss. **Benchmark Datasets.** We benchmark AutoTransfer on six different datasets following prior work (Qin et al., 2021). Our datasets include three standard node classification datasets (CoauthorPhysics (Shchur et al., 2018), CoraFull (Bojchevski and Gunnemann, 2017) and OGB-Arxiv (Hu et al., 2020)), as well as three standard benchmark graph classification datasets (COX2 (Ivanov et al., 2019), IMDB-M (Ivanov et al., 2019) and PROTEINS (Ivanov et al., 2019)). CoauthorPhysics and CoraFull are transductive node classification datasets, so we randomly assign nodes into train/valid/test sets following a 50%:25%:25% split (Qin et al., 2021). We randomly split graphs following an 80%:10%:10% split for the three graph classification datasets (Qin et al., 2021). We follow the default train/valid/test split for the OGB-Arxiv dataset (Hu et al., 2020). To make sure there is no information leakage, we temporarily _remove_ all records related to a dataset from our task-model bank whenever that dataset is one we benchmark on. **Baselines.** We compare our method with state-of-the-art approaches for GNN AutoML. We use GCN and GAT with default architectures following their original implementations as baselines. For multi-trial NAS methods, we consider GraphNAS (Gao et al., 2020). For one-shot NAS methods, we include DARTS (Liu et al., 2018) and GASSO (Qin et al., 2021). GASSO is designed for transductive settings, so we omit it for the graph classification benchmarks. We further provide results of HPO algorithms based on our proposed search space as baselines: Random, Evolution, TPE (Bergstra et al., 2011) and HyperBand (Li et al., 2017). By default, we allow at most 30 search trials for all algorithms, _i.e._, an algorithm can train 30 different models and keep the model with the best accuracy. We use the default setting for the one-shot NAS algorithms (DARTS and GASSO), as they only train a super-model once and can efficiently evaluate different architectures. We are mostly interested in studying the few-trial regime, where most advanced search algorithms degrade to random search. Thus we additionally include a random search (3 trials) baseline, where we pick the best model out of only 3 trials. ### Experiments on Search Efficiency We evaluate AutoTransfer in Table 1 by reporting, over ten runs of each algorithm, the average best test accuracy among all trials considered. The test accuracy collected for each trial is selected at the epoch with the best validation accuracy. By comparing the results of random search (3 trials) and AutoTransfer (3 trials), we show that our transferred task-informed design prior significantly improves test accuracy in the few-trial regime and can be very useful in computationally constrained environments. Even if we increase the number of search trials to 30, AutoTransfer still demonstrates a non-trivial improvement over TPE, indicating that our proposed pipeline has advantages even when computational resources are abundant. Notably, with only 3 search trials, AutoTransfer surpasses most of the baselines, even those that use 30 trials. To better understand the sample efficiency of AutoTransfer, we plot the best test accuracy found at each trial in Figure 3 for the OGB-Arxiv and TU-PROTEINS datasets. We notice that the advanced search algorithms (Evolution and TPE) have no advantage over random search in the few-trial regime, since the amount of prior search data is not yet sufficient to infer potentially better design configurations. 
On the contrary, by sampling from the transferred design prior, AutoTransfer achieves significantly better average test accuracy in the first few trials. The best test accuracy at trial 3 of AutoTransfer surpasses its counterpart at trial 10 for every other method. \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{Node} & \multicolumn{3}{c}{Graph} \\ Method & Physics & CoraFull & OGB-Arxiv & COX2 & IMDB-M & PROTEINS \\ \hline GCN (30 trials) & 95.88\(\pm\)0.16 & 67.12\(\pm\)0.52 & 70.46\(\pm\)0.18 & 79.23\(\pm\)2.19 & 50.40\(\pm\)3.02 & 74.84\(\pm\)2.82 \\ GAT (30 trials) & 95.71\(\pm\)0.24 & 65.92\(\pm\)0.68 & 68.82\(\pm\)0.32 & 81.56\(\pm\)4.17 & 49.67\(\pm\)4.30 & 75.30\(\pm\)3.72 \\ GraphNAS (30 trials) & 92.77\(\pm\)0.84 & 63.13\(\pm\)3.28 & 65.90\(\pm\)2.64 & 77.73\(\pm\)4.04 & 46.93\(\pm\)3.94 & 72.51\(\pm\)3.36 \\ DARTS & 95.28\(\pm\)1.67 & 65.79\(\pm\)2.85 & 69.02\(\pm\)1.18 & 79.82\(\pm\)3.15 & 50.26\(\pm\)4.08 & 75.04\(\pm\)3.81 \\ GASSO & 96.38 & 68.89 & 70.52 & - & - & - \\ \hline Random (3 trials) & 95.16\(\pm\)0.55 & 61.24\(\pm\)4.04 & 67.92\(\pm\)1.92 & 76.88\(\pm\)3.17 & 45.79\(\pm\)4.39 & 72.47\(\pm\)2.57 \\ TPE (30 trials) & 96.41\(\pm\)0.36 & 66.37\(\pm\)1.73 & 71.35\(\pm\)4.04 & 82.27\(\pm\)2.00 & 50.33\(\pm\)4.00 & 79.46\(\pm\)1.28 \\ HyperBand (30 trials) & 96.56\(\pm\)0.30 & 67.75\(\pm\)1.24 & 71.60\(\pm\)0.36 & 82.21\(\pm\)1.79 & 50.86\(\pm\)3.45 & 79.32\(\pm\)1.16 \\ \hline \hline **AutoTransfer (3 trials)** & 96.64\(\pm\)0.42 & 69.27\(\pm\)0.76 & 71.24\(\pm\)0.39 & 82.13\(\pm\)1.59 & 52.33\(\pm\)2.13 & 77.81\(\pm\)2.19 \\ **AutoTransfer (30 trials)** & 96.91\(\pm\)0.27 & 70.05\(\pm\)0.42 & 72.21\(\pm\)0.27 & 86.52\(\pm\)1.58 & 54.93\(\pm\)1.23 & 81.25\(\pm\)1.17 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparisons of AutoTransfer and other baselines. We report the average test accuracy and the standard deviation over ten runs. With only 3 trials, AutoTransfer already outperforms most SOTA baselines that use 30 trials. Figure 3: Performance comparisons in the few-trial regime. At trial \(t\), we plot the best test accuracy among all models searched from trial \(1\) to trial \(t\). AutoTransfer can reduce the number of search trials needed by an order of magnitude (see also Table 4 in the Appendix). ### Analysis of Task Embeddings **Qualitative analysis of task features.** To examine the quality of the proposed task features, we visualize the proposed task similarity matrix (Figure 4 (b)) along with the task similarity matrix (Figure 4 (a)) proposed in GraphGym. We show that our proposed task similarity matrix captures patterns similar to GraphGym's task similarity matrix while being computed much more efficiently, by omitting training. We notice that tasks of the same type, _i.e._, node classification and graph classification, share more similarities within each group. As a sanity check, we verified that the closest task in the bank to CoraFull is Cora. The top 3 closest tasks for OGB-Arxiv are AmazonComputers, AmazonPhoto, and CoauthorPhysics, all of which are node classification tasks. **Generalization of projection function \(g(\cdot)\).** To show that the proposed projection function \(g(\cdot)\) can generate task embeddings that generalize to novel tasks, we conduct leave-one-out cross-validation with all tasks in our task-model bank. 
Concretely, for each task considered as a novel task \(T^{(n)}\), we use the rest of the tasks, along with their distances \(d_{g}(\cdot,\cdot)\) estimated by GraphGym's exact but computationally expensive metric space, to train the projection function \(g(\cdot)\). We calculate the Kendall rank correlation over task similarities for the Task Feature (without \(g(\cdot)\)) and the Task Embedding (with \(g(\cdot)\)) against the exact task similarities. The average rank correlation and the standard deviation over ten runs are shown in Figure 4 (c). We find that with the proposed \(g(\cdot)\), our task embeddings indeed correlate better with the exact task similarities and therefore generalize better to novel tasks. **Ablation study on alternative task space design.** To demonstrate the superiority of the proposed task embedding, we further compare it with alternative task features. Following prior work (Yang et al., 2019), we use the normalized losses over the first 10 steps as the task feature. The results on OGB-Arxiv are shown in Table 2. Compared to AutoTransfer's task embedding, the task feature induced by normalized losses has a lower ranking correlation with the exact metric and yields worse performance. Table 2 further justifies the efficacy of using the Kendall rank correlation as the metric for task embedding quality, as a higher Kendall rank correlation leads to better performance. \begin{table} \begin{tabular}{c|c c} \hline \hline & Kendall rank correlation & Test accuracy \\ \hline Alternative: Normalized Loss & -0.07\(\pm\)0.43 & 68.13\(\pm\)1.27 \\ AutoTransfer’s Task Feature & 0.18\(\pm\)0.30 & 70.67\(\pm\)0.52 \\ AutoTransfer’s Task Embedding & 0.43\(\pm\)0.22 & 71.42\(\pm\)0.39 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on the alternative task space design versus AutoTransfer’s task embedding. We report the average test accuracy and the standard deviation on OGB-Arxiv over ten runs. Figure 4: (a) GraphGym’s task similarity between all pairs of tasks (computed from the Kendall rank correlation between performance rankings of models trained on the two compared tasks); a higher value represents a higher similarity. (b) The proposed task similarity, computed via the dot product between extracted task features. (c) The Kendall rank correlation, between the proposed method and GraphGym, of the similarity rankings of the other tasks with respect to the central task. ## 6 Conclusion In this paper, we study how to improve AutoML search efficiency by transferring existing architectural design knowledge to novel tasks of interest. We introduce a _task-model bank_ that captures the performance over a diverse set of GNN architectures and tasks. We also introduce a computationally efficient _task embedding_ that can accurately measure the similarity between different tasks. We release GNN-Bank-101, a large-scale database that records detailed GNN training information of 120,000 task-model combinations. We hope this work can facilitate and inspire future research in efficient AutoML to make deep learning more accessible to a general audience. ## Acknowledgements We thank Xiang Lisa Li, Hongyu Ren, Yingxin Wu for discussions and for providing feedback on our manuscript. We also gratefully acknowledge the support of DARPA under Nos. HR00112190039 (TAMI), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), NIH under No. 
3U54HG010426-04S1 (HuBMAP), Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Amazon, Docomo, GSK, Hitachi, Intel, JPMorgan Chase, Juniper Networks, KDDI, NEC, and Toshiba. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding entities.
2308.09250
Capacity Bounds for Hyperbolic Neural Network Representations of Latent Tree Structures
We study the representation capacity of deep hyperbolic neural networks (HNNs) with a ReLU activation function. We establish the first proof that HNNs can $\varepsilon$-isometrically embed any finite weighted tree into a hyperbolic space of dimension $d$ at least equal to $2$ with prescribed sectional curvature $\kappa<0$, for any $\varepsilon> 1$ (with $\varepsilon=1$ being optimal). We establish rigorous upper bounds on the network complexity of an HNN implementing the embedding. We find that the network complexity of the HNN implementing the graph representation is independent of the representation fidelity/distortion. We contrast this result against our lower bounds on the distortion which any ReLU multi-layer perceptron (MLP) must exert when embedding a tree with $L>2^d$ leaves into a $d$-dimensional Euclidean space, which we show is at least $\Omega(L^{1/d})$, independently of the depth, width, and (possibly discontinuous) activation function defining the MLP.
Anastasis Kratsios, Ruiyang Hong, Haitz Sáez de Ocáriz Borde
2023-08-18T02:24:32Z
http://arxiv.org/abs/2308.09250v1
# Capacity Bounds for Hyperbolic Neural Network Representations of Latent Tree Structures ###### Abstract We study the representation capacity of deep hyperbolic neural networks (HNNs) with a ReLU activation function. We establish the first proof that HNNs can \(\varepsilon\)-isometrically embed any finite weighted tree into a hyperbolic space of dimension \(d\) at least equal to \(2\) with prescribed sectional curvature \(\kappa<0\), for any \(\varepsilon>1\) (with \(\varepsilon=1\) being optimal). We establish rigorous upper bounds on the network complexity of an HNN implementing the embedding. We find that the network complexity of the HNN implementing the graph representation is independent of the representation fidelity/distortion. We contrast this result against our lower bounds on the distortion which any ReLU multi-layer perceptron (MLP) must exert when embedding a tree with \(L>2^{d}\) leaves into a \(d\)-dimensional Euclidean space, which we show is at least \(\Omega(L^{1/d})\), independently of the depth, width, and (possibly discontinuous) activation function defining the MLP. **Keywords:** Generalization Bounds, Graph Neural Networks, Digital Hardware, Discrete Geometry, Metric Embeddings, Discrete Optimal Transport, Concentration of Measure. **MSC Classification:** 68T07, 30L05, 68R12, 05C05. ## 1 Introduction Trees are one of the most important hierarchical data structures in computer science, whose structure can be exploited to yield highly efficient algorithms. For example, branch-and-bound algorithms Land and Doig (2010) and depth-first searches Korf (1985) leverage the tree's hierarchical structure to maximize parameter search efficiency. Consequently, algorithms designed for trees, and algorithms which map more general structures into trees, e.g. Fuchs et al. (1980), have become a cornerstone of computer science and its related areas. Nevertheless, it is known that the flat Euclidean geometry of \(\mathbb{R}^{d}\) is fundamentally different from the expansive geometry of trees, which makes trees difficult to embed into low-dimensional Euclidean space with low distortion Bourgain (1986); Matousek (1999); Gupta (2000). This high distortion can be problematic for downstream tasks relying on Euclidean representations of such trees. This fundamental impasse in representing large trees with Euclidean space has sparked the search for non-Euclidean representation spaces whose geometry is tree-like, thus allowing for low-dimensional representations of arbitrary trees. One such family of representation spaces is the hyperbolic spaces \(\mathbb{H}^{d}\), \(d\geq 2\), which have recently gained traction in machine learning. Leveraging hyperbolic representations for data with a (latent) tree-like structure has proven significantly more effective than traditional Euclidean counterparts. Machine learning examples include learning linguistic hierarchies Nikiel (1989), natural language processing Ganea et al. (2018); Zhu et al. (2020), recommender systems Vinh Tran et al. (2020); Skopek et al. (2020), low-dimensional representations of large tree-like graphs Ganea et al. (2018); Law and Stam (2020); Kochurov et al. (2020); Zhu et al. (2020); Bachmann et al. (2020); Sonthalia and Gilbert (2020), knowledge graph representations Chami et al. (2020), network science Papadopoulos et al. (2014); Keller-Ressel and Nargang (2020), communication Kleinberg (2007), deep reinforcement learning Cetin et al. 
(2023), and numerous other recent applications. These results have motivated deep learning on hyperbolic spaces, of which the hyperbolic neural networks (HNNs) Ganea et al. (2018), and their several variants Gulcehre et al. (2018); Chami et al. (2019); Shimizu et al. (2021); Zhang et al. (2021), have assumed the role of the flagship deep learning model. This has led HNNs to become integral to several deep learning-powered hyperbolic learning algorithms and has also fuelled applications ranging from natural language processing Ganea et al. (2018); Dhingra et al. (2018); Tay et al. (2018); Liu et al. (2019); Zhu et al. (2020), to latent graph inference for downstream graph neural network (GNN) optimization Kazi et al. (2022); de Ocariz Borde et al. (2022). Furthermore, the simple structure of HNNs makes them amenable to mathematical analysis, similarly to multi-layer perceptrons (MLPs) in classical deep learning. This has led to the establishment of their approximation capabilities in Kratsios and Bilokopytov (2020); Kratsios and Papon (2022). Figure 1.1: Minimal length curves in the hyperbolic space (right) expand outwards exponentially quickly, just as the number of nodes doubles exponentially rapidly in a tree (left) as one travels away from the origin/root. The central motivation behind HNNs is that they are believed to be better suited to representing data with a latent tree-like, or hierarchical, structure than their classical \(\mathbb{R}^{n}\)-valued counterparts, e.g. MLPs, CNNs, GNNs, or Transformers, since the geometry of hyperbolic space \(\mathbb{H}^{d}\), \(d\geq 2\), is more similar to the geometry of trees than classical Euclidean space is; see Figure 1.1. These intuitions are often fueled by classical embedding results in computer science Sarkar (2011), metric embedding theory Bonk and Schramm (2000), and the undeniable success of countless algorithms leveraging hyperbolic geometry Papadopoulos et al. (2012, 2014); Nickel and Kiela (2017); Balazevic et al. (2019); Sonthalia and Gilbert (2020); Keller-Ressel and Nargang (2020) for representation learning. Nevertheless, the representation potential of HNNs for data with latent hierarchies currently rests only on strong experimental evidence and expert intuition rooted in deep results from hyperbolic geometry Gromov (1981, 1987); Bonk and Schramm (2000). In this paper, we examine the problem of Euclidean-vs-hyperbolic representation learning when a latent hierarchy structures the data. We justify this common belief by first showing that HNNs can \(\varepsilon\)-isometrically embed any pointcloud with a latent weighted tree structure, for any \(\varepsilon>0\). In contrast, such an embedding cannot exist in any Euclidean space. We show that the HNNs implementing these \(\varepsilon\)-embeddings are relatively small by deriving upper bounds on their depth, width, and number of trainable parameters sufficient for achieving any desired representation capacity. We find that HNNs only require \(\widetilde{\mathcal{O}}(N^{2})\) trainable parameters to embed any \(N\)-point pointcloud in \(n\)-dimensional Euclidean space with a latent tree structure into the \(2\)-dimensional hyperbolic plane. We then return to the problem of Euclidean-vs-hyperbolic representation under a latent hierarchical structure by proving that MLPs cannot faithfully embed such pointclouds into a low-dimensional Euclidean space, thus proving that HNNs are superior to MLPs for representing tree-like structures. 
We do so by showing that any MLP, regardless of its depth, width, or number of trainable parameters, cannot embed a pointcloud with a latent tree structure, with \(L>2^{d}\) leaves, into the \(d\)-dimensional Euclidean space with distortion less than \(\Omega(L^{1/d})\). We consider the distortion of an embedding as in the classical computer science literature, e.g. Linial et al. (1995); Bartal (1996); Gupta (2000); a formal definition will be given in the main text. **Outline.** The rest of this paper is organized as follows. Section 2 introduces the necessary terminology for hyperbolic neural networks, such as the geometry of hyperbolic spaces and the formal structure of HNNs. Section 3 formalizes latent tree structures and then numerically formalizes the representation learning problem. Our main graph representation learning results are given in Section 4. In Section 4.1, we first derive lower bounds for the best possible distortion achievable by an MLP representation of a latent tree. We show that MLPs cannot embed any large tree in a small Euclidean space, irrespective of the number and width of the hidden layers the network uses and irrespective of which, possibly discontinuous, non-linearity is used to define the MLP. In Section 4.2, we show that HNNs can represent any pointcloud with a latent tree structure to arbitrary precision. Furthermore, the depth, width, and number of trainable parameters defining the HNN are independent of the representation fidelity/distortion. Our theory is validated experimentally in Section 4.3. The analysis and proofs of our main results are contained in Section 5. We draw our conclusions in Section 6. ## 2 The Hyperbolic Neural Network Model Throughout this paper, we consider hyperbolic neural networks (HNNs) with the \(\operatorname{ReLU}(t)\stackrel{{\text{\tiny def}}}{{=}}\max\{0,t\}\) (Rectified Linear Unit) activation function/non-linearity, mapping into hyperbolic representation spaces \(\mathbb{H}_{\kappa}^{d}\). This section rigorously introduces the HNN architecture, which requires a brief overview of hyperbolic spaces; we begin with the latter. ### The Geometry of the Hyperbolic Spaces \(\mathbb{H}_{\kappa}^{d}\) Fix a positive integer \(d\) and a _(sectional) curvature parameter_\(\kappa<0\). The (hyperboloid model for the real) hyperbolic \(d\)-space of constant sectional curvature \(\kappa\), denoted by \(\mathbb{H}_{\kappa}^{d}\), consists of all points \(x\in\mathbb{R}^{1+d}\) satisfying \[1+\sum_{i=1}^{d}\,x_{i}^{2}=x_{d+1}^{2}\text{ and }x_{d+1}>0,\] where the distance between any pair of points \(x,y\in\mathbb{H}_{\kappa}^{d}\) is given by \[d_{\kappa}(x,y)\stackrel{{\text{\tiny def.}}}{{=}}\frac{1}{ \sqrt{|\kappa|}}\,\cosh^{-1}\Big{(}x_{d+1}y_{d+1}-\sum_{i=1}^{d}\,x_{i}y_{i}\Big{)}.\] It can be shown that \(\mathbb{H}_{\kappa}^{d}\) is a simply connected2 smooth manifold (Bridson and Haefliger, 1999, pages 92-93) and that the metric \(d_{\kappa}\) on any such \(\mathbb{H}_{\kappa}^{d}\) measures the length of the shortest curve joining any two points on \(\mathbb{H}_{\kappa}^{d}\), where length is quantified in the infinitesimal Riemannian sense (Bridson and Haefliger, 1999, Propositions 6.17 (1) and 6.18). This means that, by the Cartan-Hadamard Theorem (Jost, 2017, Corollary 6.9.1), for every \(x\in\mathbb{H}_{\kappa}^{d}\) there is a map \(\exp_{x}:T_{x}(\mathbb{H}_{\kappa}^{d})\cong\mathbb{R}^{d}\to\mathbb{H}_{ \kappa}^{d}\) which puts \(\mathbb{R}^{d}\) into bijection with \(\mathbb{H}^{d}\) in a smooth manner with smooth inverse. 
Footnote 2: See (Jost, 2017, Section 6.4). **Remark 1**: _Often the metric \(d_{\kappa}\) is not relevant for a given statement, and only the manifold structure of \(\mathbb{H}_{\kappa}^{d}\) matters, or the choice of \(\kappa\) is clear from the context. In these instances, we write \(\mathbb{H}^{d}\) in place of \(\mathbb{H}_{\kappa}^{d}\) to keep our notation light._ For any point \(x\in\mathbb{H}^{d}\), we identify \(T_{x}(\mathbb{H}^{d})\) with the \(d\)-dimensional affine subspace of \(\mathbb{R}^{d+1}\) lying tangent to \(\mathbb{H}^{d}\) at \(x\); concretely, \(T_{x}(\mathbb{H}^{d})\) consists of all \(y\in\mathbb{R}^{d+1}\) satisfying \[y_{d+1}x_{d+1}=\sum_{i=1}^{d}\,y_{i}x_{i}.\] The tangent space at \(\mathbf{1}_{n}\) plays an especially important role since its elements can be conveniently identified with \(\mathbb{R}^{n}\). This is because \(x\in T_{\mathbf{1}_{n}}(\mathbb{H}^{n})\) if and only if it is of the form \(x=(x_{1},\ldots,x_{n},1)\), for some \(x_{1},\ldots,x_{n}\in\mathbb{R}\). Thus, \[(x_{1},\ldots,x_{n},1)\stackrel{{\pi_{n}}}{{\to}}(x_{1},\ldots,x_ {n})\text{ and }(x_{1},\ldots,x_{n})\stackrel{{\iota_{n}}}{{\to}}(x_{1}, \ldots,x_{n},1) \tag{1}\] identify \(T_{\mathbf{1}_{n}}(\mathbb{H}^{n})\) with \(\mathbb{R}^{n}\). The map \(\exp_{x}\) can be explicitly described as the map which sends any "initial velocity vector" \(v\in\mathbb{R}^{d}\) lying tangent to \(x\in\mathbb{H}^{d}\) to the unique point in \(\mathbb{H}^{d}\) which one would arrive at by travelling optimally thereon. Here, optimally means along the unique minimal length curve in \(\mathbb{H}^{d}_{-1}\), illustrated in Figure 1.1b4. The (affine) tangent spaces \(T_{x}(\mathbb{H}^{d})\) and \(T_{y}(\mathbb{H}^{d})\) about any two points \(x\) and \(y\) in \(\mathbb{H}^{d}\) are identified by "sliding" \(T_{x}(\mathbb{H}^{d})\) towards \(T_{y}(\mathbb{H}^{d})\) in parallel across the unique minimal length curve joining \(x\) to \(y\) in \(\mathbb{H}^{d}\). This "sliding" operation, called _parallel transport_, is formalized by the linear isomorphism5\(P_{x\mapsto b}:T_{x}(\mathbb{H}^{n})\to T_{b}(\mathbb{H}^{n})\) given for any \(u\in T_{x}(\mathbb{H}^{n})\) by6 Footnote 4: In the Euclidean space \(\mathbb{R}^{d}\) these are simply straight lines. Footnote 5: In general, parallel transport is path dependent. However, since we only consider minimal length (geodesic) curves joining points on \(\mathbb{H}^{d}_{\kappa}\), and there is only one such choice by the Cartan-Hadamard theorem, there is no ambiguity in the notation/terminology in our case. Footnote 6: In Kratsios and Papon (2022), the authors note that the identifications of \(T_{c}(\mathbb{H}^{m})\) with \(\mathbb{R}^{m}\) are all made implicitly. However, here, we underscore each component of the HNN pipeline by making each identification processed by any computer completely explicit. \[P_{x\mapsto b}:\,u\mapsto u-\frac{\langle\log_{x}(b),u\rangle_{x}}{d_{-1}^{2}( x,b)}\,\big{(}\log_{x}(b)+\log_{b}(x)\big{)}, \tag{2}\] where the map \(\log_{x}:\mathbb{H}^{d}\to T_{x}(\mathbb{H}^{d})\) is defined for any \(y\in\mathbb{H}^{d}\) by \[\log_{x}:y\mapsto\frac{d_{-1}(x,y)\,\big{(}y+\langle x|y\rangle_{M}\,x\big{)}}{\|y+\langle x|y \rangle_{M}\,x\|_{M}},\] where \(\|u\|_{M}\stackrel{{\text{\tiny def.}}}{{=}}\sqrt{\langle u|u\rangle_{M}}\) and \(\langle\cdot|\cdot\rangle_{M}\) is the Minkowski inner product defined below. Typically parallel transport must be approximated numerically, e.g. Guigui and Pennec (2022), but this is not so for \(\mathbb{H}^{d}_{-1}\). 
The hyperbolic space is particularly convenient, amongst negatively curved Riemannian manifolds, since the map \(\exp_{x}\) is available in closed form. For any \(x\in\mathbb{H}^{d}_{\kappa}\) and \(v\in\mathbb{R}^{d}\), \(\exp_{x}(v)\) is given by \(\exp_{x}(v)=x\) if \(v=0\), and otherwise \[\exp_{x}:v\mapsto\cosh\big{(}\sqrt{\langle v|v\rangle_{M}}\big{)}\,x+\sinh\big{(} \sqrt{\langle v|v\rangle_{M}}\big{)}\,\frac{v}{\sqrt{\langle v|v\rangle_{M}}},\] where \(\langle u|v\rangle_{M}\stackrel{{\text{\tiny def.}}}{{=}}-u_{d+1} v_{d+1}+\sum_{i=1}^{d}u_{i}v_{i}\) for any \(u,v\in\mathbb{R}^{d+1}\); see (Bridson and Haefliger, 1999, page 94). Furthermore, the inverse of \(\exp_{x}\) is \(\log_{x}\). We note that these operations are implemented in most standard geometric machine learning software, e.g. Boumal et al. (2014); Townsend et al. (2016); Miolane et al. (2020). Also, \(\mathbb{H}^{d}_{\kappa}\) and \(\mathbb{H}^{d}_{-1}\) are diffeomorphic by the Cartan-Hadamard Theorem (Jost, 2017, Corollary 6.9.1). Therefore, for all \(\kappa<0\), it suffices to consider the "standard" exponential map \(\exp_{x}\) in the particular case where \(\kappa=-1\) to encode points of \(\mathbb{R}^{d}\) into \(\mathbb{H}^{d}\), and vice versa via \(\log_{x}\). ### The Hyperbolic Neural Network Model We now overview the hyperbolic neural network model studied in this paper. The considered HNN model contains, as sub-models, the hyperbolic neural networks of Ganea et al. (2018) as well as those from the deep approximation theory Kratsios and Papon (2022) and latent graph inference de Ocariz Borde et al. (2022) literatures. The workflow of the hyperbolic neural network's layer, by which it processes data on the hyperbolic space \(\mathbb{H}^{n}\), is summarized in Figure 2.2. HNNs function similarly to standard MLPs, which generate predictions from any input by sequentially applying affine maps interspersed with non-affine non-linearities, typically via component-wise activation functions. Instead of leveraging affine maps, which are suited to the vectorial geometry of \(\mathbb{R}^{d}\), HNNs are built using analogues of affine maps which are suited to the geometry of \(\mathbb{H}^{d}\). The analogues of the linear layers with component-wise \(\mathrm{ReLU}(t)\stackrel{{\mathrm{def}}}{{=}}\max\{0,t\}\) activation function, mapping \(\mathbb{H}^{n}\) to \(\mathbb{H}^{m}\) for \(n,m\in\mathbb{N}_{+}\), are thus given by \[x\mapsto\exp_{\mathbf{1}_{m}}(\mathrm{ReLU}\bullet(A\log_{\mathbf{1}_{n}}(x))) \tag{3}\] where \(\bullet\) denotes component-wise composition, \(A\) is an \(m\times n\) matrix, and7 \(\mathbf{1}_{n}\in\mathbb{H}^{n}\). Thus, as discussed in (Ganea et al., 2018, Theorem 4 and Lemma 6), without the "hyperbolic bias" term, the elementary layers making up HNNs can be viewed as elementary MLP layers conjugated by the maps \(\exp_{\mathbf{1}_{m}}\) and \(\log_{\mathbf{1}_{n}}\). In this case, \(\log_{\mathbf{1}_{n}}\) serves simply to decode the hyperbolic features into vectorial data, which can be processed by standard software, while \(\exp_{\mathbf{1}_{m}}\) re-encodes the outputs as hyperbolic features. Footnote 7: Here we identify the distinguished “origin” point 0 in the Poincaré disc model for \(\mathbb{H}^{m}_{\kappa}\) used in (Ganea et al., 2018, Definition 3.2) with its corresponding point in the hyperboloid model for \(\mathbb{H}^{m}_{\kappa}\) using the isometry between these two spaces given on (Bridson and Haefliger, 1999, page 86). 
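Before turning to bias terms, the following minimal NumPy sketch illustrates the closed-form operations above on the hyperboloid model with \(\kappa=-1\): the Minkowski inner product, \(\exp_{x}\), \(\log_{x}\), and the bias-free layer of (3). It follows the formulas of Section 2.1; all function names and toy values are illustrative, not a reference implementation.

```python
# A minimal NumPy sketch of the hyperboloid model H^d (kappa = -1).
import numpy as np

def mink(u, v):
    """Minkowski inner product <u|v>_M = -u_{d+1} v_{d+1} + sum_i u_i v_i."""
    return u[:-1] @ v[:-1] - u[-1] * v[-1]

def exp_map(x, v, eps=1e-12):
    """exp_x(v): send a tangent vector v at x in H^d to a point on H^d."""
    nrm = np.sqrt(max(mink(v, v), 0.0))
    if nrm < eps:
        return x
    return np.cosh(nrm) * x + np.sinh(nrm) * v / nrm

def log_map(x, y, eps=1e-12):
    """log_x(y): inverse of exp_x; returns a tangent vector at x."""
    u = y + mink(x, y) * x                  # projection of y onto T_x(H^d)
    d = np.arccosh(max(-mink(x, y), 1.0))   # hyperbolic distance, kappa = -1
    unorm = np.sqrt(max(mink(u, u), 0.0))
    return d * u / unorm if unorm > eps else np.zeros_like(x)

def base(n):
    """The distinguished point 1_n = (0, ..., 0, 1) in H^n."""
    e = np.zeros(n + 1); e[-1] = 1.0
    return e

def hyperbolic_layer(A, x):
    """Bias-free HNN layer of (3): exp_{1_m}(ReLU(A log_{1_n}(x)))."""
    n, m = A.shape[1], A.shape[0]
    v = log_map(base(n), x)[:-1]            # decode to R^n (drop last coord)
    h = np.maximum(A @ v, 0.0)              # ReLU(A v) in R^m
    return exp_map(base(m), np.append(h, 0.0))  # lift to T_{1_m}, re-encode

# Toy usage: push a point of H^2 through a layer mapping H^2 -> H^3.
rng = np.random.default_rng(0)
x = exp_map(base(2), np.array([0.3, -0.5, 0.0]))
print(hyperbolic_layer(rng.normal(size=(3, 2)), x))
```

Note that tangent vectors at \(\mathbf{1}_{n}\) have vanishing last coordinate, so the slicing above is exactly the identification \(\pi_{n}\)/\(\iota_{n}\) of (1).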
The analogues of the addition of a bias term were initially formalized in Ganea et al. (2018) using the so-called gyro-vector addition and multiplication operators; see Vermeer (2005). This roughly states that a bias \(b\in\mathbb{H}^{n}\) can be added to any \(x\in\mathbb{H}^{n}\) by "shifting by" \(b\) along minimal length curves in \(\mathbb{H}^{n}\) using \(\exp_{b}\). Informally, \[y\mapsto\exp_{b}(P_{\mathbf{1}_{n}\mapsto b}\circ\log_{\mathbf{1}_{n}}(y)) \tag{4}\] where \(P_{\mathbf{1}_{n}\mapsto b}:T_{\mathbf{1}_{n}}(\mathbb{H}^{n})\to T_{b}( \mathbb{H}^{n})\) linearly identifies the tangent space at \(\mathbf{1}_{n}\stackrel{{\text{\tiny def.}}}{{=}}(0,\ldots,0,1)\) with that at \(b\) by travelling along the unique minimal length curve in \(\mathbb{H}^{n}_{-1}\), defined by \(d_{-1}\), connecting \(\mathbf{1}_{n}\) to \(b\). The interpretation of (4) as the addition of a bias term dates back at least to early developments in geometric machine learning in Pennec (2006); Meyer et al. (2011); Fletcher (2013). The basic idea is that in Euclidean space, the analogues of \(\exp_{x}\) and \(\log_{x}\) are simply addition and subtraction, respectively. Figure 2.2: **Workflow of an HNN layer:** First, inputs in a hyperbolic space \(\mathbb{H}^{n}\) are mapped to a vector in \(\mathbb{R}^{n}\) in the “encoding phase”. Next, they are transformed to “deep features” in \(\mathbb{R}^{m}\) by a standard MLP layer in the “transformation phase”. Finally, the “hyperbolic decoding phase” applies a hyperbolic bias, at which point the “deep features with hyperbolic bias” are decoded, thus producing an output in the hyperbolic space \(\mathbb{H}^{m}\). Here, we consider a generalization of the elementary HNN layers of Ganea et al. (2018), used in (Kratsios and Papon, 2022, Corollaries 23 and 24) to construct universal deep learning models capable of approximating continuous functions8 between any two hyperbolic spaces. Footnote 8: Not interpolators. The key difference is that they, and we, also allow for a "Euclidean bias" to be added together with the hyperbolic bias computed by (4). Similar HNN layers are also used in the contemporary latent graph inference literature de Ocariz Borde et al. (2022); Kazi et al. (2022). Incorporating this Euclidean bias with the elementary layers (3), the hyperbolic bias translations (4), and the formal identifications (1), we obtain our elementary _hyperbolic layers_ \(\mathcal{L}_{a,b,c,A}:\mathbb{H}^{n}\rightarrow\mathbb{H}^{m}\) given by \[\mathcal{L}_{a,b,c,A}:x\mapsto\exp_{c}\biggl{(}\overline{\oplus}^{c}\Bigl{(} \operatorname{ReLU}\bullet\bigl{(}A\operatorname{\underline{\oplus}}_{a}\circ \log_{a}(x)+b\bigr{)}\Bigr{)}\biggr{)} \tag{5}\] where the _weight matrix_ \(A\) is an \(m\times n\) matrix, the _Euclidean bias_ is \(b\in\mathbb{R}^{m}\), and the _hyperbolic biases_ \(a\in\mathbb{H}^{n}\) and \(c\in\mathbb{H}^{m}\) are incorporated via the maps defined by9 Footnote 9: The notation \(\overline{\oplus}^{c}\) and \(\operatorname{\underline{\oplus}}_{a}\) is intentionally similar to the gyrovector-based “hyperbolic bias translation” operation in (Ganea et al., 2018, Equation (28)) to emphasize the similarity between these operations. 
\[\overline{\oplus}^{c}\stackrel{\text{\tiny def.}}{=}P_{\mathbf{1}_{m}\mapsto c}\circ\iota_{m}:\mathbb{R}^{m}\to T_{c}(\mathbb{H}^{m})\] \[\underline{\oplus}_{a}\stackrel{\text{\tiny def.}}{=}\pi_{n}\circ P_{a\mapsto\mathbf{1}_{n}}:T_{a}(\mathbb{H}^{n})\rightarrow\mathbb{R}^{n}.\] The number of _trainable parameters_ defining any hyperbolic layer \(\mathcal{L}_{a,b,c,A}\) is \[\operatorname{Par}(\mathcal{L}_{a,b,c,A})\stackrel{\text{\tiny def.}}{=}\|A\|_{0}+\|a\|_{0}+\|b\|_{0}+\|c\|_{0} \tag{6}\] where \(\|\cdot\|_{0}\) counts the number of non-zero entries in a matrix or vector. We work with data represented as vectors in \(\mathbb{R}^{n}\) with a latent tree structure. Similarly to de Ocariz Borde et al. (2022); Kazi et al. (2022), these can be _encoded_ as hyperbolic features, making them compatible with standard HNN pipelines, using \(\exp_{\mathbf{1}_{n}}\) as a _feature map_. Any such feature map is regular in that it preserves the approximation capabilities of any downstream deep learning model; see (Kratsios and Bilokopytov, 2020, Corollary 3.16) for details.

**Definition 2** (Hyperbolic Neural Networks): _Let \(n,d\in\mathbb{N}_{+}\). A function \(f:\mathbb{R}^{n}\to\mathbb{H}^{d}\) is called a hyperbolic neural network (HNN) if it admits the iterative representation: for any \(x\in\mathbb{R}^{n}\)_ \[f(x)=\exp_{c^{(I+1)}}\Big(\overline{\oplus}^{c^{(I+1)}}\big(A^{(I+1)}\,(\underline{\oplus}_{c^{(I)}}\circ\log_{c^{(I)}}(x^{(I)}))+b^{(I+1)}\big)\Big)\] \[x^{(i)}=\mathcal{L}_{c^{(i-1)},b^{(i)},c^{(i)},A^{(i)}}(x^{(i-1)})\quad\text{for }i=1,\ldots,I\] \[x^{(0)}=\exp_{c^{(0)}}(\overline{\oplus}^{c^{(0)}}\,x)\] _where \(I\in\mathbb{N}_{+}\), \(n=d_{0},\ldots,d_{I+2}=d\in\mathbb{N}_{+}\), and for \(i=1,\ldots,I+1\), \(A^{(i)}\) is a \(d_{i+1}\times d_{i}\) matrix, \(b^{(i)}\in\mathbb{R}^{d_{i+1}}\), \(c^{(i)}\in\mathbb{H}^{d_{i+1}}\subset\mathbb{R}^{d_{i+1}+1}\), and \(c^{(0)}\in\mathbb{H}^{n}\subset\mathbb{R}^{n+1}\)._

In the notation of Definition 2, the integer \(I+1\) is called the _depth_ of the HNN \(f\) and the _width_ of \(f\) is \(\max_{i=0,\ldots,I+1}d_{i}\). Similarly to (6), the total number of _trainable parameters_ defining the HNN \(f\), denoted by \(\operatorname{Par}(f)\), is tallied by \[\operatorname{Par}(f)\stackrel{\text{\tiny def.}}{=}\|c^{(0)}\|_{0}+\sum_{i=1}^{I+1}\,\|A^{(i)}\|_{0}+\|b^{(i)}\|_{0}+\|c^{(i)}\|_{0}.\] Note that, since the hyperbolic bias \(c^{(i)}\) is shared between any two subsequent layers, in the notation of (6) \(a=c^{(i-1)}\) and \(c=c^{(i)}\) for any \(i=1,\ldots,I+1\); thus, we do not _double count_ these parameters.

## 3 Representation Learning with Latent Tree Structures

We now formally define latent tree structures. These capture the actual hierarchical structures between points in a pointcloud. We then formalize what it means to represent those latent tree structures in an ideally low-dimensional representation space with little distortion. During this formalization process we will recall some key terminologies pertaining to trees.

### Latent Tree Structure

We now formalize the notion of a _latent tree structure_ between members of a pointcloud in a vector space \(\mathbb{R}^{n}\). We draw from ideas in clustering, where the relationship between pairs of points is not reflected by their configuration in Euclidean space but rather through some unobservable latent distance/structure.
Standard examples from the clustering literature include the Mahalanobis distance Xiang et al. (2008), the Minkowski distance or \(\ell^{\infty}\) distances Singh et al. (2013), which are implemented in standard software Achtert et al. (2008); De Smedt and Daelemans (2012), and many others; e.g. Ye et al. (2017); Huang et al. (2023); Grande and Schaub (2023). In the graph neural network literature, the relationship between pairs of points is quantified graphically. In the case of latent trees, the relationships between points are induced by a weighted tree graph describing a simple relational structure present in the dataset. The presence of an edge between any two points indicates a direct relationship between the two nodes, and the weight of any such edge quantifies the strength of the relationship between the two connected, and thus related, nodes. This relational structure can be interpreted as a latent hierarchy upon specifying a root node in the latent tree.

Let \(n\) be a positive integer and \(V\) be a non-empty finite subset of \(\mathbb{R}^{n}\), called a _pointcloud_. Illustrated by Figure 3.3, a _latent tree structure_ on \(V\) is a triple \((V,\mathcal{E},\mathcal{W})\) of a collection \(\mathcal{E}\) of pairs \(\{u,v\}\), called _edges_, of \(u,v\in V\) and an edge-weight map \(\mathcal{W}:\mathcal{E}\to(0,\infty)\) satisfying the following property: for every distinct pair \(u,v\in V\) there exists a unique sequence \(u=u_{0},\ldots,u_{i}=v\) of distinct _nodes_ in \(V\) such that the edges \(\{u_{0},u_{1}\},\ldots,\{u_{i-1},u_{i}\}\) belong to \(\mathcal{E}\); called a path from \(u\) to \(v\). Thus, \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) is a _finite weighted tree_ with positive edge weights. Any latent tree structure \(\mathcal{T}\) on \(V\) induces a distance function, or metric, between the points of \(V\). This distance function, denoted by \(d_{\mathcal{T}}\), measures the _length_ of the shortest path between pairs of points \(u,v\in V\) and is defined by \[d_{\mathcal{T}}(u,v)\stackrel{\text{\tiny def.}}{=}\inf\,\sum_{j=0}^{i-1}\,\mathcal{W}\big(\{u_{j},u_{j+1}\}\big)\] where the infimum is taken over all paths \(\{u_{0},u_{1}\},\ldots,\{u_{i-1},u_{i}\}\) from \(u=u_{0}\) to \(v=u_{i}\). If the weight function satisfies \(\mathcal{W}(\{u,v\})=1\) for all edges \(\{u,v\}\in\mathcal{E}\), then \(\mathcal{T}\) is called a _combinatorial tree_. In this case, the distance between any two nodes \(u,v\in V\) simplifies to the usual shortest path distance on an unweighted graph \[d_{\mathcal{T}}(u,v)=\inf\big\{i\,:\,\exists\,\{v,v_{1}\},\ldots\{v_{i-1},u\}\in\mathcal{E}\big\}. \tag{7}\] The _degree_ of any point, or _node/vertex_, \(v\in V\) is the number of edges emanating from \(v\); i.e. the cardinality of \(\{\{u,w\}\in\mathcal{E}:\,v\in\{u,w\}\}\). A node \(v\in V\) is called a _leaf_ of the tree \(\mathcal{T}\) if it has degree 1. E.g. in Figure 1.1a, all peripheral green points are leaves of the binary tree.

Figure 3.3: Figures 3.3(a) and 3.3(b) illustrate pointclouds in \(\mathbb{R}^{2}\) with the same latent tree structure. Both of these trees seem different when comparing their structure using the _Euclidean distances_; however, considering their latent tree structure instead reveals that they are identical as graphs. This illustrates how Euclidean geometry often fails to detect the true latent (relational) geometry describing the hierarchical structure between points in a pointcloud.

### Representations as Embeddings

As in Kratsios et al.
(2023), a representation, or encoding, of a latent tree structure on \(V\) is simply a function \(f:V\to\mathcal{R}\) into a space \(\mathcal{R}\) equipped with a distance function \(d_{\mathcal{R}}\), the pair \((\mathcal{R},d_{\mathcal{R}})\) of which is called a _representation space_. As in Giovanni et al. (2022), a representation \(f\) is considered "good" if it accurately preserves the geometry of the latent tree structure \(\mathcal{T}\) on \(V\). Following the classical computer science literature, Linial et al. (1995); Bartal (1996); Rabinovich and Raz (1998); Arora et al. (2009); Magen (2002); STO (2005), this means that \(f\) is injective, or 1-1, and it neither shrinks nor stretches the distances between pairs of nodes \(u,v\in V\) when compared by \(d_{\mathcal{R}}\); i.e. for each \(u,v\in V\) the following holds \[\alpha\,d_{\mathcal{T}}(u,v)\leq d_{\mathcal{R}}\big(f(u),f(v)\big)\leq\beta\,d_{\mathcal{T}}(u,v) \tag{8}\] where the constants \(0<\alpha\leq\beta<\infty\) are defined by \[\beta\stackrel{\text{\tiny def.}}{=}\max_{\begin{subarray}{c}u,v\in V\\ u\neq v\end{subarray}}\,\frac{d_{\mathcal{R}}\big(f(u),f(v)\big)}{d_{\mathcal{T}}(u,v)},\text{ and }\alpha\stackrel{\text{\tiny def.}}{=}\min_{\begin{subarray}{c}u,v\in V\\ u\neq v\end{subarray}}\,\frac{d_{\mathcal{R}}\big(f(u),f(v)\big)}{d_{\mathcal{T}}(u,v)}.\] These constants quantify the maximal shrinking (\(\alpha\)) and stretching (\(\beta\)) which \(f\) exerts on the geometry induced by the latent tree structure \(\mathcal{T}\) on \(V\). Note that, since \(V\) is finite, \(0<\alpha\leq\beta<\infty\) whenever \(f\) is injective. The total _distortion_ with which \(f\) perturbs the tree structure \(\mathcal{T}\) on \(V\) is denoted by \(\operatorname{dist}(f)\) and defined by \[\operatorname{dist}(f)\stackrel{\text{\tiny def.}}{=}\begin{cases}\frac{\beta}{\alpha}&\text{: if }f\text{ is injective}\\ \infty&\text{: otherwise}\end{cases} \tag{9}\] We say that a tree \(\mathcal{T}\) can be _asymptotically isometrically represented_ in \(\mathcal{R}\) if there is a sequence \((f_{n})_{n=1}^{\infty}\) of maps from \(V\) to \(\mathcal{R}\) whose distortion is asymptotically optimal; i.e. \(\lim\limits_{n\to\infty}\,\operatorname{dist}(f_{n})=1\). We note that a sequence of embeddings \(f_{n}\) need not have a limiting function mapping \(V\) to \(\mathcal{R}\) even if its distortion converges to 1; in particular, \((f_{n})_{n\in\mathbb{N}}\) need not converge to an isometry.

## 4 Graph Representation Learning Results

This section contains our main result, which establishes the main motivation behind HNNs, namely the belief that they represent trees to arbitrary precision in a two-dimensional hyperbolic space.

### Lower-Bounds on Distortion for MLP Embeddings of Latent Trees

The power of HNNs is best appreciated when juxtaposed against the _lower_ bounds on the minimal distortion implementable by any MLP embedding a large tree into a low-dimensional Euclidean space. In particular, there cannot exist any MLP model, regardless of its depth, width, or (possibly discontinuous) choice of activation function, which can outperform the embedding of a sufficiently overparameterized HNN using only a two-dimensional representation space.

**Theorem 4.1** (Lower-Bounds on the Distortion of Trees Embedded by MLPs): _Let \(L,n,d\in\mathbb{N}_{+}\), and fix an activation function \(\sigma:\mathbb{R}\to\mathbb{R}\)._
_For any finite \(V\subset\mathbb{R}^{n}\) with a latent combinatorial tree structure \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) having \(L>2^{d}\) leaves, if \(f:\mathbb{R}^{n}\to\mathbb{R}^{d}\) is an MLP with activation function \(\sigma\) satisfying_ \[\alpha\,d_{\mathcal{T}}(u,v)\leq\|f(u)-f(v)\|\leq\beta\,d_{\mathcal{T}}(u,v)\] _for all \(u,v\in V\) and some \(0<\alpha\leq\beta\) independent of \(u\) and \(v\), then \(f\) incurs a distortion \(\operatorname{dist}(f)\) of at least_ \[\operatorname{dist}(f)\geq\Omega(L^{1/d}).\] _The constant suppressed by \(\Omega\) is independent of the depth, width, number of trainable parameters, and the activation function \(\sigma\) defining the MLP._

Theorem 4.1 implies that if \(V\subseteq\mathbb{R}^{n}\) is large enough and has a latent tree structure with \(\Omega(4^{d^{2}})\) leaves, then any MLP \(f:\mathbb{R}^{n}\to\mathbb{R}^{d}\) cannot represent \((V,d_{\mathcal{T}})\) with a distortion of less than \(\Omega(4^{d})\). Therefore, if \(d\), \(\#V\), and \(L\) are large enough, any MLP must represent the latent tree structure on \(V\) arbitrarily poorly. We point out that the MLP's structure alone is not the cause of this limitation, since we have not imposed any structural constraints on its depth, width, number of trainable parameters, or its activation function; instead, it is the incompatibility between the geometry of a tree and that of a Euclidean space which no MLP can resolve.

### Upper-Bounds on the Complexity of HNNs Embeddings of Latent Trees

Our main positive result shows that the HNN model of Definition 2 can represent any pointcloud with a latent tree structure in the hyperbolic space \(\mathbb{H}_{\kappa}^{d}\) with an arbitrarily small distortion by a low-capacity HNN.

**Theorem 4.2** (HNNs Can Asymptotically Isometrically Represent Latent Trees): _Fix \(n,d,N\in\mathbb{N}_{+}\) with \(d\geq 2\) and fix \(\lambda>1\). For any \(N\)-point subset \(V\) of \(\mathbb{R}^{n}\) and any latent tree structure \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) on \(V\) of degree at least \(2\), there exists a curvature parameter \(\kappa<0\) and an HNN \(f:\mathbb{R}^{n}\to\mathbb{H}_{\kappa}^{d}\) such that_ \[\frac{1}{\lambda}\,d_{\mathcal{T}}(u,v)\leq d_{\kappa}(f(u),f(v))\leq\lambda\,d_{\mathcal{T}}(u,v)\] _holds for each pair of \(u,v\in V\). Moreover, the depth, width, and number of trainable parameters defining \(f\) are independent of \(\lambda\); recorded in Table 1._

Theorem 4.2 considers HNNs with the typical ReLU activation function. However, using an argument as in (Yarotsky, 2017, Proposition 1), the result can likely be extended to any other continuous piece-wise linear activation function with at least one piece/break, e.g. PReLU. Just as in (Yarotsky, 2017, Proposition 1), such modifications should only scale the network depth, width, and the number of its trainable parameters up by a constant factor depending only on the number of pieces of the chosen piece-wise linear activation function. Since the size of the tree in Theorem 4.2 did not constrain the embedding quality of an HNN, we immediately deduce the following corollary, which we juxtapose against Theorem 4.1.

**Corollary 4.3** (HNNs Can Asymptotically Embed Large Trees): _Let \(L,n,d\in\mathbb{N}_{+}\) with \(d\geq 2\)._
_For any finite \(V\subset\mathbb{R}^{n}\) with a latent combinatorial tree structure \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) with \(L>2^{d}\) leaves, and any \(r>0\), there exists a curvature parameter \(\kappa<0\) and an HNN \(f:\mathbb{R}^{n}\to\mathbb{H}_{\kappa}^{d}\) satisfying_ \[\mathrm{dist}(f)\leq 1+\frac{1}{L^{r}}.\]

### Experimental Illustrations

To gauge the validity of our theoretical results, we conduct a performance analysis for tree embedding. We compare the performance of HNNs with that of MLPs through a sequence of synthetic graph embedding experiments. Our primary focus lies on binary, ternary, and random trees. For the sake of an equitable comparison, we contrast MLPs and HNNs employing an equal number of parameters. Specifically, all models incorporate 10 blocks of linear layers, accompanied by batch normalization and ReLU activations, featuring 100 nodes in each hidden layer. The training process spans 10 epochs for all models, employing a batch size of 100,000 and a learning rate of \(10^{-2}\). The \(x\) and \(y\) coordinates of the graph nodes in \(\mathbb{R}^{2}\) are fed into both the MLP and HNN networks, which are tasked to map them to a new embedding space. An algorithm is used to generate input coordinates, simulating a force-directed layout of the tree. In this simulation, edges are treated as springs, pulling nodes together, while nodes are treated as objects with repelling forces akin to an anti-gravity effect. This simulation iterates until the positions reach a state of equilibrium. The algorithm can be reproduced using the NetworkX library and the spring layout for the graph. Counting the neighbourhood hops as in (7) defines the distance between nodes, resulting in a scalar value. The networks must discover a suitable representation to estimate this distance. We update the networks based on the MSE loss comparing the actual distance between nodes, \(d_{true}\), to the predicted distance based on the network mappings, \(d_{pred}\): \[Loss=MSE(d_{true},d_{pred}). \tag{10}\] In the case of the MLP the predicted distance is computed using \[d_{pred}=\|MLP(x_{1},y_{1})-MLP(x_{2},y_{2})\|_{2}, \tag{11}\] where \((x_{1},y_{1})\) and \((x_{2},y_{2})\) are the coordinates in \(\mathbb{R}^{2}\) of a synthetically generated latent tree (which may be binary, ternary or random). For the HNN, the predicted distance is computed using \[d_{pred}=d_{-1}(HNN(x_{1},y_{1}),HNN(x_{2},y_{2})). \tag{12}\] In the case of the HNN, we use the hyperboloid model, with an exponential map at the pole, to map the representations to hyperbolic space. In particular, we do not even require any hyperbolic biases to observe the gap in performance between the MLP and HNN models, which are trained to embed the latent trees by optimizing the loss (10) with the predicted distances (11) and (12), respectively. We conduct embedding experiments on graphs ranging from 1,000 to 4,000 nodes, and we assess the impact of employing various dimensions for the tree embedding spaces. Specifically, we explore dimensionalities in multiples of 2, ranging from 2 to 8. In Figure 4.4, we can observe that HNNs consistently outperform MLPs at embedding trees, achieving a lower MSE error in all configurations. We now overview the derivation of our upper bounds on the embedding capabilities of HNNs with ReLU activation function and our lower-bounds on MLPs with any activation function for pointclouds with latent tree structure.

Figure 4.4: Tree embedding error surfaces for different trees using MLPs and HNNs.
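Before turning to the proofs, we include a condensed PyTorch sketch of the experiment just described, assuming a NetworkX binary tree and an exponential map at the pole as the hyperbolic encoder; the tree generator, architecture, and hyperparameters shown here are illustrative stand-ins rather than the exact experimental code, and the commented-out line recovers the Euclidean variant (11).

```python
import networkx as nx
import torch

G = nx.balanced_tree(r=2, h=9)                    # synthetic binary tree, 2^10 - 1 nodes
nodes = list(G.nodes)
pos = nx.spring_layout(G, dim=2, seed=0)          # force-directed coordinates in R^2
X = torch.tensor([pos[v] for v in nodes], dtype=torch.float32)
hops = dict(nx.all_pairs_shortest_path_length(G)) # hop distances, as in (7)

def exp_at_pole(v):
    # Encode MLP outputs v in R^d as hyperboloid points via exp at the pole 1_d.
    r = v.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    return torch.cat([torch.sinh(r) * v / r, torch.cosh(r)], dim=-1)

def d_hyp(p, q):
    # d_{-1}(p, q) = arccosh(-<p|q>_M) on the hyperboloid.
    inner = (p[..., :-1] * q[..., :-1]).sum(-1) - p[..., -1] * q[..., -1]
    return torch.acosh((-inner).clamp_min(1.0 + 1e-7))

mlp = torch.nn.Sequential(torch.nn.Linear(2, 100), torch.nn.ReLU(), torch.nn.Linear(100, 2))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)

for step in range(200):
    i = torch.randint(len(nodes), (1024,))
    j = torch.randint(len(nodes), (1024,))
    z_i, z_j = exp_at_pole(mlp(X[i])), exp_at_pole(mlp(X[j]))
    d_pred = d_hyp(z_i, z_j)                                 # HNN variant, eq. (12)
    # d_pred = (mlp(X[i]) - mlp(X[j])).norm(dim=-1)          # MLP variant, eq. (11)
    d_true = torch.tensor([hops[nodes[a]][nodes[b]]
                           for a, b in zip(i.tolist(), j.tolist())], dtype=torch.float32)
    loss = torch.nn.functional.mse_loss(d_pred, d_true)      # eq. (10)
    opt.zero_grad(); loss.backward(); opt.step()
```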
## 5 Theoretical Analysis

We first prove Theorem 4.1, which follows relatively quickly from the classical results of Matousek (1999) and Gupta (2000) from the metric embedding theory and computer science literature. We then outline the proof of Theorem 4.2, which is much more involved, both technically and geometrically, the details of which are relegated to Section 5.3. Corollary 4.3 is directly deduced from Theorem 4.2.

### Proof of the Lower-Bound - Theorem 4.1

We begin by discussing the proof of the lower-bound.

**Proof** [Proof of Theorem 4.1] Any MLP \(f:\mathbb{R}^{n}\to\mathbb{R}^{d}\) with any activation function \(\sigma:\mathbb{R}\to\mathbb{R}\) for which there exist constants \(0<\alpha\leq\beta\) satisfying \[\alpha\,d_{\mathcal{T}}(u,v)\leq\|f(u)-f(v)\|\leq\beta\,d_{\mathcal{T}}(u,v)\] defines a bi-Lipschitz embedding of the tree-metric space \((V,d_{\mathcal{T}})\) into the \(d\)-dimensional Euclidean space. Since \(L>2^{d}\), we may apply (Gupta, 2000, Proposition 5.1), which is a version, valid for general trees, of a negative result of Bourgain (1986) and of Matousek (1999) on the non-embeddability of large trees in Euclidean space. The result, namely (Gupta, 2000, Proposition 5.1), implies that any bi-Lipschitz embedding of \((V,d_{\mathcal{T}})\) into \((\mathbb{R}^{d},\|\cdot\|_{2})\) must incur a distortion no less than \(\Omega(L^{1/d})\); in particular, this is the case for \(f\). Therefore, \(\frac{\beta}{\alpha}\geq\Omega(L^{1/d})\). \(\blacksquare\)

The proof of the lower-bound shows that there cannot be _any_ deep learning model which injectively maps \((V,d_{\mathcal{T}})\) into a \(d\)-dimensional Euclidean space with a distortion of less than \(\Omega(L^{1/d})\).

### Proof of Theorem 4.2

We showcase the three critical steps in deriving Theorem 4.2. First, we show that HNNs can implement, i.e. memorize, arbitrary functions from any \(\mathbb{R}^{n}\) to any \(\mathbb{H}^{d}\), for arbitrary integer dimensions \(n\) and \(d\). Second, we construct a sequence of embeddings, whose distortion asymptotically tends to \(1\), into a sequence of hyperbolic spaces \((\mathbb{H}^{d}_{\kappa},d_{\kappa})\) of arbitrarily negative sectional curvature \(\kappa\). We then apply our memorization result to deduce that these "asymptotically isometric" embeddings can be implemented by HNNs, quantitatively. The proofs of both main lemmata are relegated to Section 5.3.

**Step 1 - Exhibiting a Memorizing HNN and Estimating its Capacity**

We first need the following quantitative memorization guarantee for HNNs.

**Lemma 5.1** (Upper-Bound on the Memory Capacity of an HNN): _Fix \(N,n,d\in\mathbb{N}_{+}\). For any \(N\)-point subset \(V\subset\mathbb{R}^{n}\) and any function \(f^{\star}:\mathbb{R}^{n}\to\mathbb{H}^{d}\), there exists an HNN \(f:\mathbb{R}^{n}\to\mathbb{H}^{d}\) satisfying_ \[f(v)=f^{\star}(v)\] _for each \(v\in V\). Moreover, the depth, width, and number of trainable parameters defining \(f\) are bounded-above in Table 1._

The capacity estimates for the HNNs constructed in Theorem 4.2 and its supporting Lemma 5.1 depend on the configuration of the pointcloud \(V\) in \(\mathbb{R}^{n}\) with respect to the Euclidean geometry of \(\mathbb{R}^{n}\). The configuration is quantified by the ratio of the largest distance between distinct points over the smallest distance between distinct points, called the _aspect ratio_ on (Kratsios et al., 2023, page 9), also called the separation condition in the MLP memorization literature; e.g. (Park et al., 2021, Definition 1).
\[\operatorname{aspect}(V)\stackrel{\text{\tiny def.}}{=}\frac{\max_{x,\tilde{x}\in V}\|x-\tilde{x}\|_{2}}{\min_{x,\tilde{x}\in V;\,x\neq\tilde{x}}\|x-\tilde{x}\|_{2}}.\] Variants of the aspect ratio have also appeared in computer science, e.g. (Goemans et al., 2001; Newman and Rabinovich, 2023), and in the related metric embedding literature; e.g. (Krauthgamer et al., 2005).

#### Step 2 - Constructing An Asymptotically Optimal Embedding Into \(\mathbb{H}^{d}_{\kappa}\)

**Lemma 5.2** (HNNs Universally Embed Trees Into Hyperbolic Spaces): _Fix \(n,d,N\in\mathbb{N}_{+}\) with \(d\geq 2\) and fix \(\lambda>1\). For any \(N\)-point subset \(V\) of \(\mathbb{R}^{n}\) and any latent tree structure \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) on \(V\) of degree at least \(2\), there exists a map \(f^{\star}:\mathbb{R}^{n}\to\mathbb{H}^{d}\) and a sectional curvature \(\kappa<0\) satisfying_ \[\frac{1}{\lambda}\,d_{\mathcal{T}}(u,v)<d_{\kappa}(f^{\star}(u),f^{\star}(v))<\lambda\,d_{\mathcal{T}}(u,v) \tag{13}\] _for each \(u,v\in V\). Furthermore, \(\kappa\) tends to \(-\infty\) as \(\lambda\) tends to \(1\)._

#### Step 3 - Memorizing the Embedding Into \(\mathbb{H}^{d}_{\kappa}\) with an HNN

**Proof** [Proof of Theorem 4.2] Fix \(n,d,N\in\mathbb{N}_{+}\) with \(d\geq 2\) and \(\lambda>1\). Let \(V\) be an \(N\)-point subset of \(\mathbb{R}^{n}\) and \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) be a latent tree structure on \(V\) of degree at least \(2\). By Lemma 5.2, there exists a \(\kappa<0\) and a \(\lambda\)-isometric embedding \(f^{\star}:(V,d_{\mathcal{T}})\to(\mathbb{H}^{d}_{\kappa},d_{\kappa})\); i.e. (13) holds. Since \(V\) is a non-empty finite subset of \(\mathbb{R}^{n}\), we may apply Lemma 5.1 to infer that there exists an HNN \(f:\mathbb{R}^{n}\to\mathbb{H}^{d}\) satisfying \(f(v)=f^{\star}(v)\) for each \(v\in V\). Furthermore, its depth, width, and the number of its trainable parameters are recorded in Table 1. This concludes our proof. \(\blacksquare\)

**Proof** [Proof of Corollary 4.3] The result follows upon taking \(\lambda=(1+L^{-r})^{1/2}\) in Theorem 4.2, since then \(\operatorname{dist}(f)\leq\lambda^{2}=1+L^{-r}\). \(\blacksquare\)

### Details on the Proof of the Upper-Bound In Theorem 4.2

We now provide the explicit derivations of all the above lemmata used to prove Theorem 4.2.

**Proof** [Proof of Lemma 5.1] **Overview:** _The proof of this lemma can be broken down into \(4\) steps. First, we linearize the function \(f^{\star}\) to be memorized by associating it to a function between Euclidean spaces. Next, we memorize the transformed function in Euclidean space using an MLP with \(\operatorname{ReLU}\) activation function, which we then transform to an \(\mathbb{H}^{d}\)-valued function which memorizes \(f^{\star}\). We then show that this transformed MLP can be implemented by an HNN. Finally, we tally the parameters of this HNN representation of the transformed MLP._

_Step 1 - Standardizing Inputs and Outputs of the Function to be Memorized_

Since \(\mathbb{H}^{d}_{\kappa}\) is a simply connected Riemannian manifold of non-positive curvature, the Cartan-Hadamard Theorem, as formulated in (Jost, 2017, Corollary 6.9.1), implies that the map \(\exp_{x}:T_{x}(\mathbb{H}^{d}_{\kappa})\to\mathbb{H}^{d}_{\kappa}\) is a global diffeomorphism. Therefore, the map \(\log_{x}:\mathbb{H}^{d}_{\kappa}\to T_{x}(\mathbb{H}^{d}_{\kappa})\) is well-defined and a bijection. In particular, this is the case for \(x=\mathbf{1}_{d}\).
Therefore, the map \(\pi_{d}\circ\log_{\mathbf{1}_{d}}:\mathbb{H}^{d}\to\mathbb{R}^{d}\) is a bijection. Consider the map \(\bar{f}:\mathbb{R}^{n}\to\mathbb{R}^{d}\) defined by \[\bar{f}\stackrel{\text{\tiny def.}}{=}\pi_{d}\circ\log_{\mathbf{1}_{d}}\circ f^{\star}.\] Note that, since \(\pi_{d}:T_{\mathbf{1}_{d}}(\mathbb{H}^{d})\to\mathbb{R}^{d}\) is a linear isomorphism, it is a bijection and \(\iota_{d}\) is its two-sided inverse. Therefore, the definition of \(\bar{f}\) implies that \[(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ\bar{f}=f^{\star}. \tag{14}\]

_Step 2 - Memorizing the Standardized Function_

Since \(V\subseteq\mathbb{R}^{n}\) and \(\bar{f}:\mathbb{R}^{n}\to\mathbb{R}^{d}\), we may apply (Kratsios et al., 2023, Lemma 20) to deduce that there is an MLP (feedforward neural network) with ReLU activation function \(\tilde{f}:\mathbb{R}^{n}\to\mathbb{R}^{d}\) that interpolates \(\bar{f}\); i.e. there are positive integers \(I\), \(n=d_{0},\ldots,d_{I+2}=d\in\mathbb{N}_{+}\), such that for each \(i=1,\ldots,I+1\) there is a \(d_{i+1}\times d_{i}\) matrix \(A^{(i)}\) and a vector \(b^{(i)}\in\mathbb{R}^{d_{i+1}}\) implementing the representation \[\begin{split}&\tilde{f}(u)=A^{(I+1)}\,u^{(I)}+b^{(I+1)}\\ &u^{(i)}=\operatorname{ReLU}\bullet(A^{(i)}u^{(i-1)}+b^{(i)})\\ &u^{(0)}=u\end{split} \tag{15}\] for each \(u\in\mathbb{R}^{n}\), and satisfying the interpolation/memorization condition \[\tilde{f}(v)=\bar{f}(v) \tag{16}\] for each \(v\in V\). Furthermore, its depth, width, and number of non-zero/trainable parameters are:

1. The _width_ of \(\tilde{f}\) is \(n(N-1)+\max\{d,12\}\),
2. The _depth_ \((I)\) of \(\tilde{f}\) is \[\mathcal{O}\left(N\left\{1+\sqrt{N\log N}\left[1+\frac{\log(2)}{\log(n)}\left(C_{n}+\frac{\log\left(N^{2}\operatorname{aspect}(V,\|\cdot\|_{2})\right)}{\log(2)}\right)_{+}\right]\right\}\right),\]
3. The _number of (non-zero) trainable parameters_ of \(\tilde{f}\) is \[\mathcal{O}\left(N\left(\frac{11}{4}\max\{n,d\}N^{2}-1\right)\left\{d+\sqrt{N\log N}\left[1+\frac{\log(2)}{\log(n)}\left(C_{n}+\frac{\log\left(N^{2}\operatorname{aspect}(V,\|\cdot\|_{2})\right)}{\log(2)}\right)_{+}\right]\max\{d,12\}(1+\max\{d,12\})\right\}\right).\]

**Comment:** _In the proof of the main result, the aspect ratio \(\operatorname{aspect}(V)\) will not be considered with respect to the shortest path distance \(d_{\mathcal{T}}\) on \(V\), given by its latent tree structure, but rather with respect to the Euclidean distance \(\|\cdot\|_{2}\) on \(\mathbb{R}^{n}\). This is because the only role of the MLP \(\tilde{f}\) is to interpolate the values of the function \(\bar{f}\) between Euclidean spaces. The ability of an MLP to do so depends on how close those points are to one another in the Euclidean sense._

Combining (14) with (16), together with the fact that \(\exp_{\mathbf{1}_{d}}\circ\iota_{d}\) is a bijection, implies that the following \[\begin{split}f^{\star}(v)=&(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ\bar{f}(v)\\ =&(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ\tilde{f}(v)\\ =&(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ(\tilde{f}\circ\log_{\mathbf{1}_{n}})\circ\exp_{\mathbf{1}_{n}}(v)\end{split} \tag{17}\] holds for every \(v\in V\). It remains to be shown that the function on the right-hand side of (17) can be implemented by an HNN.

_Step 3 - Representing \(\exp_{\mathbf{1}_{d}}\circ\iota_{d}\circ\tilde{f}\) as an HNN_

For \(i=0,\ldots,I+1\) set \(c^{(i)}\stackrel{\mathrm{def}}{=}\mathbf{1}_{d_{i}}\).
Observe that, for each \(i=1,\ldots,I\), \(\exp_{c^{(i)}}\circ\log_{c^{(i)}}=1_{T_{c^{(i)}}(\mathbb{H}^{d_{i}})}\) and that the following holds \[\begin{split}\underline{\oplus}_{c^{(i)}}\circ\overline{\oplus}^{c^{(i)}}=&\pi_{d^{(i)}}\circ P_{c^{(i)}\mapsto\mathbf{1}_{d_{i}}}\circ P_{\mathbf{1}_{d_{i}}\mapsto c^{(i)}}\circ\iota_{d^{(i)}}\\ =&\pi_{d^{(i)}}\circ P_{\mathbf{1}_{d_{i}}\mapsto\mathbf{1}_{d_{i}}}\circ P_{\mathbf{1}_{d_{i}}\mapsto\mathbf{1}_{d_{i}}}\circ\iota_{d^{(i)}}\\ =&\pi_{d^{(i)}}\circ 1_{T_{\mathbf{1}_{d_{i}}}(\mathbb{H}^{d_{i}})}\circ 1_{T_{\mathbf{1}_{d_{i}}}(\mathbb{H}^{d_{i}})}\circ\iota_{d^{(i)}}\\ =&\pi_{d^{(i)}}\circ\iota_{d^{(i)}}\\ =&1_{\mathbb{R}^{d_{i}}},\end{split} \tag{18}\] where the second equality in (18) follows from our definition of \(c^{(i)}\) and the third follows since the parallel transport from \(T_{\mathbf{1}_{d_{i}}}(\mathbb{H}^{d_{i}})\) to itself along the unique distance minimizing curve (geodesic) \(\gamma:[0,1]\rightarrow\mathbb{H}^{d_{i}}\) emanating from and terminating at \(\mathbf{1}_{d_{i}}\) is the identity map; namely, \(\gamma(t)=\mathbf{1}_{d_{i}}\) for all \(0\leq t\leq 1\). Therefore, any HNN with the representation of Definition 2 and these specifications of \(c^{(0)}\), \(\ldots\), \(c^{(I+1)}\) can be represented as \[(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ(g\circ\log_{\mathbf{1}_{n}})\circ\exp_{\mathbf{1}_{n}}\] where the map \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}\) is an MLP with ReLU activation function; i.e. it can be represented as \[\begin{split}&g(u)=\tilde{A}^{(\tilde{I}+1)}\,u^{(\tilde{I})}+\tilde{b}^{(\tilde{I}+1)}\\ &u^{(i)}=\operatorname{ReLU}\bullet\bigl(\tilde{A}^{(i)}u^{(i-1)}+\tilde{b}^{(i)}\bigr)\\ &u^{(0)}=u\end{split} \tag{20}\] for some integers \(\tilde{I}\), \(n=\tilde{d}_{0},\ldots,\tilde{d}_{\tilde{I}+2}=d\in\mathbb{N}_{+}\), \(\tilde{d}_{i+1}\times\tilde{d}_{i}\) matrices \(\tilde{A}^{(i)}\), and vectors \(\tilde{b}^{(i)}\in\mathbb{R}^{\tilde{d}_{i+1}}\). Setting \(g\stackrel{\mathrm{def}}{=}\tilde{f}\) implies that the map \((\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ(\tilde{f}\circ\log_{\mathbf{1}_{n}})\circ\exp_{\mathbf{1}_{n}}\) in (17) defines an HNN.

_Step 4 - Tallying Trainable Parameters_

By construction, the depth and width of \(f\) are respectively equal to the depth and width of \(\tilde{f}\). The number of parameters defining \(f\) is equal to the number of parameters defining \(\tilde{f}\) plus \(I+2\), since \(\|c^{(i)}\|_{0}=1\) for each \(i=0,\ldots,I+1\). \(\blacksquare\)

Our proof of Lemma 5.2 relies on some concepts from metric geometry, which we now gather here before deriving the result. A geometric realization of a (positively) weighted graph \(G\) can be seen as the metric space obtained by gluing together real intervals of lengths equal to corresponding weights at corresponding endpoints, according to the pattern of \(G\), with the shortest path distance. Following (Das et al., 2017, Definition 3.1.1), this is formalized as the following metric space.
**Definition 3** (Geometric Realization Of A Weighted Tree): _A geometric realization of a weighted tree \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) is the metric space \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) whose pointset is \(X_{\mathcal{T}}=V\cup\left(\bigcup_{\{u,v\}\in\mathcal{E}}\left(\{(u,v)\}\times[0,W(u,v)]\right)\right)/\sim\), where \(\sim\) denotes the quotient defined by the following identifications_ \[v\sim((v,u),0)\quad\text{ for all }\{u,v\}\in\mathcal{E}\] \[((v,u),t)\sim((u,v),W(\{u,v\})-t)\quad\text{ for all }\{u,v\}\in\mathcal{E}\text{ and all }t\in[0,W(\{u,v\})]\] _and whose metric \(d_{X_{\mathcal{T}}}\) on \(X_{\mathcal{T}}\) maps any pair of (equivalence classes of) points \(((u_{0},u_{1}),t)\) and \(((v_{0},v_{1}),s)\) in \(X_{\mathcal{T}}\) to the non-negative number_ \[\min_{i,j\in\{0,1\}}|t-iW(u_{0},u_{1})|+d_{\mathcal{T}}(u_{i},v_{j})+|s-jW(v_{0},v_{1})|.\]

We call a metric space a _simplicial tree_ if it is essentially the same as a tree whose edges are finite closed real intervals, with the shortest path distance. Simplicial trees are a special case of the following broader class of well-studied metric spaces, which we introduce to synchronize with the metric geometry literature, since it formulated many results using this broader class.

**Definition 4** (\(\mathbb{R}\)-Tree): _A metric space \((X,d)\) is called an \(\mathbb{R}\)-tree if \(X\) is connected and for all \(x,y,z,w\in X\),_ \[(x,y)_{w}\geq\min\{(x,z)_{w},(z,y)_{w}\}\] _where \((x,y)_{w}\) denotes the Gromov product_ \[(x,y)_{w}=\frac{1}{2}[d(x,w)+d(w,y)-d(x,y)].\]

**Definition 5** (Valency): _The valency of the geometric realization of a metric space \(X\) at a point \(x\) is defined as the cardinality of the set of connected components of \(X\backslash\{x\}\)._

**Proof** [Proof of Lemma 5.2] _The proof of this lemma can be broken down into \(3\) steps. First, we isometrically embed the tree into an \(\mathbb{R}\)-tree, thus representing our discrete space as a more tractable connected (uniquely) geodesic metric space. This \(\mathbb{R}\)-tree is then isometrically embedded into a canonical \(\mathbb{R}\)-tree whose structure is regular and for which embeddings are exhibited more easily. Next, we "asymptotically embed" this regular \(\mathbb{R}\)-tree into the boundary of the hyperbolic space, upon perturbing the embedding and adjusting the curvature of the hyperbolic space. We deduce the lemma upon composing all three embeddings._

_Step 1 - Isometric Embedding of \((V,d_{\mathcal{T}})\) Into an \(\mathbb{R}\)-Tree_

If \(V\) has only one point, then the result is trivial. Therefore, we will always assume that \(V\) has at least two points. For each vertex \(v\in V\), pick a different \(w^{v}\in V\) such that \(\{v,w^{v}\}\in\mathcal{E}\) and \(W(v,w^{v})\geq W(v,u)\) for all \(\{u,v\}\in\mathcal{E}\); i.e. \(w^{v}\) is adjacent to \(v\) in the weighted graph \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\). Consider the map \(\varphi_{1}:V\to X_{\mathcal{T}}\) defined for any vertex \(v\in V\) by \[\varphi_{1}:v\mapsto((v,w^{v}),0).\] By definition \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) is a simplicial tree and therefore, by (Das et al., 2017, Corollary 3.1.13), it is an \(\mathbb{R}\)-tree. In every \(\mathbb{R}\)-tree there is a unique shortest path (geodesic) connecting any pair of points.
This is because, by (Das et al., 2017, Observation 3.2.6), all \(\mathbb{R}\)-trees satisfy the CAT\((-1)\) condition, as defined in (Bridson and Haefliger, 1999, Definition II.1.1), and in any metric space satisfying the CAT\((-1)\) condition there is exactly one shortest path connecting every pair of points by (Bridson and Haefliger, 1999, Chapter II.1 - Proposition 1.4 (1)). Moreover, (Chiswell, 2001, Chapter 3 - Lemma 1.4) implies that if \(x=x_{0},\ldots,x_{N}=y\) are (distinct) points in an \(\mathbb{R}\)-tree such as \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\), for some \(N\in\mathbb{N}\), lying on _the_ geodesic (minimal length path) joining \(x\) to \(y\), then \[d_{X_{\mathcal{T}}}(x,y)=\sum_{i=0}^{N-1}\,d_{X_{\mathcal{T}}}(x_{i},x_{i+1}). \tag{21}\] Since \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) is a weighted tree, there is exactly one path comprised of distinct points joining any two nodes in a tree (a so-called reduced path in \(\mathcal{T}\)), independently of the weighting function \(\mathcal{W}\); see e.g. (Chiswell, 2001, Chapter 2 - Lemma 1.4). Therefore, for any \(v,u\in V\) there exists exactly one such finite sequence \(u=u_{0},\ldots,u_{N}=v\) of distinct points (whenever \(u\neq v\), with the case where \(u=v\) being trivial). By definition of \(\varphi_{1}\) and the above remarks on \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) being uniquely geodesic, we have that there exists exactly one geodesic (minimal length curve) \(\gamma:[0,1]\to X_{\mathcal{T}}\) with constant speed satisfying \[\gamma(t_{i})=\varphi_{1}(u_{i})\] for some distinct "times" \(0=t_{0}<\cdots<t_{N}=1\). Therefore, (21) implies that \[d_{X_{\mathcal{T}}}(\varphi_{1}(u),\varphi_{1}(v))=\sum_{i=0}^{N-1}\,d_{X_{\mathcal{T}}}(\varphi_{1}(u_{i}),\varphi_{1}(u_{i+1})). \tag{22}\] Since, for \(i=0,\ldots,N-1\), \(u_{i}\) and \(u_{i+1}\) are adjacent in \(\mathcal{T}\), meaning that \(\{u_{i},u_{i+1}\}\in\mathcal{E}\), the distance \(d_{X_{\mathcal{T}}}(\varphi_{1}(u_{i}),\varphi_{1}(u_{i+1}))\) reduces to \[\begin{split}d_{X_{\mathcal{T}}}(\varphi_{1}(u_{i}),\varphi_{1}(u_{i+1}))=&\min_{k=0,1,\,j=0,1}\big|0-kW(u_{i},w^{u_{i}})\big|+d_{\mathcal{T}}(w_{k},v_{j})+\big|0-jW(u_{i+1},w^{u_{i+1}})\big|\end{split} \tag{23}\] \[=\big|0-0W(u_{i},w^{u_{i}})\big|+W(u_{i},u_{i+1})+\big|0-0W(u_{i+1},w^{u_{i+1}})\big| \tag{24}\] \[=W(u_{i},u_{i+1}), \tag{25}\] where \(w_{0}=u_{i}\), \(w_{1}=w^{u_{i}}\), \(v_{0}=u_{i+1}\), \(v_{1}=w^{u_{i+1}}\), and (24) holds by definition of \(w^{u_{i}}\) and \(w^{u_{i+1}}\) together with the fact that \(\{u_{i},u_{i+1}\}\in\mathcal{E}\), which implies that \(\{u_{i},u_{i+1}\}\) is a geodesic in \((V,d_{\mathcal{T}})\). Combining the computation in (23)-(25) with (22) yields \[d_{X_{\mathcal{T}}}(\varphi_{1}(u),\varphi_{1}(v))=\sum_{i=0}^{N-1}\,W(u_{i},u_{i+1})=d_{\mathcal{T}}(u,v), \tag{26}\] where the right-hand side of (26) holds since \((\{u_{i},u_{i+1}\})_{i=0}^{N-1}\) was the unique path in \(\mathcal{T}\) of distinct points from \(u\) to \(v\) and by definition of the shortest path distance in a graph. Consequently, (26) shows that \(\varphi_{1}\) is an isometric embedding of \((V,d_{\mathcal{T}})\) into \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\).
_Step 2 - Embedding \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) Into A Universal \(\mathbb{R}\)-Tree_

Since \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) has valency at-most \(1\leq\mu=\deg(\mathcal{T})<\#\mathbb{N}<2^{\aleph_{0}}\), then (Dyubina and Polterovich, 2001, Theorem 1.2.3 (i)) implies that there exists10 an \(\mathbb{R}\)-tree \((A_{\mu},d_{A_{\mu}})\) of valency at-most \(\mu\) and an isometric embedding \(\varphi_{2}:(X_{\mathcal{T}},d_{X_{\mathcal{T}}})\to(A_{\mu},d_{A_{\mu}})\). Footnote 10: The metric space \((A_{\mu},d_{A_{\mu}})\) is constructed explicitly in (Dyubina and Polterovich, 2001, Definition 1.1.1) but its existence dates back earlier to Nikiel (1989); Mayer et al. (1992).

_Step 3 - The Universal \(\mathbb{R}\)-Tree At \(\infty\) In the Hyperbolic Space \((\mathbb{H}_{-1}^{d},d_{-1})\)_

By (Bridson and Haefliger, 1999, Proposition 6.17) the hyperbolic spaces \((\mathbb{H}_{-1}^{d},d_{-1})\) have the structure of a simply connected and geodesically complete Riemannian manifold, and by the computations on (Jost, 2017, pages 276-277) they have constant negative sectional curvature equal to \(-1\). Now, since \(\mu<2^{\aleph_{0}}\), the just-discussed properties of \((\mathbb{H}_{-1}^{d},d_{-1})\) guarantee that (Dyubina and Polterovich, 2001, Theorem 1.2.3 (i)) applies. Now (Dyubina and Polterovich, 2001, Theorem 1.2.3 (i)), together with (Dyubina and Polterovich, 2001, Definition 1.2.1), imply that the following holds: There is a diverging sequence \((\lambda_{n})_{n=0}^{\infty}\) of positive real numbers such that for every \(x\in A_{\mu}\) there is a sequence \((x^{n})_{n=0}^{\infty}\) in \(\mathbb{H}^{d}\) such that for every \(\varepsilon>0\) there is an \(n_{\varepsilon}\in\mathbb{N}_{+}\) such that for every integer \(n\geq n_{\varepsilon}\) we have \[\sup_{x,y\in A_{\mu}}\Big|\frac{d_{-1}(x^{n},y^{n})}{\lambda_{n}}-d_{A_{\mu}}(x,y)\Big|<\varepsilon/2. \tag{27}\] In particular, (27) holds for all \(x,y\in\varphi_{2}\circ\varphi_{1}(V)\subseteq A_{\mu}\). Since \(V\) is finite, then so is \(\varphi_{2}\circ\varphi_{1}(V)\), and since \(\mathbb{H}^{d}\) is simply connected, for every \(x\in\varphi_{2}\circ\varphi_{1}(V)\) there exists a point \(\tilde{x}^{n_{\varepsilon}}\) for which \(d_{-1}(x^{n_{\varepsilon}},\tilde{x}^{n_{\varepsilon}})<\varepsilon/4\) and such that \(\{\tilde{x}^{n_{\varepsilon}}\}_{x\in\varphi_{2}\circ\varphi_{1}(V)}\) and \(\varphi_{2}\circ\varphi_{1}(V)\) have equal numbers of points. Since \(\varphi_{2}\) and \(\varphi_{1}\) are isometric embeddings, they are injective; whence, \(\{\tilde{x}^{n_{\varepsilon}}\}_{x\in\varphi_{2}\circ\varphi_{1}(V)}\) and \(V\) have equal numbers of points. Define \(\varphi_{3}:\varphi_{2}\circ\varphi_{1}(V)\to\mathbb{H}^{d}\) by \(x\mapsto\tilde{x}^{n_{\varepsilon}}\), for each \(x\in\varphi_{2}\circ\varphi_{1}(V)\). Therefore, the map \(\varphi_{3}:\varphi_{2}\circ\varphi_{1}(V)\to\mathbb{H}^{d}\) is injective and, by (27), it satisfies \[\max_{x,y\in\varphi_{2}\circ\varphi_{1}(V)}\left|\frac{d_{-1}(\varphi_{3}(x),\varphi_{3}(y))}{\lambda_{n_{\varepsilon}}}-d_{A_{\mu}}(x,y)\right|<\varepsilon.\] Define the map \(f^{\star}:\mathbb{R}^{n}\to\mathbb{H}^{d}\) as _any_ extension of the map \(\varphi_{3}\circ\varphi_{2}\circ\varphi_{1}:V\to\mathbb{H}^{d}\).
Thus, \(f^{\star}|_{V}=(\varphi_{3}\circ\varphi_{2}\circ\varphi_{1})|_{V}\) and therefore \(f^{\star}\) satisfies the following \[d_{\mathcal{T}}(u,v)-\varepsilon=d_{A_{\mu}}(\varphi_{2}\circ\varphi_{1}(u),\varphi_{2}\circ\varphi_{1}(v))-\varepsilon \tag{28}\] \[<\frac{d_{-1}(f^{\star}(u),f^{\star}(v))}{\lambda_{n_{\varepsilon}}}\] \[<d_{A_{\mu}}(\varphi_{2}\circ\varphi_{1}(u),\varphi_{2}\circ\varphi_{1}(v))+\varepsilon\] \[=d_{\mathcal{T}}(u,v)+\varepsilon \tag{29}\] for each \(u,v\in V\); where the equalities (28) and (29) hold by virtue of \(\varphi_{2}\) and \(\varphi_{1}\) being isometries and since the composition of isometries is itself an isometry.

_Step 4 - Selecting The Correct Curvature on \(\mathbb{H}^{d}_{\kappa}\) by Re-scaling The Metric \(d_{-1}\)_

Set \(\kappa_{\varepsilon}\stackrel{\text{\tiny def.}}{=}-\lambda_{n_{\varepsilon}}^{2}\). By definition of \(d_{\kappa_{\varepsilon}}\), see (Bridson and Haefliger, 1999, Definition 2.10), the chain of inequalities in (28)-(29) implies that \[d_{\mathcal{T}}(u,v)-\varepsilon<d_{\kappa_{\varepsilon}}(f^{\star}(u),f^{\star}(v))<d_{\mathcal{T}}(u,v)+\varepsilon. \tag{30}\] Set \(\delta\stackrel{\text{\tiny def.}}{=}\min_{x,\tilde{x}\in V;\,x\neq\tilde{x}}d_{\mathcal{T}}(x,\tilde{x})\), which is positive since \(V\) is finite. Note that for any \(0<\varepsilon<\delta\), the distortion of \(f^{\star}\) is at most \[\max_{\begin{subarray}{c}x,\tilde{x}\in V\\ x\neq\tilde{x}\end{subarray}}\frac{d_{\mathcal{T}}(x,\tilde{x})+\varepsilon}{d_{\mathcal{T}}(x,\tilde{x})-\varepsilon}=\frac{\delta+\varepsilon}{\delta-\varepsilon} \tag{31}\] and that the right-hand side of (31) tends to \(1\) as \(\varepsilon\to 0\). Thus, we may choose \(\varepsilon>0\) small enough to ensure that the bi-Lipschitz estimate (13) holds; relabelling \(\kappa\stackrel{\text{\tiny def.}}{=}\kappa_{\varepsilon}\) accordingly. \(\blacksquare\)

## 6 Conclusion

We have established lower bounds on the smallest achievable distortion by any Multi-Layer Perceptron (MLP) embedding a large latent metric tree into a Euclidean space, as proven in Theorem 4.1. Our lower bound holds true independently of the depth, width, number of trainable parameters, and even the (possibly discontinuous) activation function used to define the MLP. In contrast to this lower bound, we have demonstrated that Hyperbolic Neural Networks (HNNs) can effectively represent any latent tree in a 2-dimensional hyperbolic space, with a trainable constant curvature parameter. Furthermore, we have derived upper bounds on the capacity of the HNNs implementing such an embedding and have shown that it depends at worst polynomially on the number of nodes in the graph. To the best of the authors' knowledge, this constitutes the first proof that HNNs are well-suited for representing graph structures, while also being the first evidence that MLPs are not. Thus, our results provide mathematical support for the notion that HNNs possess a superior inductive bias for representation learning in data with latent hierarchies, thereby reinforcing a widespread belief in the field of geometric deep learning.

## 7 Acknowledgment and Funding

AK acknowledges financial support from the NSERC Discovery Grant No. RGPIN-2023-04482 and their McMaster Startup Funds. RH was funded by the James Steward Research Award and by AK's McMaster Startup Funds. HSOB acknowledges financial support from the Oxford-Man Institute of Quantitative Finance for computing support. The authors would also like to thank Paul McNicholas and A. Martina Neuman for their helpful discussions.
2303.17334
GAT-COBO: Cost-Sensitive Graph Neural Network for Telecom Fraud Detection
Along with the rapid evolution of mobile communication technologies, such as 5G, there has been a drastically increase in telecom fraud, which significantly dissipates individual fortune and social wealth. In recent years, graph mining techniques are gradually becoming a mainstream solution for detecting telecom fraud. However, the graph imbalance problem, caused by the Pareto principle, brings severe challenges to graph data mining. This is a new and challenging problem, but little previous work has been noticed. In this paper, we propose a Graph ATtention network with COst-sensitive BOosting (GAT-COBO) for the graph imbalance problem. First, we design a GAT-based base classifier to learn the embeddings of all nodes in the graph. Then, we feed the embeddings into a well-designed cost-sensitive learner for imbalanced learning. Next, we update the weights according to the misclassification cost to make the model focus more on the minority class. Finally, we sum the node embeddings obtained by multiple cost-sensitive learners to obtain a comprehensive node representation, which is used for the downstream anomaly detection task. Extensive experiments on two real-world telecom fraud detection datasets demonstrate that our proposed method is effective for the graph imbalance problem, outperforming the state-of-the-art GNNs and GNN-based fraud detectors. In addition, our model is also helpful for solving the widespread over-smoothing problem in GNNs. The GAT-COBO code and datasets are available at https://github.com/xxhu94/GAT-COBO.
Xinxin Hu, Haotian Chen, Junjie Zhang, Hongchang Chen, Shuxin Liu, Xing Li, Yahui Wang, Xiangyang Xue
2023-03-29T07:02:50Z
http://arxiv.org/abs/2303.17334v1
# GAT-COBO: Cost-Sensitive Graph Neural Network for Telecom Fraud Detection

###### Abstract

Along with the rapid evolution of mobile communication technologies, such as 5G, there has been a drastic increase in telecom fraud, which significantly dissipates individual fortune and social wealth. In recent years, graph mining techniques are gradually becoming a mainstream solution for detecting telecom fraud. However, the graph imbalance problem, caused by the Pareto principle, brings severe challenges to graph data mining. This is a new and challenging problem, but little previous work has been noticed. In this paper, we propose a Graph ATtention network with COst-sensitive BOosting (GAT-COBO) for the graph imbalance problem. First, we design a GAT-based base classifier to learn the embeddings of all nodes in the graph. Then, we feed the embeddings into a well-designed cost-sensitive learner for imbalanced learning. Next, we update the weights according to the misclassification cost to make the model focus more on the minority class. Finally, we sum the node embeddings obtained by multiple cost-sensitive learners to obtain a comprehensive node representation, which is used for the downstream anomaly detection task. Extensive experiments on two real-world telecom fraud detection datasets demonstrate that our proposed method is effective for the graph imbalance problem, outperforming the state-of-the-art GNNs and GNN-based fraud detectors. In addition, our model is also helpful for solving the widespread over-smoothing problem in GNNs. The GAT-COBO code and datasets are available at [https://github.com/xxhu94/GAT-COBO](https://github.com/xxhu94/GAT-COBO).

Telecom fraud detection, Graph neural network, Boosting, Cost sensitive learning, Graph imbalance

## 1 Introduction

Telecom fraud has become increasingly rampant around the world over the past few years. In 2020, one-third of the U.S. population experienced telecom fraud, with losses of $19.7 billion. In the same year, mainland China handled 230 million fraudulent phone calls and 1.3 billion fraudulent text messages [1]. Not only is telecom fraud massive, but it is growing rapidly. Over the past six years, U.S. telecom fraud has grown at an average annual rate of 30% [2]. Compared to 2019, mainland China also saw a 10% increase in the number of telecom frauds disposed of in 2020. These scams can cause financial losses and even emotional trauma to a large number of victims [3]. As the commercialization of 5G mobile networks progresses, the number of fraudsters and victims will inevitably increase further. How to unearth telecom fraudsters has become an increasingly important research topic.

Here, we take the operator's mobile network big data platform as an example to introduce telecom fraud and its detection pipeline (see Figure 1). In an operator's mobile network infrastructure, subscribers and their devices generate a large number of network behavior records every day due to their communication behavior, many of which are generated by fraudulent users. With the help of the network data processing platform, operators can mine user Call Detail Records (CDR) to detect fraudsters, thereby assisting mobile network operation decisions.

Fig. 1: Illustration of mobile network big data platform and workflow.

Data analysis is the most crucial aspect of the whole process, and it is also full of challenges. Subscribers' communication behaviors naturally constitute graphs, and the use of graph mining techniques for data
analysis has become an important trend. In recent years, graph neural network (GNN) [4, 5, 6] has gradually become the mainstream technology for graph data mining. Current graph-based data mining tasks mainly model the relationships between nodes from the perspective of topology and attribute content [7], making nodes of the same class more closely embedded in the embedding space and dissimilar nodes further away. A typical semi-supervised node classification task is performed as follows [8]: given a large graph with a small number of node labels, a classifier is trained on those labeled nodes and used to classify other nodes during the testing process. These related works include graph convolutional networks (GCN) [9] and many of its variants proposed in recent years [10], which effectively utilize features in the spectral domain by using simplified first-order approximations. GraphSage [11] and Graph Attention Network (GAT) [12] utilize features in the spatial domain to better adapt to different graph topologies. GNNs have achieved remarkable performance in many application domains, such as text classification [13], image recognition [14], and recommender systems [15]. GNN-based graph data anomaly detection has also made great progress [16].

However, a significant challenge in graph anomaly detection is the imbalance between normal and abnormal classes. For example, only a tiny fraction of the huge amount of CDR data stored by telecom operators corresponds to fraudulent users. The same is true of fraudulent users in social networks and financial transaction networks. Existing graph anomaly detection methods usually assume that the class distribution in the input graph data is almost or completely balanced, or deliberately provide each class with a balanced labeled sample at training time. This ensures that the representation across multiple classes is balanced, thus completely avoiding the class imbalance problem. However, this artificial balance interference is obviously inconsistent with the distribution of real-world graph data, because different parts of real-world graph-based systems often evolve asymmetrically and unrestrictedly, which makes the data naturally exhibit highly skewed class distributions. If the imbalance problem is not considered when designing the GNN model, the majority class may dominate the loss function. This makes the trained GNN overclassify these majority classes and fail to accurately predict samples from the minority class, which are the real focus of our attention. When generalizing models to graphs with imbalanced class distributions, existing GNN methods tend to overfit the majority class, resulting in suboptimal embedding results for the minority class.

In the field of machine learning, in order to solve the class imbalance problem, researchers mainly adopt data-level methods, algorithm-level methods, and hybrid methods [17]. Data-level methods seek to operate on the data to make the class distribution more balanced, such as over-sampling or under-sampling [18, 19]. Algorithm-level methods typically introduce different misclassification penalties or prior probabilities for different classes [20, 21, 22, 23]. Hybrid methods [24, 25, 26] attempt to combine the above two. However, applying these methods directly to graph data mining may not yield optimal results, because the vast majority of existing work on the imbalance problem is devoted to \(i.i.d.\) data, and the relational characteristics of graph data are obviously in conflict with the \(i.i.d.\) assumption.
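As a toy illustration of how the majority class can dominate an unweighted objective, consider the following sketch; all numbers are synthetic and chosen purely for illustration, and the 19x class weight roughly mirrors a 95/5 imbalance:

```python
import torch

# 95/5 class split: a classifier that always predicts "normal" already achieves
# a tiny unweighted loss, so gradient descent has little incentive to fit fraudsters.
logits = torch.zeros(100, 2)
logits[:, 0] = 4.0  # strongly predict class 0 ("normal") for every sample
labels = torch.cat([torch.zeros(95, dtype=torch.long), torch.ones(5, dtype=torch.long)])

plain = torch.nn.functional.cross_entropy(logits, labels)
weighted = torch.nn.functional.cross_entropy(
    logits, labels, weight=torch.tensor([1.0, 19.0]))  # up-weight the minority class
print(plain.item(), weighted.item())  # ~0.22 vs ~2.02: the weighted loss exposes the 5 missed frauds
```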
In addition, GNNs also suffer from the over-smoothing problem. When the number of GNN layers is too large, the features learned by all nodes tend to be consistent, resulting in a sharp decline in the classification performance of the model, which places more stringent conditions on applying imbalance methods to graph data mining. In order to solve the above problems, we combine GAT and ensemble learning to design a cost-sensitive GNN. Specifically, we first treat each GAT as a weak classifier and use it to learn the embedding representation for each node. Then the embedding is input to the cost-sensitive learner, which calculates the classification bias. According to the obtained deviation, the misclassification cost and the updated node sampling weights are calculated, so the misclassified samples are given greater weight in the next weak classifier. Then, we use the updated weights to constrain the loss function of the next weak classifier and retrain it to obtain new embedding representations. By cascading multiple such weak classifiers and summing their embeddings, the final cost-sensitive embedding representation is obtained. Based on the above ideas, we propose a new model that performs anomaly detection by boosting GNN with cost-sensitive learning. Our contributions are as follows:

* We reveal the graph imbalance problem in telecom fraud and design a novel semi-supervised GNN framework for its detection.
* Combining Boosting and GNN, we try to embed GAT as a base classifier in the ensemble learning framework, which not only improves GNN performance but also overcomes the over-smoothing effect.
* We design a cost-sensitive learning scheme for GNN to solve the graph imbalance problem and provide a theoretical proof.
* Extensive experiments are conducted on two real-world telecom fraud datasets to demonstrate the effectiveness of the proposed method.

The rest of the paper is organized as follows. Sec. 2 reviews the related work. Sec. 3 introduces definitions and the problem statement. In Sec. 4, we give the details of GAT-COBO. In Sec. 5, we conduct experiments to evaluate the effectiveness of GAT-COBO. In Sec. 6, we conclude the paper.

## 2 Related work

### _GNN-based fraud detection_

In recent years, GNN, with its powerful graph data representation ability, has been widely used in fraud detection tasks. According to the application fields, these works can be divided into three main categories, namely, GNN-based telecom fraud detection [4, 27, 28, 29, 30, 31, 32], GNN-based social network fraud detection [5, 33, 34, 35, 36], and GNN-based financial fraud detection [37, 38, 39, 40, 41, 42, 43, 44].

(1) Telecom fraud detection. Liu et al. [28] and Ji et al. [29] used an attention-mechanism-based graph neural network and a Multi-Range Gated Graph Neural Network (MRG-GNN) for telecom fraud detection, respectively. Based on the constructed directed bipartite graph, Tseng et al. [4] learned the trust values of remote phone numbers by a weighted HITS algorithm. Zheng et al. [30] proposed a generative adversarial network (GAN) based model to calculate the probability that a bank transfer is a telecom fraud. Recently, Jiang et al. [27] detected telecom fraud by integrating the Hawkes process into an LSTM for historical impact learning. Based on the subscriber's CDR and payment record information, Chadyas et al. [45] used univariate outlier detection methods to identify fraudulent customers in mobile virtual network operators (MVNOs). Krasic et al.
[46] combined machine learning and SMOTE oversampling to solve the highly unbalanced data distribution problem in telecom fraud detection. Yang et al. [47] disclosed the different behavioral characteristics of fraudsters and non-fraudsters in mobile networks, and designed a semi-supervised detection model based on factor graphs.

(2) Social network fraud detection. Dou et al. proposed CARE-GNN [5], which augments the aggregation process of GNN with reinforcement learning to prevent fraudsters' disguise in opinion fraud detection. With heterogeneous and homogeneous graph-based neural networks, Li et al. proposed GAS [33] to capture the local context and global context information of comments. GraphConsis [34] studied the inconsistency of context, features and relationships in graph-based fraud detection. FdGars [35] employed graph convolutional networks for fraud detection in online application review systems. Liu et al. [36] proposed a novel rumor detection framework based on a structure-aware retweeting graph neural network.

(3) Financial fraud detection. Li et al. [37] proposed the Temporal Transaction Aggregation Graph Network (TTAGN) for Ethereum phishing fraud detection. Ji et al. [40] introduced structural learning into large-scale risk graphs to solve the problem of prohibited item detection in e-commerce. Ao et al. [38] proposed to detect fraudulent accounts on transaction graphs constructed from Ethereum block transactions. GEM [41] adaptively detected malicious accounts from heterogeneous account-device graphs. Wang et al. proposed Semi-GNN [42], a hierarchical attention GNN, for financial fraud detection. Unlike making changes in the GNN model, Zhao et al. [43] designed a graph anomaly loss function for training the anomaly node representations of GNNs. Besides, Zhong et al. proposed MAHINDER [44], which explores meta-paths on heterogeneous information networks of multi-view attributes for credit card fraud transaction detection. Saia et al. [48] argued for the advantages of using proactive fraud detection strategies over traditional retrospective strategies. Furthermore, a comprehensive review of graph-based fraud detection techniques was provided by Pourhabibi et al. [49]. Ma et al. [16] conducted a systematic and comprehensive review of contemporary deep learning techniques for graph anomaly detection.

### _Class imbalance learning_

Class imbalance learning is an important research direction in the fields of data mining and machine learning [50, 51, 17, 8]. In practice, classes with a large number of instances are often called majority classes, and classes with fewer instances are often called minority classes. The methods to solve the class imbalance problem can be roughly divided into three categories, namely data-level, algorithm-level and hybrid methods.

(1) Data-level methods try to rebalance the prior class distribution through a preprocessing step. There are two data-level approaches. One is to oversample the minority class, as in SMOTE [18], which generates new samples by interpolating between minority class samples and their nearest neighbors. But oversampling methods can lead to overfitting. Another approach is to undersample the majority class [19], but this may discard valuable information.

(2) Algorithm-level approaches try to modify existing algorithms to emphasize minority classes, such as designing new loss functions, or specifying cost matrices for models (also known as cost-sensitive learning [20]).
Cost-sensitive learning [21, 22] usually builds a cost matrix to assign different misclassification penalties to different classes. Khan et al. [23] proposed a method that automatically optimizes the cost matrix by backpropagation. Lin et al. [52] balanced the model by designing a re-weighted loss function (Focal loss) that increases the loss weight of minority-class samples and reduces that of majority-class samples. (3) The last category is hybrid methods, which combine the data-level and algorithm-level approaches. Ando et al. [25] proposed the first deep feature oversampling method, and Chawla et al. [26] combined boosting with the SMOTE method. The works [50, 51, 17, 8] also provide reviews of solutions to the class imbalance problem. In the field of graph learning, graph imbalance is a novel problem that only a few works have considered. For example, an early work [53] proposed a Hopfield-based cost-sensitive neural network algorithm (COSNet). PC-GNN [54] performed imbalanced graph learning by oversampling and undersampling nodes of different classes. GraphSMOTE [55] combined the SMOTE sampling algorithm with GNNs to solve the graph imbalance problem. DR-GCN [56] employed conditional adversarial training and distribution alignment to learn robust node representations of majority and minority classes. However, these efforts mainly address the imbalance problem in graphs with sampling methods, which, as mentioned above, have many limitations. There is no work yet that combines cost-sensitive learning with modern GNNs. In this work, we introduce cost-sensitive learning into graph neural networks in a principled way to efficiently address the graph imbalance problem.

## 3 Problem Definition

In this section, we introduce the definitions of a graph, the graph imbalance problem, cost-sensitive learning, and GNN-based telecom fraud detection. Table I summarizes all the basic symbols.

**Definition 3.1**.: **Graph.** In general, a graph can be defined as \(\mathcal{G}=(\mathcal{V},\mathcal{X},\mathcal{A},\mathcal{E},\mathcal{Y})\), where \(\mathcal{V}=\{v_{1},v_{2},v_{3},...v_{N}\}\) is a set of nodes. \(\mathcal{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},...\mathbf{x}_{N}\}\) is the set of node features, where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) is the feature vector of node \(v_{i}\); stacking these vectors into a matrix gives the feature matrix \(\mathbf{X}\in\mathbb{R}^{N\times d}\) of the graph \(\mathcal{G}\). \(\mathbf{A}\in\mathbb{R}^{N\times N}\) represents the adjacency matrix of \(\mathcal{G}\), where \(a_{i,j}=1\) means that there is an edge between node \(v_{i}\) and node \(v_{j}\), and \(a_{i,j}=0\) otherwise. \(\mathcal{E}=\{e_{1},e_{2},e_{3},...e_{M}\}\) represents the set of edges, where \(e_{j}=(v_{s_{j}},v_{r_{j}})\in\mathcal{E}\) is an edge between nodes \(v_{s_{j}},v_{r_{j}}\in\mathcal{V}\). \(\mathcal{Y}=\{y_{1},y_{2},...y_{N}\}\) is the set of labels corresponding to all nodes in the set \(\mathcal{V}\). For convenience of representation, we encode the label \(y_{i}\) as a one-hot vector \(\mathbf{y}_{i}\).

**Definition 3.2**.: **Graph imbalance problem.** Given the labels \(\mathcal{Y}\) of a set of nodes in a graph \(\mathcal{G}=(\mathcal{V},\mathcal{X},\mathcal{A},\mathcal{E},\mathcal{Y})\), there are \(K\) classes in \(\mathcal{Y}\), namely \(C=\{C_{1},...,C_{K}\}\), where \(|C_{i}|\) is the size of the \(i\)-th class, that is, the number of samples belonging to class \(i\).
We use

\[IR=\frac{\min_{i}(|C_{i}|)}{\max_{i}(|C_{i}|)} \tag{1}\]

to measure the class imbalance ratio, so \(IR\) lies in the range \([0,1]\); the smaller the \(IR\), the more severe the imbalance [51]. In particular, in a binary classification problem with classes \(C_{1}\) and \(C_{2}\), where \(C_{1}\) is the minority class and \(C_{2}\) the majority class, the imbalance ratio is defined as \(IR=|C_{1}|/|C_{2}|\).

**Definition 3.3**.: **Cost-sensitive learning.** Cost-sensitive learning mainly considers how to train a classifier when different classification errors lead to different penalties. In scenarios such as fraud detection, medical diagnosis, and network security, misclassifying the minority class often leads to large losses. Different from traditional classification methods that minimize the misclassification rate, cost-sensitive learning introduces misclassification costs into classification decisions to reduce the overall cost of misclassification. Without loss of generality, given a set of labels \(C\) with \(K\) classes in total, cost-sensitive learning measures the misclassification loss of a machine learning algorithm by defining a misclassification cost. Specifically, the misclassification cost matrix can be defined as:

\[\mathbf{C}=\left[\begin{array}{cccc}C_{11}&C_{12}&\cdots&C_{1K}\\ C_{21}&C_{22}&\cdots&C_{2K}\\ \vdots&\vdots&\ddots&\vdots\\ C_{K1}&C_{K2}&\cdots&C_{KK}\end{array}\right] \tag{2}\]

where \(C_{ij}\in[0,+\infty)\) represents the misclassification cost of classifying a sample of class \(i\) as class \(j\); the larger the value, the greater the loss caused by the misclassification. The cost matrix \(\mathbf{C}\) varies with the specific application problem; it can be set in advance by domain experts based on experience or determined by parameter learning methods. At the same time, \(\mathbf{C}\) should satisfy the constraint that the values of one row cannot all be greater than the values of another row: if \(C_{mj}>C_{nj}\) held for all columns \(j\) of two rows \(m\) and \(n\) (\(1\leq m,n\leq K\)), the prediction for every test sample would always be class \(n\) under the minimum-expected-misclassification-cost criterion.

**Definition 3.4**.: **Graph-based fraud detection.** Taking the subscribers in the operator's network as nodes, subscriber behavior as node features, and communication between subscribers as edges, we can build a graph and use a GNN for fraudulent subscriber detection. Specifically, for a given subscriber behavior graph \(\mathcal{G}=(\mathcal{V},\mathcal{X},\mathcal{A},\mathcal{E},\mathcal{Y})\), the inter-layer transfer formula of the GNN-based telecom fraud detection model can be formally described as:

\[\mathbf{h}_{v}^{(l)}=\sigma\left(\mathbf{h}_{v}^{(l-1)}\oplus\mathrm{Agg}\left(\left\{\mathbf{h}_{v^{\prime}}^{(l-1)}:(v,v^{\prime})\in\mathcal{E}\right\}\right)\right) \tag{3}\]

where \(\mathbf{h}_{v}^{(l)}\) represents the embedding of node \(v\) at layer \(l\), \(\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\), and \(v^{\prime}\) is a neighbor of node \(v\).
\(\mathrm{Agg}()\) is a message aggregation function for neighbor feature aggregation, such as mean aggregation, pooling aggregation, or attention aggregation, and \(\oplus\) represents a feature concatenation or summation operation.

\begin{table}
\begin{tabular}{c l}
\hline
Symbol & Definition \\
\hline
\(\mathbf{A}\) & Adjacency matrix \\
\(\mathbf{X}^{(l)}\) & Input feature matrix of the \(l\)-th layer, \(\mathbf{X}^{(0)}=\mathbf{X}\) \\
\(\alpha_{ij}\) & Attention coefficient between nodes \(i\) and \(j\) \\
\(\mathbf{\Omega}^{(l)}\) & The attention matrix of layer \(l\) \\
\(\mathbf{W}\) & Weight matrix of the neural network \\
\(\mathbf{h}_{v}\) & The embedding of node \(v\) in the GAT classifier \\
\(\mathbf{z}_{v}\) & Final embedding of node \(v\) in the GAT classifier \\
\(p_{k}^{(l)}(v)\) & Probability of \(v\) being classified as class \(k\) in the \(l\)-th GAT classifier \\
\(h_{k}^{(l)}(v)\) & Class \(k\) probability of node \(v\) in the \(l\)-th cost-sensitive learner \\
\(w_{v}^{(l)}\) & The sampling weight of node \(v\) at layer \(l\) \\
\(\mathbf{C}\) & Cost-sensitive matrix \\
\(C_{v}^{(l)}\) & The misclassification cost of node \(v\) at layer \(l\) \\
\(C_{ij}\) & Cost of classifying a node of class \(i\) as class \(j\) \\
\(D^{(l)}(x_{i})\) & The distribution of node \(x_{i}\) in the \(l\)-th layer \\
\hline
\end{tabular}
\end{table}
TABLE I: Glossary of Notations

## 4 The Proposed Method

### _Overview_

Our proposed model consists of two main parts: a GAT-based weak classifier and a boosting-based cost-sensitive learner. We illustrate the pipeline of the proposed method in Figure 2. To begin with, the node features and adjacency matrix of the graph are fed into a GAT-based weak classifier, which is trained with signals from the labels to obtain the node embeddings (Section 4.2). Then, we feed the embeddings into the cost-sensitive learner to compute the misclassification cost of this weak classifier and update the sampling weights of the corresponding nodes accordingly. After that, the updated sampling weights are used to guide the training of the next GAT-based weak classifier. Finally, the embeddings of the multiple concatenated weak classifiers are summed to obtain the final embedding of every node, which can be used for node classification in downstream tasks (Section 4.3). The entire optimization procedure and algorithm are presented in Section 4.4.

### _GAT-based weak classifier_

To improve the performance of GNNs, previous studies have tended to explore the information transformation and aggregation schemes of GNNs [57], but the resulting models struggle on graph-imbalanced data. Here, we change the perspective and use the classic GAT as the base weak classifier of the proposed method. The reason for choosing it is that GAT, as a classic GNN model, has strong expressive ability and can learn different attention weights for different neighbors, which is critical for the imbalance problem. For example, a GNN that uses the \(mean()\) function as its aggregator can be regarded as a simplification of GAT (equivalent to setting the attention coefficients of all connected nodes to 1). There are two modules in each GAT-based weak classifier, namely the graph attention module and the feature update module. The design of these two modules is elaborated below.
#### 4.2.1 Graph attention module

Given an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{X},\mathcal{A},\mathcal{E},\mathcal{Y})\), according to the vanilla GAT in [12], the attention coefficient between adjacent nodes can be expressed as follows:

\[\alpha_{ij}=\frac{\exp\left(\text{LeakyReLU}\left(\mathbf{a}^{T}\left[\mathbf{W}\mathbf{h}_{i}\|\mathbf{W}\mathbf{h}_{j}\right]\right)\right)}{\sum_{k\in\mathcal{N}_{i}}\exp\left(\text{LeakyReLU}\left(\mathbf{a}^{T}\left[\mathbf{W}\mathbf{h}_{i}\|\mathbf{W}\mathbf{h}_{k}\right]\right)\right)} \tag{4}\]

where \(\mathbf{a}\) represents the attention function to be learned, implemented by a single-layer MLP, and \(\mathbf{W}\in\mathbb{R}^{hid\times d}\) represents the linear transformation of the node embedding. \(hid\) is the number of neurons, set manually as a model hyper-parameter, and \(d\) is the dimension of \(\mathbf{h}\). \(\|\) represents feature concatenation, and \(\text{LeakyReLU}()\) is the activation function. The new embedding \(\mathbf{h}_{i}^{\prime}\) of node \(i\) is then calculated from the attention coefficients and the old embedding \(\mathbf{h}_{i}\) as follows:

\[\mathbf{h}_{i}^{\prime}=\sigma\left(\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}\mathbf{W}\mathbf{h}_{j}\right) \tag{5}\]

where \(\mathcal{N}_{i}\) is the set of neighbors of node \(i\) and \(\sigma\) is an activation function. To obtain a richer representation, a multi-head attention mechanism can be used as in [12]:

\[\mathbf{h}_{i}^{\prime}=\|_{q=1}^{Q}\sigma\left(\sum_{j\in\mathcal{N}_{i}}\alpha_{ij}^{q}\mathbf{W}^{q}\mathbf{h}_{j}\right) \tag{6}\]

where \(Q\) represents the number of heads and \(\alpha_{ij}^{q}\) the attention coefficient of the \(q\)-th head. To train the GAT with the signals from the labels, we minimize the cross-entropy loss function:

\[\mathcal{L}_{GAT}=-\sum_{v\in\mathcal{V}}\left(\mathbf{y}_{v}\log\mathbf{z}_{v}^{(l)}\right) \tag{7}\]

where \(\mathbf{z}_{v}^{(l)}=(\mathbf{h}_{v}^{\prime})_{final}\) is the final embedding of node \(v\) in the \(l\)-th weak classifier. By supervising the training process with the above loss function, the optimal attention function is learned, and the attention coefficients of all nodes form the attention matrix

\[\mathbf{\Omega}=\left[\begin{array}{cccc}\alpha_{11}&\alpha_{12}&\cdots&\alpha_{1N}\\ \alpha_{21}&\alpha_{22}&\cdots&\alpha_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ \alpha_{N1}&\alpha_{N2}&\cdots&\alpha_{NN}\end{array}\right] \tag{8}\]

where \(\alpha_{ij}\) is calculated through Eq. (4) if \(v_{i}\) and \(v_{j}\) are neighbors, and \(\alpha_{ij}=0\) otherwise.
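To make the attention module concrete, the following is a minimal PyTorch sketch of a single-head version of Eqs. (4)-(5) on a dense adjacency matrix. It is our own illustration under stated assumptions, not the released implementation: the class name `SimpleGATLayer` is ours, \(\sigma\) is taken to be ReLU, and a self-loop is added so that every row of the adjacency matrix has at least one neighbor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    """Single-head graph attention layer sketching Eqs. (4)-(5)."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, hid_dim, bias=False)  # linear map W in Eq. (4)
        self.a = nn.Linear(2 * hid_dim, 1, bias=False)   # attention function a (single-layer MLP)

    def forward(self, h: torch.Tensor, adj: torch.Tensor):
        wh = self.W(h)                                   # (N, hid)
        n = wh.size(0)
        # All pairwise concatenations [Wh_i || Wh_j] -> (N, N, 2*hid)
        pairs = torch.cat([wh.unsqueeze(1).expand(n, n, -1),
                           wh.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))      # unnormalized scores (N, N)
        e = e.masked_fill(adj == 0, float('-inf'))       # keep only neighbors in N_i
        alpha = torch.softmax(e, dim=1)                  # Eq. (4): normalize over neighbors
        return torch.relu(alpha @ wh), alpha             # Eq. (5); alpha rows form Omega of Eq. (8)

# Toy usage: 5 nodes with 8-dimensional features.
h = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float().fill_diagonal_(1)
h_new, attn = SimpleGATLayer(8, 16)(h, adj)
```

Running \(Q\) such heads in parallel and concatenating their outputs recovers the multi-head form of Eq. (6).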
#### 4.2.2 Feature update module

The Boosting algorithm requires weak classifiers to be concatenated [58], but concatenating multiple GAT-based classifiers is a novel challenge. An intuitive idea is to use the final node embeddings from the previous GAT weak classifier as the initial feature input of the next one. Unfortunately, this design is approximately equivalent to increasing the number of GAT layers, which not only fails to improve the model performance, but also leads to the over-smoothing effect of the GNN model. As pointed out in [59], after stacking multiple layers in a GNN model, the nodes aggregate too many hops of neighborhood information, and the features of all nodes converge; the unique feature information of each node is lost and only the structural information remains, leading to an over-smoothing effect.

To solve this problem, we include the original feature information \(\mathbf{X}\) in the input of each base classifier along with the attention matrix \(\mathbf{\Omega}\) learned in the previous base classifier, which avoids the loss of node feature information caused by overly deep GNN layers and also exploits the attention matrix learned in the previous base classifier.

Fig. 2: The overview of our proposed model GAT-COBO.

Based on the above considerations, we design a feature update module for each GAT weak classifier. This module feeds the attention information learned in the \((l-1)\)-th base classifier into the \(l\)-th one. In this method, the input feature of the \(l\)-th weak classifier becomes:

\[\mathbf{X}^{(l)}=\left(\beta\mathbf{\Omega}^{(l-1)}\right)\cdot\left(\gamma\mathbf{X}^{(l-1)}\right) \tag{9}\]

where \(\beta\) is a parameter used to constrain the attention weight and \(\gamma\) is a parameter used to constrain the input feature. Updating features in this way is beneficial, because effective information transfer between classifiers is easily achieved by passing the attention matrix of the previous weak classifier to the next one. The next weak classifier can be fine-tuned on the basis of the previous one, which ensures that previously correct classifications are retained while more attention is paid to the nodes misclassified in the last iteration. To facilitate Adaboost ensemble learning, we input the node embeddings learned by each weak classifier into a mixed linear layer, followed by a softmax operation that yields the probability of each node being assigned to each class. In this method, the probability that node \(v\) of the \(l\)-th weak classifier belongs to the \(k\)-th class can be formally described as:

\[p_{k}^{(l)}(v)=\mathrm{Softmax}\left[\mathrm{Mixliner}\left(\mathbf{x}_{v}^{(l)}\right)\right] \tag{10}\]

where \(\mathbf{x}_{v}^{(l)}\) is the updated feature of node \(v\) from the updated feature matrix \(\mathbf{X}^{(l)}\) defined in Eq. (9), and \(\mathrm{Mixliner}(\cdot)\) is a mixed linear layer built from a \(\mathrm{ReLU}(\cdot)\) activation and a learnable weight matrix \(\mathbf{W}\); this \(\mathbf{W}\) is an independent weight parameter that is not shared with the weights in Eqs. (4), (5) and (6). The probabilities \(p_{k}^{(l)}(v)\) of each node \(v\) over the \(K\) classes constitute a probability vector \(\mathbf{p}^{(l)}(v)\in\mathbb{R}^{K}\). We use the cross-entropy loss function to train the above process:

\[\mathcal{L}_{MLP}=-\sum_{v\in V}\left(\mathbf{y}_{v}\log\mathbf{p}^{(l)}(v)\right) \tag{11}\]
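As an illustration only, the feature-update step of Eq. (9) and the mixed linear head of Eq. (10) could be sketched as below. The \(\beta,\gamma\) defaults mirror the attention and feature weights reported for the Sichuan dataset in Sec. 5.1.3, and the exact composition of the `Mixliner` layer (one linear map after a ReLU) is our assumption.

```python
import torch
import torch.nn as nn

def feature_update(attn: torch.Tensor, x: torch.Tensor,
                   beta: float = 0.1, gamma: float = 0.1) -> torch.Tensor:
    """Eq. (9): X^(l) = (beta * Omega^(l-1)) @ (gamma * X^(l-1)).

    attn: attention matrix Omega^(l-1) of shape (N, N)
    x:    previous input features X^(l-1) of shape (N, d)
    """
    return (beta * attn) @ (gamma * x)

class MixLinear(nn.Module):
    """Mixed linear head of Eq. (10): per-node class probabilities."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, num_classes)  # weight W, not shared with Eqs. (4)-(6)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.lin(torch.relu(x)), dim=-1)  # p_k^(l)(v)
```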
### _Boosting-based cost sensitive learner_

Boosting is a family of simple, effective and well-explained ensemble learning methods, the best known of which is Adaboost [58]. Researchers have made many improvements to Adaboost since it was proposed; one of them is the SAMME.R algorithm [60], a variant of Adaboost with fast convergence and high classification accuracy. In addition, to make Adaboost adaptable to the imbalance problem, Zhang et al. [61] proposed a cost-sensitive Adaboost. However, how to combine cost-sensitive learning with the SAMME.R algorithm, especially with GNN-based weak classifiers, is a new problem that has hardly been explored before. In this paper, we design a cost-sensitive algorithm based on SAMME.R to solve the graph imbalance problem in GNNs, which we call the Boosting-based cost-sensitive learner. Its structure is shown in Figure 2. We now elaborate on the design details.

#### 4.3.1 Boosting GNN with cost

The SAMME.R algorithm requires the output probability vector of each weak classifier to satisfy certain constraints in order for the final ensemble result to be optimal. We therefore transform the node class probability vectors obtained in Eq. (10) according to the algorithm in [60]. Given the output \(p_{k}^{(l)}(v)\) of the \(l\)-th GAT-based weak classifier, we can calculate the node class probability of the \(l\)-th cost-sensitive learner \(h^{(l)}(v)\):

\[h_{k}^{(l)}(v)=(K-1)\Big{(}\log p_{k}^{(l)}(v)-\frac{1}{K}\sum_{k^{\prime}=1}^{K}\log p_{k^{\prime}}^{(l)}(v)\Big{)},\quad k=1,\ldots,K \tag{12}\]

Based on this result, the predicted label of node \(v\) in the \(l\)-th weak classifier is \(\arg\max_{k}(h_{k}^{(l)}(v))\). Subsequently, we can calculate the misclassification cost of node \(v\) as:

\[C_{v}^{(l)}=\mathbf{C}[y_{v},\arg\max_{k}(h_{k}^{(l)}(v))] \tag{13}\]

where \(\mathbf{C}\) is the cost matrix defined in Sec. 3. The core idea of the Boosting algorithm is to combine multiple weak classifiers into one strong classifier, and one of the most important steps in this design is the weight update between consecutive weak classifiers. In order for the classifier to continuously improve its classification results during training, the boosting algorithm assigns a weight \(w_{v}\) to each node \(v\). If node \(v\) is misclassified by the previous base classifier \(h^{(l)}\), its weight \(w_{v}\) is increased for the next base classifier \(h^{(l+1)}\), so that node \(v\) receives more attention in \(h^{(l+1)}\) and its probability of being classified correctly increases; otherwise, \(w_{v}\) is adjusted downward. The initial value of \(w\) is \(1/N\). Also considering the class imbalance problem in the graph, we introduce cost-sensitive factors here: for the classification result of \(v\) in weak classifier \(l\), the higher the misclassification cost, the higher the weight of the node in the next weak classifier. Thus, the node weight update can be formally described as follows:

\[w_{v}^{(l+1)}=w_{v}^{(l)}\cdot\exp\left(-\frac{K-1}{K}\cdot C_{v}^{(l)}\cdot\mathbf{y}_{v}\cdot\log\mathbf{p}^{(l)}(v)\right),\quad v\in\mathcal{V} \tag{14}\]

where \(w_{v}^{(l)}\) represents the weight of node \(v\) at classifier \(l\) and \(C_{v}^{(l)}\) represents the misclassification cost of \(v\) calculated through Eq. (13).

#### 4.3.2 Cost matrix calculation

While we gave the form of the cost matrix in Section 3, we did not specify how its values are calculated. The determination of the cost matrix is crucial for cost-sensitive learning. In practical problems, the cost matrix can be specified manually based on the actual loss caused by misclassified samples, but its calculation is still very challenging in many cases. To make the calculation intuitive and ensure the reasonableness of the cost matrix, we describe three general cost matrix calculation methods based on the sample imbalance rate \(IR\), namely _Uniform_, _Inverse_ and _Log1p_.

**Uniform:** Set the misclassification cost of every class to the same value, 1. Such a cost matrix does not treat different classes differently. As a variant of the proposed method, it can be used when the classes are balanced, and it also serves as a reference for the performance of the proposed model when classes are unbalanced.
**Inverse:** Set the misclassification cost to the inverse of the sample class ratio. The cost \(C_{ij}\) of predicting a sample of actual class \(i\) as class \(j\) can be expressed as:

\[C_{ij}=\frac{\left|C_{j}\right|}{\left|C_{i}\right|} \tag{15}\]

where \(C_{j}\) represents the predicted class, \(C_{i}\) the actual class, and \(\left|C_{i}\right|\) the number of samples of class \(i\). In this way, the cost of predicting the minority class as the majority class is greater than 1, and the more unbalanced the sample, the higher the cost. However, the _Inverse_ variant has an obvious drawback: when the samples are highly unbalanced, \(C_{ij}\rightarrow+\infty\) at a rate proportional to the sample imbalance ratio. Under such circumstances, the model will predict all majority-class samples as the minority class, resulting in low scores on metrics like Precision and F1. To reduce the growth rate of \(C_{ij}\), we propose another scheme.

**Log1p:** Apply the \(log1p()\) operation to the sample class ratio \(C_{ij}\) of the _Inverse_ variant, which compresses the skewed cost values of the cost matrix into a moderate range:

\[C_{ij}=\log 1p\left(\frac{\left|C_{j}\right|}{\left|C_{i}\right|}\right)=\log\left(\frac{\left|C_{j}\right|}{\left|C_{i}\right|}+1\right) \tag{16}\]

After this operation, the cost matrix \(\mathbf{C}\) satisfies the constraint in Section 3, and extreme cost values are unlikely to occur even when the class ratio is severely unbalanced. By choosing any one of the three cost calculation methods above, we can update the node weights in the weak classifier with the help of Eq. (14). Combining the weak classifiers \(h^{(l)}\) of all layers, the ensemble classification result is calculated as follows:

\[H(v)=\arg\max_{k}\sum_{l=1}^{L}h_{k}^{(l)}(v) \tag{17}\]

where \(h_{k}^{(l)}(v)\) is obtained through Eq. (12).
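Before turning to the formal analysis, a compact NumPy sketch of the cost-sensitive learner of Eqs. (12)-(17), including the three cost-matrix schemes, may help fix ideas. This is our paraphrase of the equations rather than the released code; in particular, we read \(\mathbf{y}_{v}\) in Eq. (14) as the one-hot label, while the proof in Sec. 4.3.3 uses the SAMME.R recoding of Eq. (20).

```python
import numpy as np

def cost_matrix(class_counts, scheme="log1p"):
    """Cost matrices of Sec. 4.3.2: 'uniform', 'inverse' (Eq. 15) or 'log1p' (Eq. 16)."""
    c = np.asarray(class_counts, dtype=float)
    ratio = c[None, :] / c[:, None]                  # C_ij = |C_j| / |C_i|
    if scheme == "uniform":
        return np.ones_like(ratio)
    return ratio if scheme == "inverse" else np.log1p(ratio)

def samme_r_h(p):
    """Eq. (12): transform class probabilities p of shape (N, K) into h of shape (N, K)."""
    K = p.shape[1]
    logp = np.log(np.clip(p, 1e-12, None))
    return (K - 1) * (logp - logp.mean(axis=1, keepdims=True))

def update_weights(w, p, y_onehot, y_true, C):
    """Eqs. (13)-(14): per-node misclassification cost and sampling-weight update."""
    K = p.shape[1]
    pred = samme_r_h(p).argmax(axis=1)               # predicted labels of this weak classifier
    cost = C[y_true, pred]                           # Eq. (13): C[y_v, argmax_k h_k]
    inner = (y_onehot * np.log(np.clip(p, 1e-12, None))).sum(axis=1)  # y_v . log p(v)
    w_new = w * np.exp(-(K - 1) / K * cost * inner)  # Eq. (14)
    return w_new / w_new.sum()                       # re-normalize to a distribution

def ensemble_predict(h_list):
    """Eq. (17): H(v) = argmax_k of the sum of h^(l) over all weak classifiers."""
    return np.sum(h_list, axis=0).argmax(axis=1)
```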
#### 4.3.3 Theoretical proof

For Boosting algorithms, the choice of the weight update parameters is crucial for converting a weak learning algorithm into a strong one. When the cost term is introduced into the weight update equation of the SAMME.R algorithm, the updated data distribution is affected by the cost term \(C_{ij}\); if the weight update parameters are not re-derived with the cost term taken into account, the boosting efficiency is not guaranteed. To justify the above scheme, we give a proof below. Its core idea is that, by minimizing the overall training error of the combined classifier, we generalize the weight update parameters of the proposed algorithm and derive an upper bound on the cumulative misclassification cost on the training set.

**Theorem 1**.: _The following upper bound on the cumulative training misclassification cost holds, where \(\mathbb{I}(\pi)\) returns 1 if the predicate \(\pi\) is true and 0 otherwise:_

\[\sum_{i}C_{i}\,\mathbb{I}\left(H\left(x_{i}\right)\neq y_{i}\right)\leqslant nK^{L}d\cdot\prod_{l=1}^{L}Z_{l}\,,\qquad\text{where}\quad d=\sum_{i}\frac{D^{(L+1)}(x_{i})}{C_{i}^{L-1}}\]

_and \(Z_{l}\) is a normalization factor chosen so that \(D^{(l+1)}\) is a distribution._

Proof.: From Eq. (17) we know that \(H(v)=\arg\max_{k}\sum_{l=1}^{L}h_{k}^{(l)}(v)\). Combining this with Eq. (12), we let

\[f_{k}(x_{i})=\sum_{l=1}^{L}h_{k}^{(l)}(x_{i})=\sum_{l=1}^{L}(K-1)\left(\log p_{k}^{(l)}\left(x_{i}\right)-\frac{1}{K}\sum_{k^{\prime}}\log p_{k^{\prime}}^{(l)}\left(x_{i}\right)\right) \tag{18}\]

and collect these components into the \(K\)-dimensional vector

\[\mathbf{f}\left(x_{i}\right)=\left(f_{1}(x_{i}),f_{2}(x_{i}),\ldots,f_{K}(x_{i})\right)^{T}. \tag{19}\]

Similar to vanilla SAMME.R, we recode the label of each node with a \(K\)-dimensional vector \(\mathbf{y}=(y_{1},\ldots,y_{K})\), where all entries equal \(-\frac{1}{K-1}\) except for a 1 in position \(k\) if the true class is \(c=k\), i.e.,

\[y_{k}=\left\{\begin{array}{ll}1,&\text{if }c=k\\ -\frac{1}{K-1},&\text{if }c\neq k.\end{array}\right. \tag{20}\]

Therefore, combining Eqs. (18)-(20), the inner product of \(\mathbf{y}_{i}\) and \(\mathbf{f}\left(x_{i}\right)\) satisfies

\[\begin{split}-\mathbf{y}_{i}\cdot\mathbf{f}\left(x_{i}\right)&=\frac{1}{K-1}\sum_{k\neq c}f_{k}(x_{i})-f_{c}(x_{i})\\ &=\sum_{l=1}^{L}\sum_{k\neq c}\left(\log p_{k}^{(l)}\left(x_{i}\right)-\frac{1}{K}\sum_{k^{\prime}}\log p_{k^{\prime}}^{(l)}\left(x_{i}\right)\right)-\sum_{l=1}^{L}(K-1)\left(\log p_{c}^{(l)}\left(x_{i}\right)-\frac{1}{K}\sum_{k^{\prime}}\log p_{k^{\prime}}^{(l)}\left(x_{i}\right)\right)\\ &=\sum_{l=1}^{L}\left(\sum_{k\neq c}\log p_{k}^{(l)}\left(x_{i}\right)-(K-1)\log p_{c}^{(l)}\left(x_{i}\right)\right)\\ &=\sum_{l=1}^{L}\log\frac{\prod_{k\neq c}p_{k}^{(l)}\left(x_{i}\right)}{\left(p_{c}^{(l)}\left(x_{i}\right)\right)^{K-1}}\,,\end{split} \tag{21}\]

where in the third line the two terms \(\frac{K-1}{K}\sum_{k^{\prime}}\log p_{k^{\prime}}^{(l)}(x_{i})\) cancel. Taking the exponential of both sides of Eq. (21), we obtain

\[e^{-\mathbf{y}_{i}\cdot\mathbf{f}\left(x_{i}\right)}=\prod_{l=1}^{L}\frac{\prod_{k\neq c}p_{k}^{(l)}\left(x_{i}\right)}{\left(p_{c}^{(l)}\left(x_{i}\right)\right)^{K-1}}\,. \tag{22}\]

Unravelling the weight update rule of Eq.
(14), using the iterative method and normalizing with the normalization factor \(Z_{l}\), we obtain

\[D^{(L+1)}\left(x_{i}\right)=\frac{1}{n}\cdot\left(\frac{C_{i}}{K}\right)^{L}\cdot\frac{\prod_{l=1}^{L}\dfrac{\prod_{k\neq c}p_{k}^{(l)}\left(x_{i}\right)}{\left(p_{c}^{(l)}\left(x_{i}\right)\right)^{K-1}}}{\prod_{l=1}^{L}Z_{l}}\,. \tag{23}\]

Rearranging the above equation, we obtain

\[\prod_{l=1}^{L}\frac{\prod_{k\neq c}p_{k}^{(l)}\left(x_{i}\right)}{\left(p_{c}^{(l)}\left(x_{i}\right)\right)^{K-1}}=n\cdot\left(\frac{K}{C_{i}}\right)^{L}\cdot D^{(L+1)}\left(x_{i}\right)\prod_{l=1}^{L}Z_{l}\,. \tag{24}\]

Furthermore, when \(H\left(x_{i}\right)\neq y_{i}\) we have \(\mathbf{y}_{i}\cdot\mathbf{f}(x_{i})<0\) and therefore \(e^{-\mathbf{y}_{i}\cdot\mathbf{f}\left(x_{i}\right)}\geq 1\); when \(H\left(x_{i}\right)=y_{i}\) we have \(\mathbf{y}_{i}\cdot\mathbf{f}(x_{i})>0\) and therefore \(e^{-\mathbf{y}_{i}\cdot\mathbf{f}\left(x_{i}\right)}>0\). In summary, the following inequality always holds:

\[\mathbb{I}\left(H\left(x_{i}\right)\neq y_{i}\right)\leq e^{-\mathbf{y}_{i}\cdot\mathbf{f}\left(x_{i}\right)}\,. \tag{25}\]

Bringing Eq. (22) into Eq. (25) and multiplying both sides by \(C_{i}\), we get the following inequality:

\[C_{i}\,\mathbb{I}\left(H\left(x_{i}\right)\neq y_{i}\right)\leq C_{i}\prod_{l=1}^{L}\frac{\prod_{k\neq c}p_{k}^{(l)}\left(x_{i}\right)}{\left(p_{c}^{(l)}\left(x_{i}\right)\right)^{K-1}}\,. \tag{26}\]

Combining Eq. (22) with Eq. (24) and summing over all nodes, we get

\[\sum_{i}C_{i}e^{-\mathbf{y}_{i}\cdot\mathbf{f}\left(x_{i}\right)}=\sum_{i}C_{i}\prod_{l=1}^{L}\frac{\prod_{k\neq c}p_{k}^{(l)}\left(x_{i}\right)}{\left(p_{c}^{(l)}\left(x_{i}\right)\right)^{K-1}}=\sum_{i}n\cdot\frac{K^{L}}{C_{i}^{L-1}}\cdot D^{(L+1)}\left(x_{i}\right)\cdot\prod_{l=1}^{L}Z_{l}=nK^{L}d\cdot\prod_{l=1}^{L}Z_{l}\,. \tag{27}\]

Together with Eq. (26), this yields

\[\sum_{i}C_{i}\,\mathbb{I}\left(H\left(x_{i}\right)\neq y_{i}\right)\leq nK^{L}d\cdot\prod_{l=1}^{L}Z_{l}\]

with \(d=\sum_{i}\frac{D^{(L+1)}\left(x_{i}\right)}{C_{i}^{L-1}}\), which completes the proof.

### _Proposed GAT-COBO_

Given an imbalanced fraud detection graph \(\mathcal{G}\) and a set of training nodes \(\mathcal{V}_{train}\), we train the weak classifiers to minimize the loss:

\[\mathcal{L}_{GAT-COBO}=w_{v}\left(\mathcal{L}_{GAT}+\lambda_{1}\mathcal{L}_{MLP}\right)+\lambda_{2}\|\theta\|_{2} \tag{28}\]

where \(\lambda_{1}\) and \(\lambda_{2}\) are weight parameters, \(\|\theta\|_{2}\) is the \(L_{2}\)-norm of all model parameters, and \(w_{v}\) is the sampling weight defined in Eq. (14).
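A minimal sketch of Eq. (28) might look as follows; the default \(\lambda_{1}=0.5\) mirrors the attention loss weight used for the Sichuan dataset in Sec. 5.1.3, while the \(\lambda_{2}\) default and the use of log-probabilities as inputs are our assumptions.

```python
import torch

def gat_cobo_loss(logp_gat, logp_mlp, y, w, params, lam1=0.5, lam2=1e-4):
    """Eq. (28): node-weighted sum of the two cross-entropy losses plus an L2 term.

    logp_gat, logp_mlp: log-probabilities (N, K) entering Eq. (7) / Eq. (11)
    y: one-hot labels (N, K); w: sampling weights (N,) from Eq. (14), treated as constants
    """
    ce_gat = -(y * logp_gat).sum(dim=1)                   # per-node L_GAT
    ce_mlp = -(y * logp_mlp).sum(dim=1)                   # per-node L_MLP
    l2 = torch.sqrt(sum(p.pow(2).sum() for p in params))  # ||theta||_2
    return (w * (ce_gat + lam1 * ce_mlp)).sum() + lam2 * l2
```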
The complete training algorithm is summarized in Algorithm 1. First, we feed the node features and adjacency matrix to the GAT-based weak classifier to obtain the learned attention matrix. Then the attention matrix and node features are input into the MLP for feature updating, and after a softmax transformation the node class probability \(p_{k}^{(l)}(v)\) is obtained. We input it into the Boosting-based cost-sensitive learner and calculate the classification result \(h_{v}\), from which the misclassification cost at this layer is obtained. The calculated cost is then used to compute the updated sampling weights. We use these weights to constrain the classification loss of the next GAT-based weak classifier, so that the nodes misclassified by the previous weak classifier have a higher probability of being correctly classified by the next one. Finally, we sum over all layers to get the final cost-sensitive classification result \(H(v)\).

```
Algorithm 1: Training of GAT-COBO
Input:  An undirected graph with node features and labels: G = (V, X, A, E, Y);
        the number of node classes K; number of layers L and epochs E.
Output: Classification result H(v), v in V_train.
// Initialization
w_v^(1) = 1/N for all v in V_train;  X^(0) = X;
// Train GAT-COBO
for l = 1, 2, ..., L do
    for e = 1, 2, ..., E do
        // Train weak classifier
        Forward propagation h'_v <- Eq. (5) or Eq. (6), for all v in V_train;
        Calculate GAT loss L_GAT <- Eq. (7);
        Calculate attention matrix Omega^(l) <- Eq. (8);
        // Feature update
        Update the input features of the next weak classifier X^(l) <- Eq. (9);
        Calculate the probability vector p_k^(l)(v) <- Eq. (10), for all v in V_train;
        Calculate feature update loss L_MLP <- Eq. (11);
        Calculate the overall loss of the model L_GAT-COBO <- Eq. (28);
    // Cost-sensitive learner
    Prepare cost matrix C <- Eq. (15) or Eq. (16);
    Output of weak classifier h_k^(l)(v) <- Eq. (12), for all k in {1, ..., K}, v in V_train;
    Calculate classification cost C_v^(l) <- Eq. (13);
    Update node weight w_v^(l+1) <- Eq. (14);
    Re-normalize w_v^(l+1);
Ensemble classification result H(v) <- Eq. (17);
```

## 5 Experiments

In this section, we conduct experiments on two real-world telecom fraud datasets to answer the following research questions:

* **RQ1**: Does GAT-COBO outperform the state-of-the-art methods for graph-based anomaly detection?
* **RQ2**: How does GAT-COBO perform with respect to the graph imbalance problem?
* **RQ3**: What is the performance with respect to the GNN over-smoothing problem?
* **RQ4**: What is the hyperparameter sensitivity and its impact on model design?
* **RQ5**: What is the computational complexity of the proposed model?

### _Experimental setup_

#### 5.1.1 Dataset

We use two real-world telecom fraud detection datasets to validate the proposed method. The first one\({}^{1}\) was released in the 2020 Sichuan Big Data Competition. It was collected and anonymized by China Mobile Sichuan Company and covers the CDR data of 6,106 users in 23 cities of Sichuan Province, with a time span of August 2019 to March 2020.
The content includes call records (call object, duration, type, time, location), text message records (SMS object, text message type, communication time), Internet access records (traffic consumption, APP name), expense records (monthly consumption amount, attribution), etc. We performed raw data processing in the same way as [62]. All samples are divided into two categories, namely fraudsters and benign users. In this dataset, the imbalance rate is \(IR=1962/4144=0.4735\).

Footnote 1: [https://aistudio.baidu.com/aistudio/datasetdetail/40690](https://aistudio.baidu.com/aistudio/datasetdetail/40690).

The second dataset\({}^{2}\) was released in 2019 by Ming Liu et al. [28] of Beijing University of Posts and Telecommunications. The dataset includes one week's CDR data of users in a Chinese city. The authors performed feature extraction on the original CDR data and obtained a total of 39-dimensional features. According to the definition in Section 3, the imbalance rate of this dataset is \(IR=8074/99861=0.0809\). Detailed descriptions are summarized in Table II.

Footnote 2: [https://github.com/khznxn/TF-Dataset](https://github.com/khznxn/TF-Dataset).

#### 5.1.2 Baselines

To verify the learning ability of GAT-COBO on imbalanced graph data, we compare it with various GNN baselines in a semi-supervised learning setting. We choose GCN [9], GAT [12], and GraphSAGE [11] as general GNN models, and FdGars [35], Player2Vec [63], GraphConsis [34], GEM [41] and CARE-GNN [5] as state-of-the-art GNN-based fraud detectors.

* GCN: a GNN that aggregates neighbor information by spectral graph convolution
* GAT: a GNN that uses an attention mechanism to aggregate neighbor node information
* GraphSAGE: an inductive GNN with a fixed number of sampled neighbors
* FdGars: a GCN-based opinion fraud detection system
* Player2Vec: a GNN for modeling heterogeneous relationships of homogeneous nodes using heterogeneous information networks and meta-paths
* GraphConsis: a heterogeneous graph neural network for graph inconsistency
* GEM: a heterogeneous-graph fraud detection GNN based on the attention mechanism
* CARE-GNN: a heterogeneous GNN with multi-modal aggregation based on reinforcement-learning neighbor selection
* GAT-COBO\({}_{uni}\): our proposed method with the cost matrix calculated in the **uniform** manner
* GAT-COBO\({}_{inv}\): our proposed method with the cost matrix calculated in the **inverse** manner
* GAT-COBO\({}_{log}\): our proposed method with the cost matrix calculated in the **log1p** manner

#### 5.1.3 Experiment implementation

In the experiments, we randomly select training samples and keep the ratio of positive and negative samples in the training set the same as in the whole dataset. For our proposed GAT-COBO method, we use the Adam optimizer for parameter optimization, with the following configuration. For the Sichuan dataset, we set the hidden embedding size (128), learning rate (0.002), dropout (0), adj_dropout (0.4), model layers (2), attention loss weight (0.5), attention weight (0.1), and feature weight (0.1). For the BUPT dataset, we set the hidden embedding size (64), learning rate (0.01), dropout (0.2), adj_dropout (0.1), model layers (2), attention loss weight (0.01), attention weight (0.5), and feature weight (0.6). For GCN, GAT, and CARE-GNN, we use the open-source implementation of the Deep Graph Library (DGL)\({}^{3}\). For GraphSAGE, FdGars, Player2Vec, GraphConsis, and GEM, we use the open-source implementation of DGFraud-TF2\({}^{4}\).
We implement the proposed method in PyTorch; all models run on Python 3.7.10 with one GeForce RTX 3090 GPU, 64 GB RAM, and a 16-core Intel(R) Xeon(R) Gold 5218 CPU @ 2.30 GHz Linux server.

Footnote 3: [https://github.com/dmlc/dgl](https://github.com/dmlc/dgl)

Footnote 4: [https://github.com/safe-graph/DGFraud-TF2](https://github.com/safe-graph/DGFraud-TF2)

#### 5.1.4 Evaluation metrics

For imbalanced problems, the choice of evaluation metric is critical, because it needs to reasonably assess the classification results of all classes, especially the minority, and reflect them in the scores [39]. To be unbiased, we adopt four widely used metrics to measure the performance of all comparison methods: Macro AUC, Macro F1, Macro recall, and G-mean.

**Recall** is very important for the imbalance problem, since it accurately measures the proportion of the minority class that is detected. It is defined as follows:

\[recall=\frac{TP}{TP+FN}\]

where \(TP\), \(FN\), \(TN\), and \(FP\) represent the numbers of true positive, false negative, true negative, and false positive samples in the confusion matrix, respectively. Macro recall is the arithmetic mean of the per-class recalls, which treats all classes equally, regardless of the importance of different classes.

**F1** is another comprehensive metric for evaluating imbalanced problems, defined as follows:

\[F1=\frac{2}{\frac{1}{\text{precision}}+\frac{1}{\text{recall}}}=2\,\frac{\text{precision}\times\text{recall}}{\text{precision}+\text{recall}}\]

and Macro F1 is the arithmetic mean of the per-class F1 scores.

**AUC** is the area under the ROC curve, defined as:

\[\text{AUC}=\frac{\sum_{u\in\mathcal{U}^{+}}\text{rank}_{u}-\frac{|\mathcal{U}^{+}|\times(|\mathcal{U}^{+}|+1)}{2}}{|\mathcal{U}^{+}|\times|\mathcal{U}^{-}|}\]

Here, \(\mathcal{U}^{+}\) and \(\mathcal{U}^{-}\) indicate the minority- and majority-class sets in the test set, respectively, and \(rank_{u}\) indicates the rank of node \(u\) by prediction score.

**G-Mean** is the geometric mean of the True Positive Rate (TPR) and the True Negative Rate (TNR):

\[\text{G-Mean}=\sqrt{\text{TPR}\cdot\text{TNR}}=\sqrt{\frac{\text{TP}}{\text{TP}+\text{FN}}\cdot\frac{\text{TN}}{\text{TN}+\text{FP}}}\]

For all of the above metrics, the higher the score, the better the performance of the model on imbalanced problems.

### _Performance comparison (RQ1)_

We evaluate the performance of all comparison methods on the telecom fraud detection datasets described above. The best test results of GAT-COBO and the baseline methods on the Macro AUC, Macro F1, Macro recall and G-mean metrics are reported in Table III. The compared methods can be divided into three groups: general-purpose GNNs, GNN-based fraud detectors, and our methods. All comparison methods use the same training/validation/test split, with ratios of 20%, 20%, and 60%, respectively. We observe that GAT-COBO outperforms the other baselines under all metrics on both datasets.
Moreover, the performance of the _inverse_ and _log1p_ variants of the proposed method is significantly better than that of the _uniform_ variant, which indicates that taking the imbalanced class cost into account is crucial to the performance of GAT-COBO on imbalanced datasets.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
**Dataset name** & **Nodes (fraud ratio)** & **Edges** & **Class distribution** & **Feature dimension** & **IR** \\
\hline
**Sichuan** & 6106 (32.1\%) & 838528 & Benign: 4144; Fraud: 1962 & 55 & 0.4735 \\
\hline
**BUPT** & 116,383 (7.3\%) & 350751 & Benign: 99861; Fraud: 8848; Courier: 8074 & 39 & 0.0809 \\
\hline
\end{tabular}
\end{table}
TABLE II: Dataset and graph statistics.

In addition, we make the following observations. First of all, GCN, GAT, and GraphSAGE are three classic general GNNs and show moderate performance on both datasets. However, for such imbalanced fraud detection datasets, general GNNs cannot take the minority class into account, because they treat all classes of nodes equally, which makes their performance lower than that of our proposed GAT-COBO. This difference in performance shows that a GNN can perform better on imbalanced problems if it is designed to be sensitive to the costs of different classes. In contrast, the performance of the GNN-based fraud detectors varies greatly. CARE-GNN outperforms most similar methods and also outperforms the general GNNs on the two original datasets. From the G-Mean score, it can be observed that CARE-GNN is only lower than our method, which shows that it has strong adaptability to imbalanced datasets. This is mainly due to the selection of similar neighbors by the reinforcement learning module in the model, as well as mini-batch training and undersampling techniques. The other GNN-based fraud detectors perform poorly, for two main reasons: first, these models do not take the class imbalance in the data into account; second, they are designed for multi-relation graphs, whereas both datasets in this paper are single-relation graphs. Furthermore, by comparing the performance of all methods on the two fraud detection datasets, we find that most methods perform much better on Sichuan than on BUPT. An important reason is that the IR of Sichuan is larger than that of BUPT (\(0.4735>0.0809\)), i.e., the class imbalance of BUPT is more severe than that of Sichuan. Moreover, the more unbalanced the dataset, the greater the performance gap of the same model, which shows the significant impact of class imbalance on model performance. However, the performance of our method on these two datasets does not differ much and is at a high level on both, which further illustrates the important role played by the cost-sensitive design in our proposed method.

### _Influence of imbalance ratio (RQ2)_

In this subsection, we test the performance of the comparison algorithms under different imbalance rates to evaluate their robustness. In the experiment, we randomly subsample the classes of the two telecom fraud detection datasets according to the IR (ranging from 0.1 to 1) to form new datasets.
Then we select three well-performing baselines from Table III, namely GCN, GAT, and CARE-GNN, together with the three variants of GAT-COBO, and test them on the new datasets. Their scores on G-mean, AUC, and macro recall are recorded, and the results are shown in Figure 3. From the figure, we draw the following observations. Our proposed method achieves the best results under almost all IRs, regardless of the metric. In particular, the _log1p_ variant of GAT-COBO performs excellently in most situations. This fully demonstrates the effectiveness of the cost-sensitive boosting method in dealing with the graph imbalance problem. In addition, we also notice that the other methods decay faster as the IR decreases, while our proposed method decays relatively slowly, which further illustrates the role of cost sensitivity when the classes are extremely unbalanced. CARE-GNN is an effective GNN-based fraud detection model against fraudster camouflage, achieving state-of-the-art results on social datasets. However, we observe that its performance on the two telecom fraud datasets is quite different. In particular, on the BUPT dataset, the performance of CARE-GNN gradually decreases as \(IR\) approaches 1. This may be because the reinforcement-learning-based sampling mechanism fails to sample effective neighbors when the nodes have more and more neighbors. This suggests a limitation of sampling methods in solving the graph imbalance problem, and it also teaches us that GNN models need to be specially designed for the graph imbalance problem. As classic GNNs, GCN and GAT have stable performance and fast computation, and achieve good results in most scenarios, but they lack the ability to deal with imbalanced data.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multirow{2}{*}{**Method**} & \multicolumn{4}{c|}{**Sichuan**} & \multicolumn{4}{c|}{**BUPT**} \\
\cline{3-10}
 & & Macro AUC & Macro recall & Macro F1 & G-mean & Macro AUC & Macro recall & Macro F1 & G-mean \\
\hline
\multirow{3}{*}{General} & **GCN** & 0.9263 & 0.8597 & 0.8755 & 0.8530 & 0.8932 & 0.5706 & 0.6265 & 0.4380 \\
 & **GAT** & 0.9243 & 0.8585 & 0.8725 & 0.8529 & 0.9102 & 0.6152 & 0.6803 & 0.5267 \\
 & **GraphSAGE** & 0.9159 & 0.8564 & 0.8631 & 0.8447 & 0.8928 & 0.6715 & 0.6918 & 0.5823 \\
\hline
\multirow{5}{*}{GNN-based} & **FdGars** & 0.7887 & 0.7082 & 0.6499 & 0.6914 & 0.6462 & 0.4357 & 0.4027 & 0.3855 \\
 & **Player2Vec** & 0.7467 & 0.5618 & 0.4097 & 0.4181 & 0.5227 & 0.3206 & 0.3239 & 0.2502 \\
 & **GraphConsis** & 0.7985 & 0.7288 & 0.7331 & 0.7187 & 0.6211 & 0.3311 & 0.3074 & 0.1602 \\
 & **GEM** & 0.8619 & 0.8209 & 0.8294 & 0.8153 & 0.6788 & 0.3344 & 0.3100 & 0.0117 \\
 & **CARE-GNN** & 0.9384 & 0.8717 & 0.8659 & 0.8711 & 0.9065 & 0.7642 & 0.5345 & 0.7538 \\
\hline
\multirow{3}{*}{Ours} & **GAT-COBO\({}_{uni}\)** & 0.9292 & 0.8880 & 0.9014 & 0.8844 & 0.9041 & 0.7015 & 0.7428 & 0.6572 \\
 & **GAT-COBO\({}_{inv}\)** & 0.9385 & **0.8894** & **0.9022** & **0.8860** & 0.8919 & 0.7590 & **0.7564** & 0.7421 \\
 & **GAT-COBO\({}_{log}\)** & **0.9391** & 0.8867 & 0.9003 & 0.8829 & **0.9109** & **0.7823** & 0.7557 & **0.7658** \\
\hline
\end{tabular}
\end{table}
TABLE III: Performance comparison on two real-world telecom fraud datasets.

Fig. 3: Performance comparison of baseline methods with different IR. The top row is on the Sichuan dataset, and the bottom is on BUPT.
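For reference, the four metrics of Sec. 5.1.4 used throughout these comparisons can be computed for the binary case as in the following sketch, assuming scikit-learn is available; G-mean is not built in, so it is obtained from the per-class recalls (which equal TNR and TPR).

```python
import numpy as np
from sklearn.metrics import f1_score, recall_score, roc_auc_score

def imbalance_metrics(y_true, y_prob, threshold=0.5):
    """Macro AUC / recall / F1 and G-mean for binary labels y_true and scores y_prob."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    per_class_recall = recall_score(y_true, y_pred, average=None)  # [TNR, TPR]
    return {
        "Macro AUC": roc_auc_score(y_true, y_prob),
        "Macro recall": recall_score(y_true, y_pred, average="macro"),
        "Macro F1": f1_score(y_true, y_pred, average="macro"),
        "G-mean": float(np.sqrt(np.prod(per_class_recall))),  # sqrt(TPR * TNR)
    }
```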
### _Avoidance of over-smoothing effects (RQ3)_

It is well known that, as more graph operation layers are stacked, classic GNN models such as GCN and GAT suffer from over-smoothing effects [64]. Different from the residual-connection solution, our method circumvents the over-smoothing effect through an ensemble learning scheme while improving model performance. To verify the effect of the proposed method on the GNN over-smoothing problem, we compare its performance with that of GCN and GAT on G-mean, AUC, and recall. In the experiment, we let the number of GNN layers vary from 1 to 21; the results are shown in Figure 4. From the figure we can see that the performance of GCN and GAT is severely weakened by the over-smoothing effect: as the number of network layers increases, their performance on all three metrics drops significantly. The over-smoothing effect of GCN is more severe than that of GAT, because the convolution operation treats all neighbors equally, while GAT partially mitigates over-smoothing by weighting the neighbors differently. In contrast, the three variants of GAT-COBO remain stable on all three metrics, and the model performance is largely unaffected by the number of network layers. This shows that our proposed method can effectively avoid the over-smoothing effect of GNNs. In particular, the performance of the _inverse_ and _log1p_ variants increases slightly with the number of network layers, because the ensemble integrates multiple different weak learners, which allows the model to retain the initial information while also learning deep information.

### _Hyperparameter sensitivity (RQ4)_

In the hyperparameter sensitivity experiment, we test the proposed model under the _log1p_ variant on the BUPT dataset. The three important hyperparameters are the training-set size, the embedding size, and the learning rate. For each hyperparameter we record the model's scores on G-mean, Macro AUC, and Macro recall; the experimental results are shown in Figure 5. From Figure 5(a), we observe that the model basically maintains stable performance as the training rate changes, with the best results at a training rate of around 0.3; we therefore conclude that GAT-COBO is robust to the training rate. Figure 5(b) shows the effect of different embedding sizes: the model has comparable performance for embedding sizes of 32, 64 and 128. Figure 5(c) illustrates the effect of different learning rates: when the learning rate is around 0.001 and 0.01, the model shows two small performance peaks. Therefore, choosing an appropriate learning rate has a certain impact on model optimization.

### _Computational complexity (RQ5)_

Computational complexity is important for neural network models, because too high a computational overhead can hinder model deployment in real-world scenarios, especially those that are sensitive to timeliness. Figure 6 shows the training time per epoch of GAT-COBO and the comparison baselines over five runs on the two telecom fraud detection datasets (note that the vertical axis is in logarithmic coordinates). It can be observed that the computational complexity of GCN, GAT, and GAT-COBO is of the same order of magnitude, around 10 ms, while
that of the remaining methods is around 1000 ms. The computational complexity of GAT-COBO is only slightly higher than that of GCN and GAT, and much lower than that of the other comparison baselines, which indicates that GAT-COBO requires little computational overhead. This may seem counterintuitive, since one might expect the Boosting algorithm to add substantial overhead to GAT-COBO. The reason it does not is that the computational overhead of the Boosting algorithm depends mainly on the base classifier, and the base classifier of GAT-COBO is a simplified GAT with low computational overhead, which contributes significantly to the overall reduction in the computational complexity of the model. Most of the baselines with higher computational overhead involve node-neighbor sampling operations, such as GraphSAGE's neighbor sampling and CARE-GNN's reinforcement-learning-based neighbor sampling. While these operations can reduce unnecessary information aggregation, they significantly increase the computational overhead of the models. In addition, we also observe that the computational overhead of almost all models is larger on BUPT than on Sichuan, because the number of nodes in BUPT is about 20 times that in Sichuan.

Fig. 4: Performance comparison of baseline methods with different GNN layers on the BUPT dataset.

Fig. 5: Hyperparameter sensitivity of the proposed GAT-COBO.

Fig. 6: Per-epoch training time of GAT-COBO and baselines under 5 runs on two telecom fraud detection datasets.

## 6 Conclusion

The graph imbalance problem can significantly affect the performance of telecom fraud detectors, but it has rarely been noticed by previous work. In this paper, we propose a novel cost-sensitive graph neural network based on the attention mechanism and ensemble learning to solve this problem. Concretely, we first learn node embeddings using graph attention networks as base classifiers. The learned embeddings are then fed into the corresponding cost-sensitive learners for further training, and new node weights are calculated. These weights are then fed into the next GNN weak classifier as a constraint on its loss. Integrating the embeddings learned by such multiple base classifiers yields the final cost-sensitive classification result. We conduct extensive experiments on two real-world telecom fraud detection datasets to evaluate the proposed method. The experimental results demonstrate that the proposed GAT-COBO model outperforms state-of-the-art baseline methods and can effectively handle graph data with imbalanced class distributions. In addition, the experiments also show that the proposed model can overcome the over-smoothing problem that is widespread in GNNs. Given that the graph imbalance problem is widely present in real-world tasks, GAT-COBO can be applied not only in telecom fraud detection, but also in scenarios such as social network bot user detection, financial fraud detection, and malicious machine detection.

## Acknowledgements

We would like to thank the anonymous reviewers for their valuable comments.
2302.05419
Gauge-equivariant neural networks as preconditioners in lattice QCD
We demonstrate that a state-of-the-art multi-grid preconditioner can be learned efficiently by gauge-equivariant neural networks. We show that the models require minimal re-training on different gauge configurations of the same gauge ensemble and to a large extent remain efficient under modest modifications of ensemble parameters. We also demonstrate that important paradigms such as communication avoidance are straightforward to implement in this framework.
Christoph Lehner, Tilo Wettig
2023-02-10T18:34:54Z
http://arxiv.org/abs/2302.05419v1
# Gauge-equivariant neural networks as preconditioners in lattice QCD ###### Abstract We demonstrate that a state-of-the art multi-grid preconditioner can be learned efficiently by gauge-equivariant neural networks. We show that the models require minimal re-training on different gauge configurations of the same gauge ensemble and to a large extent remain efficient under modest modifications of ensemble parameters. We also demonstrate that important paradigms such as communication avoidance are straightforward to implement in this framework. ## I Introduction Our current understanding of nature at the most fundamental level is to a large extent based on quantum field theories. In particle physics, Quantum Chromodynamics (QCD) explains, for example, how the proton is made up of smaller constituents, quarks and gluons. To describe current and future experiments, and to search for physics beyond the Standard Model, we need to be able to solve QCD to high precision. Lattice QCD constitutes a systematically improvable tool to solve QCD in the nonperturbative regime by numerically simulating the theory on a finite space-time lattice. It has evolved over more than four decades and is now of direct phenomenological relevance, see [1] and references therein. It is also very compute-intensive and employs the largest supercomputers worldwide [2]. Therefore much research is focused on improving the algorithms that dominate the run time of these simulations. The most time-consuming element, both in the generation of gauge-field configurations and in the computation of physical observables, is typically the solution of the Dirac equation in the presence of a given gauge field. For physical values of the light quark masses and large lattice volumes, the condition number of the matrix representing the Dirac operator becomes very large, and consequently very sophisticated methods are required to solve the Dirac equation in a feasible time frame. The current state of the art is to use a suitable preconditioner inside a Krylov subspace solver. The construction of the preconditioner is a complicated problem whose solution requires deep knowledge of the underlying physics. The aim of this paper is to reformulate the problem in the language of gauge-equivariant neural networks and to show that such networks can learn the general paradigms of state-of-the-art preconditioners and efficiently reduce the iteration count of the outer solver. We also provide a flexible implementation interface in the Grid Python Toolkit (GPT) [3] that allows for experimentation and further studies. We briefly relate this paper to previous work. We will concentrate on multi-grid preconditioners [4; 5; 6; 7; 8; 9; 10] and refer to [11] for an introduction. The idea of learning the elements of multi-grid preconditioners with neural networks has been pursued in a number of earlier publications, see, e.g., [12; 13; 14; 15; 16; 17; 18]. These works differ in the details of their approaches, e.g., the choice of the loss function, the network architecture, and the kind of learning (supervised or unsupervised). The main difference to our work is that we have to address the gauge degrees of freedom. More precisely, our approach must be gauge-equivariant, i.e., the map implemented by the neural network must commute with local gauge transformations [19; 20]. A number of papers have introduced gauge-equivariant neural networks in the context of lattice quantum field theory: Refs. 
[21; 22; 23] mainly addressed the question of gauge-field sampling in several different theories, while Ref. [24] showed how any gauge-covariant function on the lattice can be approximated by neural networks. Our work builds on and extends these papers. The structure of this paper is as follows. In Sec. II, we introduce gauge-equivariant layers as the building blocks of the models we study in this work. In Sec. III, we discuss the problem of solving the preconditioned Dirac equation with the Wilson-clover Dirac operator. In Sec. IV, we construct preconditioner models that address the high-mode component of the Dirac operator. In Sec. V, we discuss a model to address the low-mode component of the Dirac operator. In Sec. VI, we combine the specialized models to a multi-grid model that addresses both the low-mode and high-mode components. We conclude in Sec. VII, where we also give an outlook to future work. ## II Gauge-equivariant layers In this section, we define the building blocks of the gauge-equivariant neural networks considered in this work and explain their properties in detail. We begin with a discussion of the concepts of parallel transport and gauge equivariance. ### Parallel transport and gauge equivariance We consider a discrete \(d\)-dimensional space-time lattice with \(L_{\mu}\) sites in dimension \(\mu\in\{1,\ldots,d\}\) and \(d\in\mathbb{N}\). The canonical unit vector in dimension \(\mu\) is denoted by \(\hat{\mu}\). The set of all lattice sites shall be \(S=\{(x_{1},\ldots,x_{d})\,|\,x_{\mu}\in\{1,\ldots,L_{\mu}\}\}\). Consider a field \(\varphi:S\to V_{I},x\mapsto\varphi(x)\) with internal vector space \(V_{I}\). The internal vector space shall be a product of a gauge vector space \(V_{G}=\mathbb{C}^{N}\) and a non-gauge vector space \(V_{\bar{G}}=\mathbb{C}^{\bar{N}}\) with \(N,\bar{N}\in\mathbb{N}\), i.e., \[V_{I}=V_{G}\otimes V_{\bar{G}}\,. \tag{1}\] We also consider gauge fields \(U_{\mu}:S\to\mathrm{SU}(N),x\mapsto U_{\mu}(x)\) with \(\mathrm{SU}(N)\) acting on \(V_{G}\). The set of fields \(\varphi\) shall be \(\mathcal{F}_{\varphi}\), and the set of fields \(U_{\mu}\) shall be \(\mathcal{F}_{U}\). We define the parallel-transport operator \(T_{p}:\mathcal{F}_{\varphi}\to\mathcal{F}_{\varphi},\varphi\mapsto T_{p}\varphi\) as \[T_{p}=H_{p_{n_{p}}}\cdots H_{p_{2}}H_{p_{1}} \tag{2}\] for a path \(p\) defined as the sequence \(p_{1},\ldots,p_{n_{p}}\) with \(n_{p}\in\mathbb{N}\) and \(p_{i}\in\{\pm 1,\pm 2,\ldots,\pm d\}\). The operator \(H_{p_{i}}:\mathcal{F}_{\varphi}\to\mathcal{F}_{\varphi},\varphi\mapsto H_{p_{ i}}\varphi\) acts on a field according to1 Footnote 1: Note that the operator \(H_{p_{i}}\) does not act on the numerical value \(\varphi(x)\). Rather, it acts on the field \(\varphi\), resulting in the new field \(H_{p_{i}}\varphi\), which is then evaluated at \(x\). Note also that in Eq. (3), the information is transported from \(x-\hat{p}_{i}\) to \(x\). \[H_{p_{i}}\varphi(x)=U_{p_{i}}^{\dagger}(x-\hat{p}_{i})\varphi(x-\hat{p}_{i}) \tag{3}\] so as to transport information by a single hop in direction \(\hat{p}_{i}\). Here, we introduced the convention \(\hat{\nu}=-\hat{\mu}\) for \(\nu=-\mu\), and we identify \(U_{-\mu}(x)=U_{\mu}^{\dagger}(x-\hat{\mu})\). Addition and subtraction of coordinate tuples are defined component-wise. Note that a single path \(p\) defines the transport for any site \(x\in S\) to \[x^{\prime}=x+\sum_{i=1}^{n_{p}}\hat{p}_{i} \tag{4}\] and may be illustrated using a representative starting point. If \(x^{\prime}=x\), the path is closed.
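To make Eqs. (2) and (3) concrete, the following is a minimal NumPy sketch of the hop operator \(H_{p_i}\) and the transport \(T_p\) on a 2D lattice with periodic boundary conditions. This is our own illustration, independent of the GPT library used in this work; all names and the toy sizes are ours, and the QR-generated links are merely unitary, not exactly SU(\(N\)).

```
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 3   # 4x4 lattice, SU(3)-like gauge group (toy sizes)

# random unitary links U[mu][x] via stacked QR (needs a recent NumPy);
# not exactly SU(N), but sufficient for illustration
A = rng.normal(size=(2, L, L, N, N)) + 1j * rng.normal(size=(2, L, L, N, N))
U, _ = np.linalg.qr(A)

phi = rng.normal(size=(L, L, N)) + 1j * rng.normal(size=(L, L, N))

def hop(phi, U, p):
    """H_p phi for p in {+1,+2,-1,-2}; Eq. (3), using U_{-mu}(x) = U_mu^dag(x - mu)."""
    mu = abs(p) - 1
    if p > 0:   # H_{+mu} phi(x) = U_mu^dag(x - mu) phi(x - mu)
        Ush = np.roll(U[mu], 1, axis=mu)
        psh = np.roll(phi, 1, axis=mu)
        return np.einsum("xyba,xyb->xya", Ush.conj(), psh)
    else:       # H_{-mu} phi(x) = U_mu(x) phi(x + mu)
        return np.einsum("xyab,xyb->xya", U[mu], np.roll(phi, -1, axis=mu))

def transport(phi, U, path):
    """T_p = H_{p_n} ... H_{p_1}, Eq. (2): hops applied in order p_1, p_2, ..."""
    for p in path:
        phi = hop(phi, U, p)
    return phi

# the closed example path of Eq. (7): T_p = H_{-1} H_{-2} H_{-1} H_{2} H_{2}
result = transport(phi, U, [2, 2, -1, -2, -1])
```

Because each hop multiplies by a link (or its adjoint) at the correct site, `transport` commutes with local gauge transformations in the sense of Eq. (10).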
Note that the trivial path \(0\) with \(n_{0}=0\) and \(T_{0}=\mathbb{1}\) is allowed as well. A field \(\varphi\in\mathcal{F}_{\varphi}\) acquires a phase \(\theta_{\mu}\) when translated by \(L_{\mu}\) in direction \(\hat{\mu}\), i.e., \[\varphi(x+L_{\mu}\hat{\mu})=e^{i\theta_{\mu}}\varphi(x) \tag{5}\] for any coordinate tuple \(x\). A gauge field \(U_{\mu}\in\mathcal{F}_{U}\) is periodic in all dimensions, i.e., \[U_{\mu}(x+L_{\nu}\hat{\nu})=U_{\mu}(x) \tag{6}\] with \(\nu\in\{1,\ldots,d\}\). These equations define \(\varphi(x)\) and \(U_{\mu}(x)\) for all sites \(x\) outside of \(S\). In Fig. 1, we illustrate the transport from the red starting point along a path \(p\) to the black site. This path corresponds to \[T_{p}=H_{-1}H_{-2}H_{-1}H_{2}H_{2}\,, \tag{7}\] where \(\hat{1}\) and \(\hat{2}\) is the horizontal and vertical unit vector, respectively, in Fig. 1. A gauge transformation is parametrized by a field \(\Omega:S\to\mathrm{SU}(N),x\mapsto\Omega(x)\) that acts on all \(\varphi\in\mathcal{F}_{\varphi}\) and \(U_{\mu}\in\mathcal{F}_{U}\) by \[\varphi(x) \to\Omega(x)\varphi(x)\,, \tag{8}\] \[U_{\mu}(x) \to\Omega(x)U_{\mu}(x)\Omega^{\dagger}(x+\hat{\mu})\,. \tag{9}\] It is straightforward to show that under such a gauge transformation we have \[T_{p}\varphi(x)\to\Omega(x)T_{p}\varphi(x) \tag{10}\] for any path \(p\), i.e., the parallel-transport operator \(T_{p}\) commutes with gauge transformations, and thus it is a gauge-equivariant operator. For a comprehensive discussion of gauge equivariance we refer to Ref. [20]. ### Parallel-transport convolutions The models discussed in this work will be composed of individual layers that map \(n\) input features \(\varphi_{1},\ldots,\varphi_{n}\in\mathcal{F}_{\varphi}\) to \(m\) output features \(\psi_{1},\ldots,\psi_{m}\in\mathcal{F}_{\varphi}\). We consider a parallel-transport convolution (PTC) layer defined by2 Footnote 2: Equation (11) is a convolution with kernel \(W\) and input \(\varphi\), whose argument is shifted by \(T_{p}\). \[\psi_{a}(x)\stackrel{{\mathrm{PTC}}}{{=}}\sum_{b=1}^{n}\sum_{p\in P }W_{a}^{bp}T_{p}\varphi_{b}(x) \tag{11}\] for \(a=1,\ldots,m\), with a set of paths \(P\) and an endomorphism \(W_{a}^{bp}\in\mathrm{End}(V_{\tilde{G}})\). This extends the definition of Ref. [23] from nearest-neighbor hops to a sum over arbitrary paths. For closed paths \(p\), we recover the case discussed in Ref. [24]. Note that in lattice QCD \(W_{a}^{bp}\) is a \(4\times 4\) spin matrix. We also consider a local parallel-transport convolution (LPTC) layer defined by \[\psi_{a}(x)\stackrel{{\mathrm{LPTC}}}{{=}}\sum_{b=1}^{n}\sum_{p\in P }W_{a}^{bp}(x)T_{p}\varphi_{b}(x) \tag{12}\] Figure 1: The path \(p\) defining a parallel-transport operator \(T_{p}\) can be visualized as a sequence of hops from a starting point (red) to an end point (black). with \(W_{a}^{bp}:S\rightarrow\mathrm{End}(V_{\tilde{G}}),x\mapsto W_{a}^{bp}(x)\). Such a layer is also gauge equivariant and may be able to better address localized features. In the following we refer to the elements of \(W\) as layer weights. Since we intend to learn a _linear_ preconditioner in this work, we do not apply an activation function in these layers. The expressivity of a deep network composed of such layers is therefore equivalent to a single layer with a larger set \(P\). Nevertheless, it may be computationally more efficient for a given problem to compose multiple layers with smaller sets \(P\). In Fig. 
2, we provide a graphical representation of a (L)PTC layer with two input features and one output feature and \(P=\{p_{1},p_{2}\}\) with \[T_{p_{1}}=H_{-1}H_{-2}H_{-1}\,,\qquad T_{p_{2}}=H_{-2}H_{1}\,. \tag{13}\] ### Restriction and prolongation layers In order to let information propagate efficiently over long distances in terms of sites \(x\in S\), we make use of the multi-grid paradigm [4; 5]. To this end, we consider a coarse grid with lattice sites \(\tilde{S}\) and a coarse field \(\tilde{\varphi}:\tilde{S}\rightarrow\tilde{V}_{I},y\mapsto\tilde{\varphi}(y)\) with coarse internal vector space \(\tilde{V}_{I}\). The set of such fields is denoted by \(\mathcal{F}_{\tilde{\varphi}}\). Note that there are no gauge degrees of freedom in \(\tilde{V}_{I}\). We define a restriction layer mapping a \(\varphi\in\mathcal{F}_{\varphi}\) to a \(\tilde{\psi}\in\mathcal{F}_{\tilde{\varphi}}\) by \[\tilde{\psi}(y)\stackrel{{\mathrm{RL}}}{{=}}\sum_{x\in B(y)}W(y,x )\varphi(x) \tag{14}\] with \(W:\tilde{S}\times S\rightarrow\mathrm{Hom}(V_{I},\tilde{V}_{I})\) and block map \(B:\tilde{S}\rightarrow\mathcal{P}(S)\), where \(\mathcal{P}\) denotes the power set. We also define a corresponding prolongation layer mapping a \(\tilde{\varphi}\in\mathcal{F}_{\tilde{\varphi}}\) to a \(\psi\in\mathcal{F}_{\varphi}\) by \[\psi(x)\stackrel{{\mathrm{PL}}}{{=}}W(y,x)^{\dagger}\tilde{ \varphi}(y) \tag{15}\] for \(x\in B(y)\). In practice, we choose \(B\) corresponding to a blocking in all dimensions. The linear maps \(W\) satisfy \[\sum_{x\in B(y)}W(y,x)W(y,x)^{\dagger}=\mathbb{1}_{\tilde{V}_{I}}\,, \tag{16}\] where \(\mathbb{1}_{\tilde{V}_{I}}\) is the identity in \(\tilde{V}_{I}\). These layers are straightforward to extend to the case of multiple input and output features. The linear maps \(W\) can be considered layer weights and are constructed from a list of vectors that are blockwise orthonormal, see Sec. V for details. The restriction and prolongation layers are gauge equivariant if \[W(y,x)\to W(y,x)\Omega(x)^{\dagger} \tag{17}\] under a gauge transformation. Note that since \(\tilde{V}_{I}\) does not have gauge degrees of freedom there is no \(\Omega(y)\) on the coarse grid. We provide a graphical representation of the restriction and prolongation layers in Fig. 3. ### Parallel and identity layers In this work, we consider models that act on a given input feature with multiple layers in parallel. Consider applying a layer \(L_{i}\) to input features \(\varphi_{1},\dots,\varphi_{n}\) mapping to output features \(\psi_{i1},\dots,\psi_{im_{i}}\). For several layers \(L_{1},\dots,L_{\ell}\), we concatenate the output features \(\psi_{11},\dots,\psi_{1m_{1}},\dots,\psi_{\ell 1},\dots,\psi_{\ell m_{\ell}}\). The combination of layers \(L_{1},\dots,L_{\ell}\) being applied in parallel can then be considered to be a single layer that maps features \(\varphi_{1},\dots,\varphi_{n}\) to features \(\psi_{11},\dots,\psi_{1m_{1}},\dots,\psi_{\ell 1},\dots,\psi_{\ell m_{\ell}}\). We also introduce an identity layer that maps the input features without modification to output features (which implies \(m=n\)). Such a layer is represented graphically by a single dashed arrow pointing from the input features to the output features. We provide a graphical representation for the case of \(n=1\), \(\ell=2\), and \(m_{1}=m_{2}=1\) in Fig. 4. Figure 3: Graphical representation of the restriction layer (left) and prolongation layer (right) for a single feature. 
The layers are represented by the gray square frustums, while the input and output features are represented by the planes. Figure 2: Graphical representation of a (L)PTC layer with two input features and one output feature. The planes represent the features. The layer is represented by the paths drawn and the dashed arrow. ### Communication avoidance In practice, the performance of a given model in terms of execution time is crucial. For problem sizes of interest to the lattice QCD community, a single problem will be distributed over multiple compute nodes that are connected by a communication network. It is not uncommon that the time needed to exchange information between nodes exceeds the time each node spends performing floating-point operations. Therefore it is an important paradigm in lattice QCD to investigate approaches that avoid communication between nodes even if it possibly increases the computational effort within a given node [25; 26; 27; 28]. In this work, we also investigate layers which do not communicate between different sub-volumes that would typically be mapped to multiple nodes in an MPI job. We perform such investigations by setting the gauge links \(U_{\mu}\) that connect one such sub-volume to another to zero. For such a modified model, we can then avoid the communication step between nodes altogether. ## III The Wilson Dirac operator The main objective of this work is to precondition the Dirac equation \[Du=b \tag{18}\] with Dirac operator \(D:\mathcal{F}_{\varphi}\to\mathcal{F}_{\varphi}\), source \(b\in\mathcal{F}_{\varphi}\), and solution \(u\in\mathcal{F}_{\varphi}\). It is useful to interpret Eq. (18) as a matrix equation with \(u,b\in\mathbb{C}^{k}\) and invertible complex \(k\times k\) matrix \(D\) with \[k=L_{1}\cdots L_{d}N\bar{N}\,. \tag{19}\] We train a model to play the role of an invertible complex \(k\times k\) preconditioner matrix \(M\) in \[(DM)M^{-1}u=b\,, \tag{20}\] where we attempt to improve the condition number of \(DM\) compared to \(D\). Ideally, \(DM\) is close to the identity matrix up to a trivial scaling factor. The Dirac matrix transforms as \[D\to\Omega D\Omega^{\dagger} \tag{21}\] under a gauge transformation with block-diagonal matrix \(\Omega=\oplus_{x\in\mathcal{S}}\Omega(x)\otimes\mathbb{1}_{V_{G}}\), which motivates the use of gauge-equivariant layers to construct \(M\). We first consider the Wilson Dirac operator [29] \[D_{\mathrm{W}} =\frac{1}{2}\sum_{\mu=1}^{4}\gamma_{\mu}(H_{-\mu}-H_{+\mu})+m\] \[-\frac{1}{2}\sum_{\mu=1}^{4}(H_{-\mu}+H_{+\mu}-2) \tag{22}\] with mass \(m\in\mathbb{R}\) and Euclidean gamma matrices \(\gamma_{1},\ldots,\gamma_{4}\) satisfying the anti-commutation relation \(\gamma_{\mu}\gamma_{\nu}+\gamma_{\nu}\gamma_{\mu}=2\delta_{\mu\nu}\) with Kronecker delta \(\delta_{\mu\nu}\). This operator can be mapped to a single PTC layer with a zero-hop path and eight one-hop paths. We add a clover term that includes closed paths consisting of four hops using \[Q_{\mu\nu} =H_{-\mu}H_{-\nu}H_{+\mu}H_{+\nu}+H_{-\nu}H_{+\mu}H_{+\nu}H_{-\mu}\] \[\quad+H_{+\nu}H_{-\mu}H_{-\nu}H_{+\mu}+H_{+\mu}H_{+\nu}H_{-\mu}H_{ -\nu} \tag{23}\] to obtain the Wilson-clover Dirac operator [30] \[D_{\mathrm{WC}}=D_{\mathrm{W}}-\frac{c_{\mathrm{sw}}}{4}\sum_{\mu,\nu=1}^{4} \sigma_{\mu\nu}F_{\mu\nu} \tag{24}\] with \(c_{\mathrm{sw}}\in\mathbb{R}\), \[F_{\mu\nu}=\frac{1}{8}(Q_{\mu\nu}-Q_{\nu\mu})\,, \tag{25}\] and \[\sigma_{\mu\nu}=\frac{1}{2}(\gamma_{\mu}\gamma_{\nu}-\gamma_{\nu}\gamma_{\mu} )\,. 
\tag{26}\] The operator \(D_{\mathrm{WC}}\) can also be mapped to a single PTC layer; however, paths of up to four hops are needed. For the numerical experiments presented in the following sections, we use gauge group SU(3) and the \(D_{\mathrm{WC}}\) operator tuned to near criticality, i.e., the mass parameter is chosen such that the real part of the smallest eigenvalue is close to zero. This provides a challenging problem even for the small lattice volume with \(L_{1}=L_{2}=L_{3}=8\) and \(L_{4}=16\) used in this work. We set \(m=-0.6\) and \(c_{\mathrm{sw}}=1\) on a pure Wilson gauge configuration [29] with coupling parameter \(\beta=6.0\). We use periodic boundary conditions also for the fields in \(\mathcal{F}_{\varphi}\), i.e., \(\theta_{\mu}=0\) in Eq. (5). We show the spectrum of \(D_{\mathrm{WC}}\) on a representative single gauge configuration in Fig. 5. We quantify the improvement achieved using the preconditioner \(M\) by the reduction in iteration count needed to solve Eq. (20) to \(10^{-8}\) precision in the preconditioned FGMRES [31]. We quote the iteration count gain defined as the iteration count of the unpreconditioned solve divided by the iteration count of the preconditioned solve. Figure 4: Graphical representation of two parallel layers \(L_{1}\) and \(L_{2}\) being applied to a single input feature and mapping to two output features. As before, the features are represented by planes. An identity layer (i.e., a copy operation) is represented by a dashed arrow. In this example, the only nontrivial layer is \(L_{1}\), which includes a single path in (11) or (12). The methods developed in this work also extend to other Dirac matrices. However, particular challenges exist in some cases. For example, in the case of domain-wall fermions [32; 33] the spectrum encircles the origin [9; 10], which limits the convergence of unpreconditioned solves of \(Du=b\) using Krylov-subspace methods. ## IV High-mode preconditioners We want to learn a preconditioner \(M\) that approximates \(D^{-1}\). For this purpose it is useful to consider an eigendecomposition of \(D\) and first construct optimal models for the high-mode and low-mode components separately. We study the high-mode component in this section and the low-mode component in Sec. V. We then combine the corresponding models in Sec. VI. ### Model setup and training strategy The high-mode part of the spectrum of \(D_{\rm WC}\) is related to the short-distance behavior. Therefore we expect a single layer with paths up to one hop to already show a gain in iteration count. We consider a linear model \(M\) mapping a vector \(x\) to \(Mx\). We employ a supervised learning approach and describe a single training step in the following. We first pick a random vector \(v\) with components drawn from a Gaussian distribution with mean zero and unit standard deviation. We then construct the cost function3 Footnote 3: Note that in Eq. (20) we use \(DM\), while in Eq. (27) we use \(MD\). If \(DM\) is close to the identity, then so is \(MD\), and thus Eq. (27) is a suitable cost function. \[C=|MD_{\rm WC}v-v|^{2} \tag{27}\] and its derivatives with respect to the model weights using backpropagation. This corresponds to a batch of a single training tuple \((D_{\rm WC}v,v)\), where the model learns to map the first to the second component. This cost function is dominated by the high modes of \(D_{\rm WC}\) and is therefore similar in spirit to using the spectral radius [12; 14].
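As a toy illustration of this training step (our own sketch; the paper trains the layer weights with backpropagation and Adam in GPT, see App. A), one can replace \(D_{\rm WC}\) and the model \(M\) by small dense matrices and follow the gradient of Eq. (27) directly:

```
import numpy as np

rng = np.random.default_rng(1)
k = 32                              # toy matrix size (our choice)

# dense stand-ins for the Dirac matrix and the linear model M
D = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
M = np.zeros((k, k), dtype=complex)

lr = 1e-4
for step in range(2000):
    # one training tuple (D v, v) with a Gaussian random vector v, as in the text
    v = rng.normal(size=k) + 1j * rng.normal(size=k)
    a = D @ v
    r = M @ a - v                   # residual entering Eq. (27)
    C = np.vdot(r, r).real          # cost C = |M D v - v|^2
    # steepest-descent step on the complex weights (the paper uses Adam instead)
    M -= lr * np.outer(r, a.conj())
```

A fresh random vector per step corresponds to the unbounded training data set discussed next.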
Since we use a different random vector at every iteration, our training data set is unbounded in size and there is no need to add a regulator. This holds even for LPTC layers with a large number of model weights. We then apply a single iteration of the Adam optimizer [34] with parameters \(\beta_{1}=0.9\), \(\beta_{2}=0.98\), and \(\alpha=10^{-3}\) that gave good performance for the models considered in this work. This process is repeated until the model weights are converged sufficiently. All layers and optimizers are implemented in the Grid Python Toolkit (GPT) [3], and corresponding code samples are provided in App. A. ### Locality and communication avoidance In Fig. 6 we compare the performance of single-layer models with a maximum of one hop. They correspond to a version of Fig. 2 with a single input and output feature and nine paths corresponding to \[T_{0} =\mathbb{1}\,,\] \[T_{1} =H_{1}\,, T_{2} =H_{2}\,,\] \[T_{3} =H_{3}\,, T_{4} =H_{4}\,,\] \[T_{5} =H_{-1}\,, T_{6} =H_{-2}\,,\] \[T_{7} =H_{-3}\,, T_{8} =H_{-4}\,. \tag{28}\] We also investigate communication-avoiding versions with local volume \(4^{3}\times 8\). We find that the LPTC models do not perform better in terms of iteration count gain than the PTC models. However, the LPTC models require more training compared to the PTC models. The slower convergence is expected due to the much larger number of weights in the LPTC models. We find that eliminating communication between sub-volumes, as described in Sec. II.5, only leads to a modest reduction in performance. Figure 5: Eigenvalues \(\lambda\) of the Wilson-clover Dirac operator with \(m=-0.6\) and \(c_{\rm sw}=1\) on a pure-Wilson-gauge configuration with \(\beta=6\), \(L_{1}=L_{2}=L_{3}=8\), and \(L_{4}=16\). The mass \(m\) is tuned to near criticality for the experiments in this work. We computed the boundaries of the spectrum using the Arnoldi method applied to \((D-\lambda)^{-1}\) for several carefully selected values of \(\lambda\) and filled in the bulk of the spectrum by hand for illustrative purposes. After translating the iteration count gain to a reduction in time-to-solution, we may therefore find the communication-avoiding models to perform best. ### Multiple hops and deep networks In Fig. 7, we investigate models with multiple hops either in a single layer or distributed over two layers. We use one-hop layers with paths defined in Eq. (28) as well as a two-hop layer extending this set by all combinations \[H_{a}H_{b} \tag{29}\] for \(a,b\in\{-4,-3,-2,-1,1,2,3,4\}\) with \(a\neq-b\). The two-hop layer therefore has 65 distinct paths compared to the 9 paths of the one-hop layer. The first model that we investigate stacks two one-hop layers with one input and one output feature back-to-back. We denote this model as "2 layers (\(1\to 1\to 1\)), 1 hop." The second model is similar but has two output features in the first layer and correspondingly two input features in the second layer. We denote this model as "2 layers (\(1\to 2\to 1\)), 1 hop." The third model consists of a single two-hop layer as described above. We find that the second model performs best and gives approximately twice the iteration count gain of the corresponding single-layer models with a maximum of one hop shown in Fig. 6. Since the layers are linear, the two-layer models are not more expressive compared to the single-layer model with two hops. We therefore expect the third model to be able to match the performance of the second model with a sufficiently improved training procedure.
It is not surprising that the second model can be trained more efficiently compared to the third model given that it has a smaller number of weights. We conclude that while deep models do not increase expressivity, the computational effort needed to train deep models may be reduced compared to a corresponding shallow model with more paths. ### Transfer learning In Fig. 8 we investigate how well the one-layer one-hop PTC model of Fig. 6 that was trained on a given gauge configuration with \(\beta=6.0\) and \(m=-0.6\) performs when it is used in the case of (i) a different gauge configuration of the same gauge ensemble, (ii) a gauge configuration of a different ensemble with \(\beta=5.9\), and (iii) the same gauge configuration but with a different mass \(m=-0.55\). Figure 6: Convergence of the cost function (27) and iteration count gain for one-layer and one-hop high-mode preconditioners. The lattice volume is \(8^{3}\times 16\), and the local volume for the communication-avoiding version is \(4^{3}\times 8\). Figure 7: Convergence of the cost function (27) and iteration count gain for two-layer and two-hop high-mode preconditioners. In all cases, we investigate the performance without re-training and after additional re-training steps following the same procedure as for the initial training. We find that the high-mode preconditioner model does not require re-training to efficiently perform in all three cases. Once such a model is trained, it can be used efficiently for different gauge configurations of the same and similar ensembles. We note that the maximum iteration count gain for mass \(m=-0.55\) is significantly reduced. In this case, however, the spectrum is not well tuned to criticality and the initial problem is therefore less challenging. Comparing with Fig. 6, we also observe a modest fluctuation in iteration count gain between different configurations. ## V Low-mode preconditioners We now turn to the low-mode component in the eigen-decomposition of \(D\). Since the low-mode component corresponds to the long-distance behavior of the Dirac operator \(D\), it is not efficient to use the layers discussed in Sec. IV since a rather deep network composed of such layers would be needed to propagate information over sufficiently long distances. The multi-grid paradigm, however, is ideally suited to address this issue. In this section, we focus solely on the low-mode component and then combine low modes and high modes in Sec. VI. ### Model setup and training strategy In the multi-grid approach, we define an additional coarser version of the lattice as well as restriction and prolongation operations that map between the fine and coarse lattices. These operations must preserve the low-mode component of \(D\) [35]. To achieve this, we first find vectors \(u_{1},\ldots,u_{s}\) in the near-null space of \(D\), i.e., vectors that satisfy \[Du_{i}\approx 0 \tag{30}\] with null vector \(0\) and \(i\in\{1,\ldots,s\}\) for \(s=\dim(\tilde{V}_{I})\). These vectors are then blocked such that one site \(y\in\tilde{S}\) on the coarse lattice corresponds to a set of sites, or block, \(B(y)\subset S\) on the fine lattice. Let us denote such a blocked vector, which lives on the sites \(B(y)\), by \(u_{i}^{y}\). One then defines an inner product within each block \(B(y)\) and orthonormalizes the vectors \(u_{1}^{y},\ldots,u_{s}^{y}\) within each block according to this inner product. The resulting vectors are labeled \(\bar{u}_{1}^{y},\ldots,\bar{u}_{s}^{y}\). The linear map \(W^{\dagger}\) discussed in Sec.
II.3 is then defined as \[W(y,x)^{\dagger}=\sum_{i=1}^{s}\bar{u}_{i}^{y}(x)\hat{e}_{i}^{\dagger} \tag{31}\] with standard basis \(\hat{e}_{1},\ldots,\hat{e}_{s}\) of \(\tilde{V}_{I}\) and \(x\in B(y)\). In practice a good approximation of such vectors \(u_{i}\) can be found by applying the FGMRES solver for matrix \(D\) with source vector \(0\) and a random vector as initial guess. This procedure removes high-mode components in \(u_{i}\), leaving a linear combination of low-modes. We follow this approach in the numerical experiments presented in the following. While high precision is not needed, we solve to \(10^{-8}\) precision to avoid an additional tuning step. We use a coarse grid of size \(2^{3}\times 4\) and a list of 12 near-null vectors \(u_{1},\ldots,u_{12}\). We define a coarse-grid operator \[\tilde{D}=RD_{\text{WC}}P \tag{32}\] with restriction matrix \(R\) and prolongation matrix \(P\) that are defined according to Eqs. (14) and (15). We then train a coarse-grid model \(\tilde{M}\) that contains a single LPTC layer with gauge fields \(U_{\mu}=\mathbbm{1}\), \(V_{G}=\mathbbm{C}^{1}\), \(V_{\tilde{G}}=\tilde{V}_{I}\), and use only zero-hop and one-hop paths corresponding to \(\{H_{1},H_{2},H_{3},H_{4},H_{-4}\}\). We omit the \(H_{-1}\), \(H_{-2}\), and \(H_{-3}\) paths since they are redundant on a \(2^{3}\times 4\) coarse grid with periodic boundary conditions. The gauge fields are replaced with the identity since the coarse fields do not have a gauge degree of freedom. We refer to this special case of the LPTC layer as cLPTC in the following. Figure 8: Convergence of the cost function (27) and iteration count gain for one-layer and one-hop high-mode preconditioners. We re-train the model of Fig. 6 for a different gauge configuration in the same ensemble, for a different value of \(\beta=5.9\), and for a different mass value of \(m=-0.55\). The network performs well in all cases even without re-training. We follow the training procedure described in Sec. IV.1 but replace the cost function with \[C=\left|\tilde{M}\tilde{D}v-v\right|^{2}. \tag{33}\] It is worth noting that one could have considered a different cost function \[C^{\prime}=|\tilde{M}v-\tilde{D}^{-1}v|^{2} \tag{34}\] in order to project more strongly on the low modes of \(\tilde{D}\). In this case, however, the training tuples require the somewhat costly inversion of \(\tilde{D}\). We find that the cost function Eq. (33) is sufficient for the purpose of training the coarse-grid model. This point will be revisited when we train a combined multi-grid model in Sec. VI. Note that the gauge equivariance of the restriction and prolongation layers is guaranteed if every vector \(u_{i}\) is a linear combination of eigenmodes of \(D\) with gauge-invariant coefficients. In our procedure the coefficients are gauge invariant in the statistical average over random initial guess vectors. Furthermore, note that the weights \(W\) of the restriction and prolongation layers could also be learned directly [12; 14]. We leave the systematic study of learning the restriction and prolongation layers, including explicitly gauge-equivariant versions, to future work. ### Results In Fig. 9, we show the cost function (33) and the iteration count gain for the training of the coarse-grid model \(\tilde{M}\). In this case, we consider the iteration count gain for the inverse of \(\tilde{D}\). We find that a significantly longer training process is needed compared to the high-mode preconditioner models of Sec. IV. 
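To make the construction of the maps \(W\) from Eq. (31) concrete, here is a NumPy sketch (our own, with toy shapes of our choosing) that blocks a set of near-null vectors, orthonormalizes them per block via QR, and applies the restriction and prolongation of Eqs. (14) and (15):

```
import numpy as np

rng = np.random.default_rng(2)
s, n_blocks, block_size = 12, 32, 96   # toy sizes; block_size >= s is required

# u[i, y, :] holds near-null vector u_i on block B(y), flattened over sites and V_I
u = rng.normal(size=(s, n_blocks, block_size)) \
    + 1j * rng.normal(size=(s, n_blocks, block_size))

# blockwise orthonormalization: QR within each block gives the \bar{u}_i^y
ubar = np.empty_like(u)
for y in range(n_blocks):
    q, _ = np.linalg.qr(u[:, y, :].T)   # columns of q are orthonormal
    ubar[:, y, :] = q.T

def restrict(phi):
    # Eq. (14): fine field (n_blocks, block_size) -> coarse field (n_blocks, s)
    return np.einsum("iyb,yb->yi", ubar.conj(), phi)

def prolong(phi_c):
    # Eq. (15): coarse field -> fine field
    return np.einsum("iyb,yi->yb", ubar, phi_c)

# Eq. (16): restriction after prolongation is the identity on the coarse space
phi_c = rng.normal(size=(n_blocks, s)) + 1j * rng.normal(size=(n_blocks, s))
assert np.allclose(restrict(prolong(phi_c)), phi_c)
```

The random vectors here merely stand in for the FGMRES-filtered near-null vectors described above; the blockwise QR plays the role of the per-block Gram-Schmidt orthonormalization.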
We also investigate using the fully trained model from a given gauge configuration and applying it to a different gauge configuration. We use the same definition of the restriction and prolongation layers on the different gauge configuration to preserve the definition of \(\tilde{D}\). For the same reason we also use the same seeds for the random number generator to generate the initial guess for the fields \(u_{1},\dots,u_{12}\). We find that after a modest amount of re-training the model performs very well on the different gauge configuration. The re-training phase is significantly shorter compared to the initial training phase. We note that the maximum iteration count gain again differs to some degree between configurations. ## VI Multi-Grid Preconditioners In the previous sections we successfully trained separate models \(M\) to approximate the short-distance and long-distance features of \(D^{-1}\). In this section we combine them to obtain a model that approximates \(D^{-1}\) over a wide range of distances. ### Smoother model setup and training strategy We first create a version of the short-distance model that accepts a second input feature, which provides an initial guess. This model plays the role of a smoother in the multi-grid paradigm. The initial guess is provided by the long-distance model acting on the coarse grid. Concretely, we aim to find a sequence of \(u_{k}\) that approximately solve \(Du=b\) such that the equation becomes exact in the \(k\to\infty\) limit. The smoother then maps the tuple \((u_{k},b)\) to \(u_{k+1}\). If we have a high-mode model \(M_{\text{h}}\) that approximates \(D^{-1}\) sufficiently well this can be achieved by the iterative relaxation approach \[u_{k+1} =(\mathbb{1}-M_{\text{h}}D)u_{k}+M_{\text{h}}b\] \[=u_{k}+M_{\text{h}}(b-Du_{k})\,. \tag{35}\] This approach is also commonly referred to as defect correction with defect \(b-Du_{k}\). Since both \(D\) and the high-mode model \(M_{\text{h}}\) can be represented by (L)PTC layers we should be able to train a model \(M_{\text{s}}\) only composed of (L)PTC layers to map \((u_{k},b)\) to a \(u_{k+r}\) for \(r\in\mathbb{N}^{+}\). Such a model has two input Figure 9: Convergence of the cost function (33) and iteration count gain for one-layer and one-hop low-mode preconditioners. We show both the initial training in blue as well as the performance of the trained model on a different gauge field of the same gauge ensemble in orange. We find that after a moderate amount of re-training, the model performs well on a different gauge configuration. features and one output feature. We may construct \(M_{\text{s}}\) using \(2r\) (L)PTC layers stacked back-to-back since each iteration of Eq. (35) corresponds to two (L)PTC layers. All but the final layer need two output features. In order to choose a reasonable value for \(r\), we studied the performance of the final multi-grid preconditioner described below and found that \(r=2\) performed significantly better than \(r=1\). We therefore train the model \(M_{\text{s}}\) for \(r=2\) using the cost function \[C=|M_{\text{s}}(u_{k},b)-u_{k+r}|^{2} \tag{36}\] with random vectors \((u_{k},b)\) and \(u_{k+r}\) given by Eq. (35). We use the same optimizer as in Secs. IV and V. In Fig. 10, we show the training progress. The iteration count gain is obtained by using \(M_{\text{s}}\) with initial guess zero as a preconditioner for \(Du=b\). We use both PTC and LPTC layers with zero-hop and one-hop paths. 
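Training data for the smoother can be generated directly from the relaxation of Eq. (35). The following is a toy dense-matrix sketch (our own; the well-conditioned `D` and the perturbed inverse standing in for the trained high-mode model \(M_{\rm h}\) are illustrative choices, not the paper's setup):

```
import numpy as np

rng = np.random.default_rng(3)
k = 64                                       # toy problem size

D = np.eye(k) + 0.1 * (rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
M_h = np.linalg.inv(D) + 0.05 * rng.normal(size=(k, k))   # stand-in for M_h ~ D^{-1}

def relax(u, b, r):
    # Eq. (35): u_{k+1} = u_k + M_h (b - D u_k), iterated r times
    for _ in range(r):
        u = u + M_h @ (b - D @ u)
    return u

# one training tuple for the smoother cost of Eq. (36):
# random input pair (u_k, b), target u_{k+r} with r = 2
u_k = rng.normal(size=k) + 1j * rng.normal(size=k)
b = rng.normal(size=k) + 1j * rng.normal(size=k)
target = relax(u_k, b, r=2)
```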
We expect these models to yield an iteration count gain of approximately twice the iteration count gain of the corresponding high-mode models shown in Fig. 6 because of \(r=2\). We find that this expectation is satisfied by our data. In Fig. 10, we first train the PTC model and then use the model weights as initial values for the LPTC model (using the same value for every site \(x\)). We find no additional benefit by using the LPTC model. ### Multi-grid model setup and training strategy We are now ready to combine the individual models to a complete multi-grid model \(M\) as shown in Fig. 11. We start by duplicating the input feature. One copy is preserved for the smoother, while the other copy is restricted to the coarse grid, where we apply the coarse-grid model of Sec. V. The result is then prolonged to the fine grid, and both the copy of the initial feature and the result of the coarse-grid model are combined to two input features for the last four layers. These layers are the smoother that we have learned in Sec. VI.1. We may expect this combined model to work well by using the weights obtained in the training of the respective model components. The model performance may, however, be further improved by continued training of the complete multi-grid model \(M\). For such additional training, we need to modify the cost function of Secs. V and IV such that both the low-mode and high-mode components of \(D\) constrain the model in the training phase. To this end, we use \[C=|Mb_{h}-u_{h}|^{2}+|Mb_{\ell}-u_{\ell}|^{2} \tag{37}\] with \(b_{h}=D_{\text{WC}}v_{1}\), \(u_{h}=v_{1}\), \(b_{\ell}=v_{2}\), and \(u_{\ell}=D_{\text{WC}}^{-1}v_{2}\). Here, \(v_{1}\) and \(v_{2}\) are random vectors normalized such that \(|b_{h}|=|b_{\ell}|=1\). We therefore use a batch size of two with one training tuple geared towards the high-mode component and the other training tuple geared towards the low-mode component of \(D_{\text{WC}}\). We can shift the focus of the training between both components by adding a relative weight factor to Eq. (37). ### Results In Fig. 12, we show the performance of the multi-grid (MG) model with initial weights taken from the trained model components as well as progress achieved by continued training of the combined model \(M\). From the start, the model performs substantially better than the smoother by itself. Continued training of the combined model further improves the iteration count gain to approximately 40. Such continued training converges within the first 20 training steps. We also study using the multi-grid model trained on one configuration applied to a different gauge configuration of the same gauge ensemble. In Fig. 12, we show that after a brief re-training phase of only 20 training steps, the model performs optimally on the different gauge configuration as well. Note that for concreteness we only present results for a two-level multi-grid preconditioner in this work. The extension to multiple levels is straightforward. In Fig. 11, one merely has to replace the coarse-grid layer limited by the blue features by the entire model as presented in Fig. 11. By repeating this process \(n\) times, one obtains an \((n+2)\)-level multi-grid preconditioner. Figure 10: Convergence of the cost function (36) and iteration count gain for four-layer and one-hop smoother. The iteration count gain is studied for the case of zero initial guess. We first train the PTC model and use the result as initial weights for the LPTC model. 
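Functionally, the combined model of Fig. 11 applies a coarse-grid correction followed by the smoother. In schematic Python form (our own summary; `restrict`, `prolong`, `coarse_model`, and `smoother` stand for the trained components described in the text):

```
def multigrid_preconditioner(b, restrict, prolong, coarse_model, smoother):
    """Two-level model of Fig. 11 (schematic)."""
    # coarse-grid branch: restrict the input, apply the coarse-grid model,
    # and prolong the result back to the fine grid
    u0 = prolong(coarse_model(restrict(b)))
    # the smoother refines this initial guess, taking (u0, b) as its two inputs
    return smoother(u0, b)
```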
Also note that we use a rather small lattice volume of \(8^{3}\times 16\) in this work. In future work, we will investigate multi-grid models in more challenging large-volume simulations, where even larger iteration count gains should be achievable. ## VII Summary and Outlook In this paper we have initiated a program to use gauge-equivariant neural networks to learn preconditioners in lattice QCD. We introduced a number of building blocks from which suitable models can be constructed: (i) parallel-transport convolution layers that can include arbitrary paths, with either global or local weights, (ii) restriction and prolongation layers that implement the multi-grid paradigm, and (iii) parallel layers that act on a single input feature. To solve the Dirac equation for the Wilson-clover Dirac operator we have first constructed models that approximate the high-mode and low-mode component of the operator separately. We then combined these models in a two-level multi-grid model, which can be extended straightforwardly to an arbitrary number of levels. In all cases we found that the models reduce the iteration count of the outer solver significantly, e.g., by up to \(O(40)\) in the multi-grid model. We also found that transfer learning works: If we consider another gauge configuration (for the same or a slightly different value of \(\beta\)) or a slightly different quark mass, only a modest amount of re-training (or none at all) is required for the model to perform efficiently again. We also introduced a communication-avoiding algorithm in which layers do not transfer information between sub-volumes assigned to different MPI processes. In our numerical experiments we found that the performance, i.e., the iteration count gain, of the corresponding model is only slightly reduced. We expect that on large supercomputers, the wall-clock time saved by avoiding Figure 11: The combined two-level multi-grid model studied in this work. The use of the multi-grid paradigm allows for the efficient transport of information over both short and long-distances. Additional levels can be introduced by recursively replacing the coarse-grid layer (limited by the blue features) by the entire model as presented above. Figure 12: Convergence of the cost function (37) and iteration count gain for the complete multi-grid model. We use the weights of the individually trained model components as starting point and show further improvement by training the combined model. The model also performs well on a different gauge configuration and quickly converges to optimum performance after a modest amount of re-training. communication more than compensates for this modest reduction. There are many interesting directions which we plan to explore in future work. For example, we will attempt to learn the weights \(W\) of the restriction and prolongation layers directly, without computing the near-null vectors explicitly. Also, we will investigate the space of possible models that can be constructed from our building blocks in a more comprehensive manner. Furthermore, we plan to perform benchmarks that measure the cost of (re-) training and applying our models and compare the overall wall-clock time to standard state-of-the-art multi-grid methods. It would also be worthwhile to apply our ideas to Dirac operators whose spectrum encircles the origin, such as in the case of domain-wall fermions. 
Finally, our finding that very little, if any, re-training is needed between configurations suggests that the present approach could also be beneficial in the generation of gauge-field configurations by Markov chain Monte Carlo. ## Appendix A GPT code listings In this appendix, we provide Grid Python Toolkit (GPT) [3] code listings to implement the models used in this work. We first import the library and load a gauge field \(U\):

```
import gpt as g

# load gauge field
U = g.load("gauge_field")
grid = U[0].grid
```

The layer drawn in Fig. 2 corresponds to

```
# object types for QCD
ot_i = g.ot_vector_spin_color(4, 3)
ot_w = g.ot_matrix_spin(4)

# two distinct paths
paths = [
    g.path().f(0).f(1).f(0),
    g.path().f(1).b(0)
]

# define an abbreviation
l = g.ml.layer

# define the layer of Fig. 2
fig2 = l.parallel_transport_convolution(
    grid, U, paths, ot_i, ot_w, 2, 1
)
```

in the case of lattice QCD. Next, we define restriction and prolongation layers to a coarse grid of size \(4^{4}\) defined using vectors \(\bar{u}_{i}\) as

```
# define coarse grid
coarse_grid = g.grid([4, 4, 4, 4], g.double)

# load the \bar{u}_i vectors
u_bar = g.load("u_bar")

# create blocking map
b = g.block.map(coarse_grid, u_bar)

# create restriction and prolongation layers
restrict = l.block.project(b)
prolong = l.block.promote(b)
```

Note that in the numerical work in this paper, we used a \(2^{3}\times 4\) coarse grid, while we present the \(4^{4}\) case here since it lifts the degeneracy of paths mentioned in Sec. V. The complete multi-grid preconditioner model of Fig. 11 corresponds to

```
# define abbreviations
lptc = l.local_parallel_transport_convolution
ptc = l.parallel_transport_convolution

# identities on coarse grid
one = g.complex(coarse_grid)
one[:] = 1

I = [g.copy(one) for i in range(4)]

# coarse-grid vector space
cot_i = g.ot_vector_complex_additive_group(
    len(u_bar)
)
cot_w = g.ot_matrix_complex_additive_group(
    len(u_bar)
)

# consider only nearest-neighbor hops
paths = [
    g.path().forward(i)
    for i in range(4)
] + [
    g.path().backward(i)
    for i in range(4)
]

# coarse-grid layer
def coarse_lptc(n_in, n_out):
    return lptc(
        coarse_grid, I, paths,
        cot_i, cot_w, n_in, n_out
    )

# fine-grid layer
def fine_ptc(n_in, n_out):
    return ptc(
        grid, U, paths, ot_i,
        ot_w, n_in, n_out
    )

# combined multi-grid model
model_multi_grid = g.ml.model.sequence(
    l.parallel(
        l.sequence(),
        l.sequence(
            restrict,
            coarse_lptc(1, 1),
            prolong
        )
    ),
    fine_ptc(2, 2),
    fine_ptc(2, 2),
    fine_ptc(2, 1)
)
```
2303.15245
Comparison between layer-to-layer network training and conventional network training using Deep Convolutional Neural Networks
Title: Comparison between layer-to-layer network training and conventional network training using Deep Convolutional Neural Networks Abstract: Convolutional neural networks (CNNs) are widely used in various applications due to their effectiveness in extracting features from data. However, the performance of a CNN heavily depends on its architecture and training process. In this study, we propose a layer-to-layer training method and compare its performance with the conventional training method. In the layer-to-layer training approach, we treat a portion of the early layers as a student network and the later layers as a teacher network. During each training step, we incrementally train the student network to learn from the output of the teacher network, and vice versa. We evaluate this approach on VGG16, ResNext, and DenseNet networks without pre-trained ImageNet weights, and on a regular CNN model. Our experiments show that the layer-to-layer training method outperforms the conventional training method for all of these models. Specifically, we achieve higher accuracy on the test set for the VGG16, ResNext, and DenseNet networks and the CNN model using layer-to-layer training compared to the conventional training method. Overall, our study highlights the importance of layer-wise training in CNNs and suggests that layer-to-layer training can be a promising approach for improving the accuracy of CNNs.
Kiran Kumar Ashish Bhyravabhottla, WonSook Lee
2023-03-27T14:29:18Z
http://arxiv.org/abs/2303.15245v2
Comparison between layer-to-layer network training and conventional network training using Deep Convolutional Neural Networks ###### Abstract Convolutional neural networks have been widely deployed in almost all applications and have reached nearly every domain and scenario. There has since been significant development of neural architectures such as transfer learning, generative networks, and diffusion models, but the base of each of these networks is still the convolutional architecture. In today's applications, accuracy plays a crucial role, and accuracy depends mainly on the features. The features are extracted through the convolutional filters inside the hidden layers, so the layers of any architecture play a vital role in the training process. In this research, we propose a comparative analysis of layer-to-layer training and conventional training of a network. In layer-to-layer training, the portion of the first layers is treated as a student network and the last layers are treated as a teacher network. During each step of training, the trained portion keeps incrementing through the forward layers (the student network) and decrementing through the last layers (the teacher network). This layer-to-layer comparison is tested on the VGG16, ResNext and DenseNet networks without using any pre-trained ImageNet weights, and on a normal CNN model. The results are then compared with the conventional training method for VGG16, ResNext, DenseNet and the normal CNN model, respectively. Convolutional neural networks, VGG16, ResNext, DenseNet, ImageNet, layer-to-layer training. Code: [https://github.com/ashish-AIML/LIIILab](https://github.com/ashish-AIML/LIIILab) ## 1 Introduction Convolutional neural networks have gained momentum in image classification, object detection, and image segmentation applications. For certain real-world scenarios, traditional machine learning still has limitations despite its success in many practical applications: obtaining sufficient training data can be costly, time-consuming, or even impossible. This problem can be partially addressed by semi-supervised learning, which does not require massive amounts of labeled data. For improved learning accuracy, semi-supervised approaches utilize a large amount of unlabeled data in addition to a limited amount of labeled data. The resulting models are often still unsatisfactory because even unlabeled instances can be challenging to collect. Hence, transfer learning came into existence, with the aim of transferring knowledge across domains when labeled data are limited. In simple words, it is learning to transfer the generalization of experience: the ability to handle new situations through prior experience. A commonly used transfer-learning methodology is initialization with ImageNet weights [2]. The idea of implementing ImageNet as pre-trained model weights is inspired by human beings' ability to transfer knowledge across domains. It leverages the knowledge from the source, i.e., ImageNet [2] data, to improve the performance of the model and to minimize the amount of labeled data required in the target domain. Now, the main research focus is on improving accuracy. Critical applications require very high accuracy, often approaching 99%, so tolerance for accuracies below even 97% is steadily shrinking.
This research explores layer-to-layer training of a simple convolutional network as a teacher-student mechanism and analyzes its memory consumption, training speed, and performance against the normal conventional training method. ## 2 Background and Motivation Modern neural networks are composed of dozens or hundreds of layers that perform mathematical operations. These layers take a feature tensor as input and output activations corresponding to those features. The training algorithm iterates over a large dataset many times and minimizes the loss function. The full dataset is partitioned into mini-batches; one full iteration through the dataset is called an epoch. The training of a neural network consists of (a) a forward pass, (b) a backward pass, and (c) parameter synchronization. The forward pass (FP) evaluates the model layer by layer in each iteration to determine the loss with respect to the target labels under the chosen loss function. GPU computing is needed for both the forward and backward passes. In the backward pass (BP), we determine the parameter gradients from the last layer to the first layer using the chain rule of derivatives of the loss [5]. We then update the model parameters after each iteration using an optimization procedure such as stochastic gradient descent (SGD) [4]. Since today's datasets are complex, several layer-intensive architectures have been proposed to achieve higher accuracies, and many techniques, such as tuning the parameters of existing algorithms, have been applied to obtain better accuracy. In this research, we explore a different approach to training the neural network. Recent efforts have shown that front layers extract general features while deeper layers are more task-specific feature extractors [5]. Our research aims at exploring layer-wise training within a network without using any pre-trained residual network weights [1] or any other sort of pre-trained weights; instead, we train from scratch. ## 3 Technical Approach **Architecture:** Our architecture is a simple, standard convolutional neural network trained sequentially. The base network is a _12-layered network_. The first layer is a 2D convolutional layer with _32_ filters, each with a kernel size of _3x3_, _'same'_ padding, and _ReLU_ activation [7]. The input shape is the shape of a single image in the training data. The second layer is another 2D convolutional layer with _32_ filters, also with a kernel size of _3x3_, '_same_' padding, and _ReLU_ activation. The third layer is a 2D max pooling layer with a pool size of _2x2_. The fourth layer is a dropout layer with a rate of _0.25_, which randomly drops 25% of the inputs during training to prevent overfitting. The fifth layer is a 2D convolutional layer with _64_ filters, each with a kernel size of _3x3_, _'same'_ padding, and _ReLU_ activation. The sixth layer is another 2D convolutional layer with _64_ filters, also with a kernel size of _3x3_, _'same'_ padding, and _ReLU_ activation. The seventh layer is another 2D max pooling layer with a pool size of _2x2_. The eighth layer is another dropout layer with a rate of _0.25_. The ninth layer is a flattened layer that flattens the output of the previous layer into a 1D array. The tenth layer is a fully connected layer with _512_ units and _ReLU_ activation. The eleventh layer is another dropout layer with a rate of _0.5_. The final layer is another fully connected layer with _num_classes_ units and a _softmax_ activation function.
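The paper publishes no code for this base model; the following Keras sketch (the framework choice is our assumption, inferred from the description) mirrors the twelve layers just described and matches the parameter counts given below:

```
from tensorflow import keras
from tensorflow.keras import layers

def build_base_cnn(input_shape=(32, 32, 3), num_classes=100):
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
```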
**VGG16 Architecture:** The first convolutional layer has _32_ filters, each with a kernel size of _3x3_. So, the number of parameters in this layer is _(3 * 3 * input_channels + 1) * 32_, where _input_channels_ is the number of channels in the input image (usually 3 for RGB images). In this case, the input shape is _(32, 32, 3)_, so the number of parameters in this layer is _(3 * 3 * 3 + 1) * 32 = 896_. The second convolutional layer has the same parameters as the first, so it also has _896_ parameters. The max pooling layers and dropout layers do not have any parameters. The third convolutional layer has _64_ filters, so the number of parameters in this layer is _(3 * 3 * 32 + 1) * 64 = 18496_. The fourth convolutional layer has the same parameters as the third, so it also has _18496_ parameters. The first fully connected layer has _512_ units, so the number of parameters in this layer is _(previous_layer_size + 1) * 512_, where _previous_layer_size_ is the flattened size of the previous layer. In this case, the previous layer has a flattened size of _4096 (64 * 8 * 8)_, so the number of parameters in this layer is _(4096 + 1) * 512 = 2097664_. The second fully connected layer has _num_classes_ units, so the number of parameters in this layer is _(previous_layer_size + 1) * num_classes_. In this case, the previous layer has a size of _512_, so the number of parameters in this layer is _(512 + 1) * num_classes_. **ResNext Architecture:** The ResNext architecture takes an input tensor of the shape specified by input_shape and produces an output tensor whose size equals the number of classes, i.e., 100. The architecture consists of four groups of convolutional layers, each group containing two convolutional layers with the same number of filters. The number of filters is doubled in each group, starting from 64 in the first group. After each convolutional layer, batch normalization is performed, followed by the ReLU activation function. Max pooling is applied after each group to reduce the spatial size of the feature maps. The final layers of the network consist of a global average pooling layer followed by a fully connected layer with num_classes neurons and a softmax activation function to produce a probability distribution over the classes. This architecture is based on the ResNet architecture, which introduces residual connections to address the vanishing gradient problem in deep neural networks. However, the ResNext architecture extends ResNet by introducing a split-transform-merge strategy for the residual connections, which allows for more diverse representations to be learned by the network. **DenseNet Architecture:** The architecture consists of a convolutional layer followed by batch normalization and ReLU activation, a series of dense blocks, and finally a global average pooling layer and a fully connected softmax output layer. Each dense block consists of a series of bottleneck layers and convolutional layers with concatenation of feature maps. ## 4 Experiments We evaluate our model on the standard CIFAR100 [3] dataset with a normal CNN network and on dense networks such as VGG16 [6], ResNext [1], and DenseNet [6]. For each training run, the number of epochs is set to **300**. All the experiments are implemented on Google Colab GPU notebooks.
The performance metrics used to evaluate the models are: (i) total training time, (ii) accuracy, and (iii) total memory consumption. ### Benchmark Datasets **CIFAR100:** CIFAR-100 is a popular image classification dataset that contains 60,000 32x32 color images in 100 classes, with 600 images per class. The dataset is split into 50,000 training images and 10,000 testing images. The 100 classes in CIFAR-100 are grouped into 20 superclasses, each containing five fine-grained classes. ### Layer-to-Layer Training The model is trained with layer-to-layer training. In this mechanism, we declare a single network with _n_ layers. In the first step, the 1st layer and the (n-1)th layer are trained while the remaining layers are frozen; here, the 1st layer acts as the student network and the (n-1)th layer acts as the teacher network. In the second step, the 2nd layer (student) and the (n-2)th layer (teacher) are trained and the rest of the layers are frozen. In the third step, the 3rd layer (student) and the (n-3)th layer (teacher) are trained, freezing the rest of the layers. This process continues through the pairs _(i+1, n-(i+1)), (i+2, n-(i+2)), (i+3, n-(i+3)), ..., (i+n/2, n-(i+n/2))_, where _n_ is the number of layers and the student index _i_ starts at 0. After training with all the layer pairs, we perform an ensemble over all the layer pairs to obtain the final accuracy (a code sketch of this schedule is given at the end of this paper). ### Standard Training The performance is compared with the standard training of the networks. Standard training is the usual sequential training of all the layers at once. ### Results and Discussion In this section, we discuss the performance of layer-to-layer training compared with the standard training of the respective architectures. The tabular comparison is shown in Table 1. **Standard CNN:** Since accuracy plays an important role in critical applications, it matters that layer-to-layer training outperforms the standard training method here. As shown in Table 1, the accuracy of layer-to-layer training is **80%**, and that of standard training is **78%**. Applications for which accuracy is critical can therefore benefit from the layer-to-layer training method. On the other two performance metrics, standard training performed better: the total training time for standard training is **72.7 seconds** compared to **309.19 seconds** for layer-to-layer training, and the total memory consumption of standard training is **5.27 GB** compared to **8.86 GB** for layer-to-layer training. Whenever the system is RAM-critical, standard training can be used; on systems with more RAM, layer-to-layer training is preferable since higher accuracies can be achieved. **VGG16:** The VGG16 [6] architecture is trained without pre-trained ImageNet weights, i.e., from scratch. The situation for VGG16 is similar to that of the standard CNN: accuracy is greater for layer-to-layer training than for standard training. However, compared with the standard CNN architecture, the accuracy of VGG16 under both training methods is almost negligible, and even after performing the ensemble step in layer-to-layer training, the accuracy did not increase. The accuracy of layer-to-layer training is **10%** and that of standard training is **9.62%**. The total memory consumption of standard VGG16 is **5.4 GB** and that of layer-to-layer training is **6 GB**.
The total training time of standard VGG16 training is 265.5 seconds, compared to 2311.3 seconds for layer-to-layer training. **DenseNet:** Since accuracy plays an important role in critical applications, it again matters that layer-to-layer training outperforms the standard training method. As shown in Table 1, the accuracy of layer-to-layer training is **63.98%**, and that of standard training is **60.25%**. Applications for which accuracy is critical can therefore benefit from the layer-to-layer training method. On the other two performance metrics, standard training performed better: the total training time for standard training is **36045.163185596466 seconds** compared to **68081.966414779316 seconds** for layer-to-layer training, and the total memory consumption of standard training is **5.26 GB** compared to **7.77 GB** for layer-to-layer training. Whenever the system is RAM-critical, standard training can be used; on systems with more RAM, layer-to-layer training is preferable since higher accuracies can be achieved. **ResNet:** The ResNet architecture is trained without pre-trained ImageNet weights, i.e., from scratch. Accuracy is greater for layer-to-layer training than for standard training. Even after performing the ensemble step in layer-to-layer training, the accuracy did not increase. The accuracy of layer-to-layer training is **56.85%** and that of standard training is **55.28%**. The total memory consumption of standard training of ResNet is **4.9 GB** and that of layer-to-layer training is **6.9 GB**. The total training time of standard training is **3930.0629668235779 seconds** compared to **8790.185438156128 seconds** for layer-to-layer training. The second half of the network is chosen as the teacher model since it operates on feature maps produced by larger filter counts, so more knowledge can be extracted from these layers. It is generally believed that the last layers, which typically involve global pooling operations and fully connected layers, act as classifiers. These layers are responsible for extracting high-level features from the input images, which can be used to classify the image into different classes. Therefore, in this sense, the last layers of a CNN can be considered global feature extractors, as they take into account the entire image and produce a summary of its features that can be used for classification. Hence, they are more knowledgeable than the initial layers. On the other hand, the earlier layers of a CNN typically perform local feature extraction by detecting low-level visual features such as edges, corners, and textures in different regions of the input image. These features are gradually combined and transformed by subsequent layers to form higher-level features that are increasingly global. Hence, accuracy is greater in layer-to-layer training than in standard training. ## 5 Conclusion and Future Work We have researched layer-to-layer training within the same network and shown its advantage by comparing it with normal standard training. The second half of the architecture is treated as the teacher network because of its ability to extract dense features. Layer-to-layer training resulted in greater accuracies for both the normal CNN and the transfer-learning architectures, and as the number of epochs increased, the accuracy of layer-to-layer training increased. Although layer-to-layer training resulted in higher accuracies, its training time and memory consumption are higher compared to normal conventional training.
However, as the dataset size increases, the RAM consumption of layer-to-layer training increases drastically, requiring more than 15 GB of RAM for denser architectures such as ResNext V1 and ResNext V2. Systems with more RAM are therefore recommended for layer-to-layer training. We conclude that as the number of dataset classes and network layers increases, more hyperparameter tuning is required, and pre-trained weights are needed to improve the accuracy. In the future, the dense architectures can be trained with more RAM, for example on multi-GPU systems, and on much larger datasets to study the variation in accuracy between small and very large datasets. Further, instead of stacking multiple training methods and increasing the number of epochs at the cost of memory, the layer-to-layer method can be deployed to improve the performance of the network and obtain higher accuracy. Future experiments can increase the number of layers and epochs and train on higher-quality and larger datasets to evaluate the performance of layer-to-layer training. Experiments can also be conducted with multiple GPUs and multi-threading to compare the training speed of standard training and layer-to-layer training. Since layer-to-layer training resulted in better accuracy than conventional training, future work can also extend this methodology to object detection and image segmentation models to evaluate its performance. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Network Architecture** & **Training Method** & **Total Training Time** & **Accuracy** & **Total Memory Consumption** \\ \hline \multirow{2}{*}{Standard CNN} & Layer-to-layer training & 309.20 seconds & **80\%** & 8862.60 MB \\ \cline{2-5} & Standard training & **72.70 seconds** & 78\% & **5277.43 MB** \\ \hline \multirow{2}{*}{VGG16} & Layer-to-layer training & 2311.32 seconds & **10\%** & 6035.42 MB \\ \cline{2-5} & Standard training & **265.59 seconds** & 9.62\% & **5436.65 MB** \\ \hline \multirow{2}{*}{DenseNet} & Layer-to-layer training & 68081.97 seconds & **63.98\%** & 7.77 GB \\ \cline{2-5} & Standard training & **36045.16 seconds** & 60.25\% & **5.2 GB** \\ \hline \multirow{2}{*}{ResNet} & Layer-to-layer training & 8790.19 seconds & **56.85\%** & 6.9 GB \\ \cline{2-5} & Standard training & **3930.06 seconds** & 55.28\% & **5.5 GB** \\ \hline \end{tabular} \end{table} Table 1: Performance metrics of training methods on the CIFAR-100 dataset
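As referenced in the Layer-to-Layer Training section, the following is a minimal PyTorch sketch of the pairing schedule described there; it assumes the network is exposed as an `nn.ModuleList` of `n` layers, and the names (`set_trainable_pair`, `layer_to_layer_schedule`) are illustrative, not the authors' exact implementation.

```python
import torch.nn as nn

def set_trainable_pair(layers: nn.ModuleList, k: int):
    """Freeze every layer except the student (index k) and teacher (index n-1-k)."""
    n = len(layers)
    student, teacher = k, n - 1 - k
    for i, layer in enumerate(layers):
        trainable = i in (student, teacher)
        for p in layer.parameters():
            p.requires_grad = trainable
    return student, teacher

def layer_to_layer_schedule(layers: nn.ModuleList):
    """Yield the pairs (0, n-1), (1, n-2), ... up to the middle of the network
    (0-indexed version of the pairs (1, n-1), (2, n-2), ... in the text)."""
    for k in range(len(layers) // 2):
        yield set_trainable_pair(layers, k)
```

After iterating over all pairs (training the two unfrozen layers at each step), the per-pair checkpoints would be ensembled, e.g. by averaging their softmax outputs, to produce the final accuracy reported above.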
2308.15137
Abdominal Multi-Organ Segmentation Based on Feature Pyramid Network and Spatial Recurrent Neural Network
As recent advances in AI are causing the decline of conventional diagnostic methods, the realization of end-to-end diagnosis is fast approaching. Ultrasound image segmentation is an important step in the diagnostic process. An accurate and robust segmentation model accelerates the process and reduces the burden of sonographers. In contrast to previous research, we take two inherent features of ultrasound images into consideration: (1) different organs and tissues vary in spatial sizes, (2) the anatomical structures inside human body form a relatively constant spatial relationship. Based on those two ideas, we propose a new image segmentation model combining Feature Pyramid Network (FPN) and Spatial Recurrent Neural Network (SRNN). We discuss why we use FPN to extract anatomical structures of different scales and how SRNN is implemented to extract the spatial context features in abdominal ultrasound images.
Yuhan Song, Armagan Elibol, Nak Young Chong
2023-08-29T09:13:24Z
http://arxiv.org/abs/2308.15137v1
# Abdominal Multi-Organ Segmentation Based on Feature Pyramid Network and Spatial Recurrent Neural Network ###### Abstract As recent advances in AI are causing the decline of conventional diagnostic methods, the realization of end-to-end diagnosis is fast approaching. Ultrasound image segmentation is an important step in the diagnostic process. An accurate and robust segmentation model accelerates the process and reduces the burden of sonographers. In contrast to previous research, we take two inherent features of ultrasound images into consideration: (1) different organs and tissues vary in spatial sizes, (2) the anatomical structures inside the human body form a relatively constant spatial relationship. Based on those two ideas, we propose a new image segmentation model combining Feature Pyramid Network (FPN) and Spatial Recurrent Neural Network (SRNN). We discuss why we use FPN to extract anatomical structures of different scales and how SRNN is implemented to extract the spatial context features in abdominal ultrasound images. Medical Imaging and Processing, Diagnostic Ultrasound, Image Segmentation, Feature Pyramid Network. + Footnote †: (c) 2023 the authors. This work has been accepted to IFAC for publication under a Creative Commons Licence CC-BY-NC-ND. Yuhan Song, Armagan Elibol, Nak Young Chong ## 1 Introduction As many countries face the challenges of population aging with healthcare staff shortages, the demand for remote patient monitoring drives the development of AI-assisted diagnosis. In clinical practice, ultrasound imaging is one of the most common imaging modalities due to its effectiveness and its non-invasive, radiation-free nature. Medical ultrasound imaging requires an accurate delineation or segmentation of different anatomical structures for various purposes, such as guiding interventions. However, compared with other modalities, ultrasound is relatively harder to process because of low contrast, acoustic shadows, and speckle, among other artifacts (Almajalid et al. (2018)). It can be challenging even for experienced sonographers to detect the exact contour of tissues and organs, not to mention that it usually takes years of study and practice to train a qualified sonographer. Therefore, an automated and robust ultrasound image segmentation method is expected to help with locating and measuring important clinical information. Along these lines, we are developing a control algorithm for the robot arm to perform automatic ultrasound scans (see Fig. 1). Figure 1: Our robot-assisted ultrasound imaging system. Because this system is expected to operate automatically without human intervention, an evaluation metric for the robot's movement is necessary. To this end, a segmentation algorithm needs to be incorporated into the robot trajectory control system. Traditional ultrasound image segmentation methods focus on the detection of textures and boundaries based on morphological or statistical methods. Mishra et al. (2003) proposed an active contour solution using low-pass filters and morphological operations to predict the cardiac contour. Mignotte et al. (2001) developed a boundary estimation algorithm based on a Bayesian framework, where the estimation was formulated as an optimization problem that maximizes the posterior probability of being a boundary. Previously, Mignotte and Meunier (2001) used statistical external energy in a discrete active contour for the segmentation of short-axis parasternal images, in which a shifted Rayleigh distribution was used to model gray-level statistics. Boukerroui et al.
(2003) also proposed a Bayesian framework to conduct robust and adaptive region segmentation, taking the local class mean with a slow spatial variation into consideration to compensate for the nonuniformity of ultrasound echo signals. These traditional segmentation methods, however, are time-consuming, prone to fail on irregular anatomical structure shapes, and require manual initialization. Compared with morphological and statistical methods, convolutional neural network (CNN) based solutions are powerful and flexible because of their strong nonlinear learning ability. Zhang et al. (2016) conducted coarse and fine lymph node segmentation based on two series-connected fully convolutional networks. Huang et al. (2021) proposed a modified deep residual U-Net model to predict the contour of abdominal organs and tissues. They train their model initially on a tendon dataset, then fine-tune it on a breast tumor dataset. After getting a pre-trained model, they adapt the model to detect different anatomical structures using transfer learning. Lei et al. (2021) proposed a male pelvic multi-organ segmentation method on transrectal ultrasound images. In their research, a fully convolutional one-stage (FCOS) object detector originally designed for generalized object detection is adapted for ultrasound image segmentation. In the context of abdominal ultrasound image segmentation, most of the existing methods are targeted at specific organs or anomalies. Chen et al. (2022) designed a multi-scale and deep-supervised CNN architecture for kidney image segmentation. They implemented a multi-scale input pyramid structure to capture features at different scales, and developed a multi-output supervision module to enable the network to predict segmentation results at multiple scales. Huang et al. (2019) developed a detection algorithm for pulmonary nodules based on deep three-dimensional CNNs and ensemble learning. However, the importance of multi-organ segmentation has largely been ignored. On one hand, the segmentation result can be used as a guide for the remote ultrasound scan system, which is the cornerstone of realizing an automatic remote diagnostic system. An automatic diagnostic system will reduce burdens on sonographers, enabling them to concentrate on the analysis of pathology. On the other hand, abdominal organ segmentation can also provide information on specific organs and tissues, which can be used to assist in the diagnosis of certain diseases. Therefore, in this paper, an abdominal multi-organ segmentation method is proposed. Our contributions can be listed as follows: (a) We propose a multi-organ segmentation method based on an FPN structure; (b) We combine the FPN model with an SRNN module, which improves the performance significantly. ## 2 Related Methods Based on CNNs ### ResNet We use ResNet (He et al. (2016)) as the feature extractor backbone, a deep residual learning framework designed to solve the degradation problem of deep networks. A common issue in deep learning is that deeper neural networks are harder to train: with the layers going deeper, accuracy would drop rapidly. In other words, appending more layers to a suitably deep model will increase the training error. He et al. (2016) addressed the degradation problem by reformulating some of the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions.
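To make the residual reformulation concrete, here is a minimal PyTorch sketch of a residual block with the identity shortcut described next; it is illustrative only, since the actual ResNet-101 backbone uses bottleneck blocks with batch normalization, which we omit here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = relu(F(x) + x): the block learns the residual F(x) rather than y directly."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv2(self.relu(self.conv1(x)))  # F(x)
        return self.relu(out + x)                   # identity shortcut connection
```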
ResNet makes use of feedforward networks with "shortcut connections", which makes the network easier to optimize and able to gain extra accuracy from considerably increased depth. In the aforementioned paper, the authors let the shortcut connections simply perform identity mapping and produce the sum of the output of the original layers and the shortcut connection, as illustrated in Fig. 2, where \(x\) represents the input, \(F(x)\) the abstract representation of the residual block, and relu the activation function. \(x\) is passed directly to the output, which is called an "identity shortcut connection". Inserting the shortcut connection into the plain backbone, they managed to train models with over 1000 layers. Figure 2: Shortcut connection in ResNet. ResNet's capacity for extracting deep features made it possible in our work to combine both semantic and abstract information for analyzing ultrasound images. ### Feature Pyramid Network In the abdominal section, different anatomical structures vary in shape and size, which may cause class imbalance for traditional CNN segmentation algorithms like the U-Net of Ronneberger et al. (2015). Although the total number of instances may be almost equal in training, the relatively large organs and tissues occupy many more pixels in the ultrasound images. As shown in Fig. 3, the violet part is the liver, which occupies most of the pixels, and the green part is the gallbladder. This will make the algorithm classify as many pixels into the liver as possible, because a majority class has a much bigger influence on the final score than the minority class. Figure 3: Example of class imbalance problem. Fig. 4 shows the segmentation result of an ultrasound image containing liver (violet) and kidney (yellow). Compared with the ground truth, the result tends to ignore the kidney and focus on drawing the true mask of the liver. Figure 4: U-Net segmentation example. This example illustrates the necessity of introducing the FPN structure proposed by Lin et al. (2017). FPN takes the feature maps from multiple layers of the encoder rather than only from the deepest output. This pyramid network structure is scale-invariant in the sense that an object's scale changes when shifting its level in the feature pyramid. In other words, smaller objects are usually easier to detect on smaller yet deeper feature maps, and vice versa. Compared with other pyramid network structures, FPN not only utilizes the relation between scale and layer depth, but also uses a top-down pathway to construct higher-resolution layers from a semantic layer. This solves the problem that feature maps composed of low-level structures (closer to the original level) are too naive for accurate object detection. As the reconstructed layers are semantically strong, but the locations of objects are not precise after all the down-sampling and up-sampling, the authors then added lateral connections between reconstructed layers and the corresponding feature maps to help the decoder predict the locations better. The overall structure of FPN is illustrated in Fig. 5. Figure 5: Feature pyramid network structure. ### Spatial Recurrent Neural Network One important property of ultrasound images is that the anatomical structures form a constant spatial relationship under the same scan pattern. For example, in midsagittal plane scanning, the mesenteric artery is usually at the bottom of the liver, and the pancreas is located at the side of the liver. Experienced sonographers rely heavily on such spatial context information to locate the target organs.
This prior knowledge inspired us to take spatial context information into consideration. To extract context information, the spatial recurrent neural network (SRNN) is introduced. Many studies have explored the utilization of RNNs to gather contextual information. Schuster and Paliwal (1997) proposed a bidirectional recurrent neural network (BRNN) that passes both forward and backward across a time map to ensure the information is propagated across the entire timeline. When it comes to spatial context, Graves and Schmidhuber (2008) proposed a multi-dimensional RNN to recognize handwriting. Byeon et al. (2015) built a long short-term memory (LSTM) RNN structure for scene labeling. Bell et al. (2016) proposed an object detection network structure called Inside-Outside Net (ION). In addition to the information near an object's region of interest, the introduction of contextual information improved the performance; to this end, a module of four directional RNNs is implemented. Fig. 6 shows the propagation of the RNNs. The structures are placed laterally across the feature maps, and move independently in four cardinal directions: right, left, down, up. The outputs from the RNNs are then concatenated and computed as a feature map containing both local and global contextual information. Figure 6: Spatial RNN module. ## 3 Multi Organ Segmentation Network ### Network Structure Overview Fig. 7 shows the structure of the proposed model. Figure 7: Proposed network structure. On the left side is the ResNet-101 backbone as the feature extractor. The input image is propagated from bottom to top, with the network generating feature maps of lower resolution and richer semantic information. We define the layers producing feature maps of the same size as one stage. We choose the output of the last layer of each stage to represent the output of the entire stage, except for the shallowest stage, which is skipped because it is computationally expensive and semantically weak. The blue cubes represent the outputs of the stages, called {res2, res3, res4, res5}, respectively. The feature maps go separately through a 1x1 convolution layer and the SRNN module. The green cubes represent the feature maps after the convolution operation, and the red cubes represent the spatial context feature maps. The deep feature map is concatenated with the spatial context feature map; the concatenated feature map then goes through a normalization operation and is compressed to reduce the number of channels. The semantic feature map from the upper layer, spatially coarser but semantically stronger, is upsampled by a scale factor of 2. Then the upsampled feature map from the upper pyramid level and the feature map from the current pyramid level are added together as the new feature map to be concatenated with the spatial feature map. The yellow cubes are the final outputs of the entire feature extractor. After extracting semantic and spatial features, these pyramid feature maps are then sent to region proposal networks (RPN) and region-based detectors (Fast R-CNN). Unlike classic object detectors, the FPN attaches the RPN and Fast R-CNN heads to each of the output layers. The parameters of the heads are shared across all feature pyramid levels for simplicity, but the accuracy is actually very close with or without sharing parameters (Lin et al. (2017)). This is indirect proof that all the levels of the pyramid share similar semantic levels. After that, a DeepMask framework is used to generate masks.
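The top-down merge just described can be sketched as follows in PyTorch; the channel sizes and layer names (`lateral_conv`, `output_conv`) are illustrative assumptions rather than the paper's exact configuration, and the SRNN concatenation step is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNMerge(nn.Module):
    """One FPN step: 1x1 lateral conv + 2x-upsampled top-down map, then 3x3 smoothing."""
    def __init__(self, in_channels: int, out_channels: int = 256):
        super().__init__()
        self.lateral_conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.output_conv = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, c: torch.Tensor, top_down: torch.Tensor) -> torch.Tensor:
        lateral = self.lateral_conv(c)                 # "green cube": 1x1 conv of stage output
        upsampled = F.interpolate(top_down, scale_factor=2, mode="nearest")
        merged = lateral + upsampled                   # element-wise sum of the two pathways
        return self.output_conv(merged)                # final pyramid feature ("yellow cube")
```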
The structures of the proposers and anchor/mask generators are omitted in the figure, since they are not our main interest. ### SRNN Structure The SRNN module follows the idea of the ION network structure. Fig. 8 shows how the RNNs extract the contextual information. Figure 8: Illustration of the IRNN propagation. We first perform a 1x1 convolution to simulate the input-to-hidden data translation in the RNN. Then, four RNNs are propagated through the different directions mentioned above. The outputs from the RNNs are fused into an intermediate feature map. At this step, each pixel contains the context information aiming at its four principal directions: right, left, up, down. Another round of the process is then repeated to further propagate the aggregated spatial context information in each principal direction. Finally, a feature map containing the overall context information is generated. For comparison, in the feature map on the left in Fig. 8, each pixel only contains information about itself and its neighbors (depending on the receptive field). After the first round of RNN propagation, the pixels get the context information from their four directions. Finally, RNNs propagate through the context-rich pixels to extract the full-directional context information. Therefore, the last feature map is globally context-rich. ### IRNN An RNN is specialized for processing sequential data. The data fed into the input nodes is propagated through the hidden nodes, updating the internal states using past and present data. There are variants of RNN such as gated recurrent units (Cho et al. (2014)), LSTM (Hochreiter and Schmidhuber (1997)), and plain tanh RNNs. The RNN in this work follows the model of Le et al. (2015) due to its efficiency and simplicity of training. This RNN structure is called IRNN because the recurrent weight matrix is initialized to the identity matrix, which gives it good performance on long-range data dependencies (Bell et al. (2016)). IRNN is composed of rectified linear units, and since the recurrent weight matrix starts as the identity, gradients are propagated backward with full strength at initialization. We adapt four independent IRNNs that propagate through four different directions. Given below is the update function for the IRNN moving from left to right. The remaining IRNNs follow a similar equation according to the propagation direction: \[h_{i,j}^{right}\leftarrow\max\left(W_{hh}^{right}h_{i,j-1}^{right}+h_{i,j}^{right},\,0\right), \tag{1}\] where \(W\) is the hidden transition matrix and \(h_{i,j}\) is the feature at pixel \((i,j)\). Each direction on independent rows or columns is computed in parallel, and the output from the IRNN is computed by concatenating the hidden states from the four directions at each spatial location. ## 4 Experiments ### Dataset A high-quality dataset is one of the key factors in training a neural network. Unfortunately, there are few open-source abdominal ultrasound image datasets: most of the relevant researchers have not made their datasets public for the protection of patients' privacy. In this work, we use the dataset provided by Vitale et al. (2019), containing both artificial ultrasound images which are translated from CT images, and a few images generated from real ultrasound scans. In the work of Vitale et al.
(2019), they applied generative neural networks trained with a cycle consistency loss, and successfully improved the realism in ultrasound simulation from computed tomography (CT). We use 926 labeled artificial ultrasound images and 61 labeled real ultrasound scans, for which we have annotations of the liver, kidney, gallbladder, spleen, and vessels. Different organs are assigned segmentation masks of different colors. Table 1 shows the names of the anatomical structures and the corresponding instance numbers. We mixed and split the dataset into three subsets: 787 images for training, 100 images for testing, and 100 images for validation. Since there is a huge difference in image quality between CT-generated ultrasound images and real ultrasound images (see Fig. 9), we believe the performance of the proposed model can still be improved if we gain access to high-quality ultrasound image datasets. ### Detectron2 Wu et al. (2019) from the Facebook research team released a powerful object detection tool called _detectron2_ containing many network architectures and training tools. We build the backbone framework based on the implementation of FPN in _detectron2_, and we insert our SRNN structure into the FPN framework as a new feature extractor. The standardized region proposal network (RPN), Fast R-CNN, and Mask R-CNN heads are attached after the feature extractor as the proposal generators. Specifically, the output feature maps come from {res2, res3, res4, res5} of the ResNet layers. The sizes of the anchor generators are set to \(32\times 32,64\times 64,128\times 128\), and \(256\times 256\). For each feature map, FPN gives 1000 proposals. The region of interest (ROI) box head follows the structure of Fast R-CNN with 2 fully convolutional layers and a \(7\times 7\) pooler resolution. The Mask R-CNN head has 4 convolutional layers and a pooler resolution of \(14\times 14\). The ROI head score threshold is set to 0.5 for both box and mask heads. We have made some modifications to the model, enabling it to run under the _detectron2_ framework. For example, we use a small learning rate to ensure there will not be NaN or infinity values in the final result, and we reduce the ROI head batch size from 512 to 128, which is computationally efficient while the accuracy is nearly the same. ### Loss Functions Multiple loss functions are included in our training procedure, some of which are listed here: Objectness loss: For detection of object appearance, the binary cross-entropy loss is used in the RPN head. This loss is only for the classification of object versus background. In the RPN, this loss is computed on the objectness logits feature map and the ground truth objectness logits. If the pixel belongs to some target object, it is marked as "1", otherwise "0". The formulation is: \[L_{obj}=-\frac{1}{n}\sum_{i=1}^{n}\left[y_{i}\log\left(p(y_{i})\right)+\left(1-y_{i}\right)\log\left(1-p(y_{i})\right)\right], \tag{2}\] where \(y\) is the label ("1" for foreground, "0" for background), and \(p(y)\) is the predicted probability of instance existence for all the grid points in the feature map. Classification loss: The classification of the detected regions uses the cross-entropy loss: \[L_{CE}=-\sum_{i=1}^{n}y_{i}\log(p_{i}) \tag{5}\] where \(y_{i}\) is the true label and \(p_{i}\) is the softmax probability for the \(i^{th}\) class.
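As an illustration of the two losses above, here is a small PyTorch sketch; the tensor shapes and function names are assumptions for exposition, not detectron2's internal API.

```python
import torch
import torch.nn.functional as F

def objectness_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Eq. (2): mean binary cross-entropy over foreground(1)/background(0) grid points."""
    return F.binary_cross_entropy_with_logits(logits, labels.float())

def classification_loss(class_logits: torch.Tensor, class_labels: torch.Tensor) -> torch.Tensor:
    """Eq. (5): cross-entropy over the softmax class probabilities."""
    return F.cross_entropy(class_logits, class_labels)
```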
Mask loss: The mask loss is defined as the average binary cross-entropy loss. Equation (6) computes the mask loss for the \(k^{th}\) class: \[L_{mask}=-\frac{1}{m^{2}}\sum_{1\leq i,j\leq m}[y_{i,j}\log\hat{y}_{i,j}^{k}+(1-y_{i,j})\log(1-\hat{y}_{i,j}^{k})] \tag{6}\] where \(y_{i,j}\) is the label of cell \((i,j)\) in the true mask for the region of size \(m\times m\), and \(\hat{y}_{i,j}^{k}\) represents the value of the same cell in the predicted mask. ### Experiment Setup Our experiment builds upon the _detectron2_ framework in a PyTorch environment. We modified the original FPN in _detectron2_ by adding the SRNN module. We train the model on a single GPU (RTX3080). The batch size is set to 1, since this GPU has relatively limited memory and the dataset is again relatively small. The learning rate is set to 0.0025. The model is trained for 300k epochs, taking around 30 hours to converge. We also tried to explore deeper layers in the backbone, adding 2 extra pyramid layers on top of the backbone, but the accuracy failed to increase. Fig. 10 shows the convergence curves of the proposed model, namely the loss curves of (a) total loss, (b) classification loss, (c) bounding box regression loss, and (d) mask loss. Figure 10: Loss curves. ## 5 Results ### Quantitative Result The performance of the trained model is evaluated by the following Dice coefficient: \[D=\frac{2|X\cap Y|}{|X|+|Y|} \tag{7}\] The Dice coefficient (Sorensen (1948)) is one of the most widely used evaluation methods in the research field of image segmentation; we use it as our evaluation metric for ease of comparison with other research. The Dice coefficient is twice the number of elements common to the two sets \(X\) and \(Y\), divided by the sum of the number of elements in each set. In our work, \(X\) and \(Y\) are the predicted classification map and the ground truth. Therefore, the numerator counts the intersection pixels of the predicted mask and the ground truth, and the denominator the sum of mask pixels in both. Considering that we have 5 object classes (background not included), the coefficient is computed separately for each class and then averaged as the final score. As certain classes may not appear in an image, we add a smoothing parameter \(\epsilon\) to avoid zeros in the denominator. This smoothing parameter can be arbitrarily small, and we set it to \(1\times 10^{-6}\); its influence on the evaluation result is negligible as long as it is no larger than this value. The modified equation is given in (8), where \(n\) is the number of classes: \[D=\frac{\sum_{i=1}^{n}\frac{2|X_{i}\cap Y_{i}|}{|X_{i}|+|Y_{i}|+\epsilon}}{n} \tag{8}\] There is little comparable research on abdominal multi-organ segmentation, and no relevant benchmark or competition exists. Therefore, we separately pick comparable results from different studies aiming at single-organ segmentation. The liver segmentation result is compared with the work of Man et al. (2022); Marsousi et al. (2015) proposed a segmentation network targeting the kidney, whose result is also taken into the comparison; and the segmentation performance for the gallbladder and spleen is compared with the work of Lian et al. (2017) and Yuan et al. (2022), respectively. The segmentation performance for vessels is not compared with other works, because most of the relevant research focuses on the segmentation of cardiac arteries. The numerical results may not seem encouraging compared with those specifically targeted studies. On the one hand, the segmentation performance is limited by our lack of high-quality data.
In our research, we have only 172 training instances of spleen samples, not to mention that most of the ultrasound images are pseudo ultrasound images translated from CT images. On the other hand, the specifically targeted studies usually introduce prior knowledge into their segmentation algorithm, such as boundary detection. Meanwhile, we trained a pure FPN model for comparison to demonstrate the improvement brought by the SRNN. Table 2 shows the Dice score of each class, where we can see that the performance improvement from the SRNN is significant: the proposed model outperformed the pure FPN model. ### Qualitative Result We have tested the proposed model on the artificial and real ultrasound images from the evaluation data. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Organ/Tissue & Related Work (Dice) & FPN & FPN+SRNN \\ \hline Liver & 0.821 by Man et al. & 0.907 & 0.924 \\ \hline Kidney & 0.5 by Marsousi et al. & 0.806 & 0.836 \\ \hline Gallbladder & 0.893 by Lian et al. & 0.799 & 0.815 \\ \hline Vessels & – & 0.801 & 0.825 \\ \hline Spleen & 0.93 by Yuan et al. & 0.810 & 0.859 \\ \hline Average & – & 0.840 & 0.865 \\ \hline \end{tabular} \end{table} Table 2: Evaluation results (Dice scores) Fig. 11 shows an example of the semantic segmentation on an ultrasound image: (a) is the original ultrasound image, (b) is the corresponding ground truth, and (c) and (d) are the segmentation results generated by the pure FPN and our proposed model, respectively. We can see that the proposed model outperforms the pure FPN. Furthermore, our proposed model was tested on ultrasound images collected manually from an abdominal phantom in our laboratory. Fig. 12 shows an outstanding performance of our model: (a) is an ultrasound image collected from the phantom, and (b) shows the generated semantic masks. ## 6 Conclusions and Future Work We proposed an FPN-based multi-organ/tissue segmentation method combined with an SRNN. From the experimental results, we can see that the introduction of spatial context information has improved the performance of the original FPN model in both quantitative and qualitative comparisons. The findings of this work would benefit from further research including different scan patterns, since prior knowledge of the ultrasound scan pattern would help add more precise spatial context information.
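To make the evaluation metric of Eq. (8) concrete, a minimal NumPy sketch is given below; the class encoding (integer label maps, with background as class 0) is an assumption for illustration.

```python
import numpy as np

def mean_dice(pred: np.ndarray, truth: np.ndarray, n_classes: int, eps: float = 1e-6) -> float:
    """Per-class Dice with smoothing (Eq. 8), averaged over the object classes."""
    scores = []
    for c in range(1, n_classes + 1):   # class 0 = background, excluded as in the paper
        x, y = pred == c, truth == c
        scores.append(2.0 * np.logical_and(x, y).sum() / (x.sum() + y.sum() + eps))
    return float(np.mean(scores))
```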
2306.05850
Deterministic equivalent of the Conjugate Kernel matrix associated to Artificial Neural Networks
We study the Conjugate Kernel associated to a multi-layer linear-width feed-forward neural network with random weights, biases and data. We show that the empirical spectral distribution of the Conjugate Kernel converges to a deterministic limit. More precisely we obtain a deterministic equivalent for its Stieltjes transform and its resolvent, with quantitative bounds involving both the dimension and the spectral parameter. The limiting equivalent objects are described by iterating free convolution of measures and classical matrix operations involving the parameters of the model.
Clément Chouard
2023-06-09T12:31:59Z
http://arxiv.org/abs/2306.05850v1
# Deterministic equivalent of the conjugate kernel matrix associated to artificial neural networks ###### Abstract. We study the Conjugate Kernel associated to a multi-layer linear-width feed-forward neural network with random weights, biases and data. We show that the empirical spectral distribution of the Conjugate Kernel converges to a deterministic limit. More precisely we obtain a deterministic equivalent for its Stieltjes transform and its resolvent, with quantitative bounds involving both the dimension and the spectral parameter. The limiting equivalent objects are described by iterating free convolution of measures and classical matrix operations involving the parameters of the model. **Acknowledgments:** This work received support from the University Research School EUR-MINT (reference number ANR-18-EURE-0023). ## 1. Introduction Artificial Neural Networks are computing systems inspired by a simplified model of interconnected neurons, storing and exchanging information inside a biological brain. Theorized in the late fifties [11], this class of algorithms only established itself as a cutting-edge technology in the two thousands, thanks to the ever-increasing computing power, and perhaps even more thanks to the accessibility of huge databases on the modern-age internet. Compared to the delicate craftsmanship of artificial intelligence specialists, the mathematical understanding of these networks remained at a very basic level until recently. The classical tools of random matrix theory in particular were helpless regarding the intrinsic non-linear connections between the artificial neurons. This article is concerned with the Conjugate Kernel model, first introduced from the point of view of random matrices in [14]. This model is a random matrix ensemble that corresponds to a multi-layer feed-forward neural network with weights initialized at random, in the asymptotic regime where the network width grows linearly with the sample size. In this setting, the Conjugate Kernel matrix is simply the sample covariance matrix of the output vectors produced by the final layer of the network. It can be shown that the Conjugate Kernel governs the training and generalization properties of the underlying network ([1], [15]). Its spectral properties are thus of great theoretical and practical interest. The limiting spectral distribution of the Conjugate Kernel associated to a single-layer network was first characterized in [14] under the form of a quartic polynomial equation satisfied by the Stieltjes transform of the limiting measure. This was proven rigorously in [1] under more general assumptions, using advanced combinatorics. Some results on the extremal eigenvalues of the model were even obtained in [1]. [10] first noticed a connection between these equations and the measures obtained from a free multiplicative convolution with some Marcenko-Pastur distributions. Let us also mention the work of [11], where a variant of the model including a rank-one additive perturbation is examined, also by means of combinatorial techniques. A first step towards a deeper understanding of the model using analytical techniques was taken in [13] in the case of a single-layer network with deterministic data. [14] later generalized this result to random data and multi-layer networks, and was the first to convincingly explain the appearance of free probability convolutions in the limiting spectral distribution of the Conjugate Kernel matrix.
A similar analysis was done in [15] in the regime where the network width is much larger than the sample size. The aforementioned results may be classified as global laws, in the sense that they establish the existence of a limiting spectral distribution, a problem that is directly linked to the convergence of the Stieltjes transform via the classical inversion formulas. The next natural question that arises in random matrix theory is to look for a deterministic equivalent of the resolvent of the Conjugate Kernel matrix in the sense of [12]. This consists in finding a deterministic matrix that is asymptotically close to the expected resolvent of the matrix, and thus close to the random resolvent itself provided enough concentration in the random matrices. Establishing a deterministic equivalent result allows for instance to approximate linear statistics of the eigenvectors [10], to examine the convergence of the eigenvectors empirical spectral distributions, and possibly to study the outliers of spiked models [11]. Regarding classical sample covariance random ensembles, this task was done a decade ago in the series of articles [13], [1], [14]. Such results were recently partially extended to models with a general dependence structure in [12], [15]. The most important contribution of this article is Theorem 7.2, that provides a deterministic equivalent of the Conjugate Kernel of a multi-layer neural network model. This theorem extends previously known results in various directions. First we include models with non-differentiable activation functions, as well as potential biases inside or outside the activation function. Secondly we give a quantitative estimate for the convergence of the Stieltjes transforms, which translates into a quantitative convergence of the measures in Kolmogorov distance. Finally and most importantly, we obtain local results, taking the form of a deterministic equivalent for the resolvent matrix of the Conjugate Kernel, quantitatively on both the dimension and the spectral parameter. In the rest of the paper, we also establish intermediary results that may be interesting by themselves. In particular we show that the free convolution of measures with a Marcenko-Pastur distribution is regular with respect to the Stieltjes transform of these distributions (Theorem 4.7). We obtain a similar property for the deterministic equivalent matrices that appear in our results (Proposition 4.11). We also show how our framework applies to other models involving entry-wise operations on random matrices in Section 5.5. This paper relies mostly on analytical methods that study spectral functions of random matrices. We make great use of the theory of Stieltjes transforms and resolvent matrices, particularly its recent developments towards the deterministic equivalent of general sample covariance matrices ([12], [15]). We introduce a new notion of asymptotic equivalence for objects that depend both on a dimension and on a complex spectral parameter. Our definition is convenient to work with, whilst keeping enough precision to imply some quantitative results like the convergence of measures in Kolmogorov distance. This analytical toolbox works in conjunction with concentration of measure principles. We follow the framework of [12], in particular we use the notion of Lipschitz concentration that is remarkably compatible with entry-wise operations on random matrices. We also use a linearization argument to study the covariance matrices of functions of weakly correlated Gaussian vectors. 
Using the theory of Hermite polynomials, we provide estimates for this approximation, not only on an entry-wise basis as was done in [10], but also in spectral norm, which is new to our knowledge. After we released the first version of this manuscript, we came across the preprint [16], which we were not aware of. A future comparison of the two different perspectives would certainly be interesting, as [16] studies similar objects and seems to describe phenomena related to those considered here. ### Overview of the article In Section 2, we remind the notations and basic properties of concentrated random vectors and matrices, following the framework of [10]. We also introduce the new notion of asymptotic equivalence that will be key in the rest of the article. In Section 3, we address the problem of approximating the covariance matrices of functions of weakly correlated Gaussian vectors. In Section 4, we remind the general deterministic equivalent results on which this article is based, and we study thoroughly the properties of the matrices appearing in these equivalents. In Section 5, we study a first model of artificial neural network with a single layer and deterministic data. We also explain how our framework applies to other models involving entry-wise operations on random matrices (5.5). In Section 6, we analyze a second model of artificial neural network, still consisting of a single layer but with random data instead. In Section 7, we explain how to study multi-layer networks by induction on the model with one layer. In Appendix 8, we prove an independent result about the convergence in Kolmogorov distance of some empirical spectral measures. ### General notations and definitions The set of matrices with \(d\) rows, \(n\) columns, and entries belonging to a set \(\mathbb{K}\) is denoted as \(\mathbb{K}^{d\times n}\). We use the following norms for vectors and matrices: \(\left\|\cdot\right\|\) the Euclidean norm, \(\left\|\cdot\right\|_{F}\) the Frobenius norm, \(|\!|\!|\cdot|\!|\!|\) the spectral norm, and \(\left\|\cdot\right\|_{\max}\) the entry-wise maximum norm. Given \(M\in\mathbb{C}^{n\times n}\), we denote by \(M^{\top}\) its transpose, and by \(\mathrm{Tr}(M)=\sum_{i=1}^{n}M_{ii}\) its trace. If \(M\) is real and diagonalizable we denote by \(\mathrm{Sp}M\) its spectrum and by \(\mu_{M}=\frac{1}{n}\sum_{\lambda\in\mathrm{Sp}M}\delta_{\lambda}\) its empirical spectral distribution. If \(M\) is symmetric positive semi-definite, then \(\mathrm{Sp}M\subset\mathbb{R}^{+}\), and given a spectral parameter \(z\in\mathbb{C}^{+}=\{\omega\in\mathbb{C}\text{ such that }\Im(\omega)>0\}\), we define its resolvent \(\mathcal{G}_{M}(z)=(M-zI_{n})^{-1}\) and its Stieltjes transform \(g_{M}(z)=(1/n)\mathrm{Tr}\mathcal{G}_{M}(z)\). If \(\mu\) is a real probability distribution, its Stieltjes transform is \(g_{\mu}(z)=\int_{\mathbb{R}}\frac{\mu(dt)}{t-z}\), well defined for \(z\in\mathbb{C}^{+}\). It is easy to see that the Stieltjes transform of a matrix is the same as the Stieltjes transform of its empirical spectral distribution, that is \(g_{\mu_{M}}(z)=g_{M}(z)\). We denote by \(\mathcal{F}_{\mu}\) the cumulative distribution function of the measure \(\mu\), and by \(D(\mu,\nu)=\sup_{t\in\mathbb{R}}|\mathcal{F}_{\mu}(t)-\mathcal{F}_{\nu}(t)|\) the Kolmogorov distance between \(\mu\) and \(\nu\).
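As a numerical illustration of the resolvent and Stieltjes transform notation just introduced, the following NumPy sketch (the sample covariance matrix is an arbitrary example, not a model from this paper) checks the identity \(g_{\mu_{M}}(z)=g_{M}(z)\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
Y = rng.standard_normal((n, n))
M = Y.T @ Y / n                       # symmetric positive semi-definite matrix
z = 1.0 + 1.0j                        # spectral parameter in C^+

# Resolvent G_M(z) = (M - z I)^{-1} and Stieltjes transform g_M(z) = (1/n) Tr G_M(z)
G = np.linalg.inv(M - z * np.eye(n))
g_from_resolvent = np.trace(G) / n

# Same quantity through the empirical spectral distribution: (1/n) sum_i 1/(lambda_i - z)
eigs = np.linalg.eigvalsh(M)
g_from_spectrum = np.mean(1.0 / (eigs - z))

assert np.isclose(g_from_resolvent, g_from_spectrum)
```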
Let us also introduce two operations on measures: if \(\gamma\in\mathbb{R}\), we denote by \(\gamma\cdot\mu+(1-\gamma)\cdot\nu\) the new measure such that \((\gamma\cdot\mu+(1-\gamma)\cdot\nu)(B)=\gamma\mu(B)+(1-\gamma)\nu(B)\) for any measurable set \(B\) (it may be a signed measure). If \(a,b\in\mathbb{R}\), we denote by \(a+b\mu\) the distribution of the random variable \(a+bX\), where \(X\) is a random variable distributed according to \(\mu\). We denote by \(\mathcal{N}\) a standard (centered and reduced) Gaussian random variable, or the law of this random variable depending on the context. We denote by \(\delta_{a}\) the Dirac delta distribution centered at \(a\in\mathbb{R}\), and by \(\mathrm{MP}(\gamma)\) the Marcenko-Pastur distribution with shape parameter \(\gamma>0\). The operator \(\boxtimes\) denotes the multiplicative free convolution between measures ([4]). The distribution \(\operatorname{MP}(\gamma)\boxtimes\mu\) may be also defined by its Stieltjes transform \(g(z)\), which is the unique solution of the self-consistent equation [10]: \[g(z)=\int_{\mathbb{R}}\frac{1}{(1-\gamma-\gamma zg(z))t-z}\mu(dt).\] For a better readability, we will sometimes omit to mention indices \(n\) and spectral parameters \(z\) in our notations, especially in the course of technical proofs, even if we are dealing implicitly with sequences and complex functions. ## 2. Technical tools ### Concentration framework **Definition 2.1**.: Let \(X_{n}\) be a sequence of random vectors in finite dimensional normed vector spaces \((E_{n},\left\|\cdot\right\|)\), and let \(\sigma_{n}>0\) be a sequence. 1. We say that \(X\) is Lipschitz concentrated if there is a constant \(C>0\) such that for any sequence of \(1\)-Lipschitz maps \(f_{n}:E_{n}\to\mathbb{C}\), for any \(n\in\mathbb{N}\) and \(t\geq 0\): \[\mathbb{P}(|f_{n}(X_{n})-\mathbb{E}[f_{n}(X_{n})]|\geq t)\leq Ce^{-\frac{1}{C }\left(\frac{t}{\sigma_{n}}\right)^{2}}.\] We note this Lipschitz concentration property \(X\propto_{\left\|\cdot\right\|}\mathcal{E}(\sigma_{n})\), or simply \(X\propto\mathcal{E}(\sigma_{n})\) when there is no ambiguity on the chosen norm. 2. If \(Z_{n}\in E_{n}\) is a sequence of deterministic vectors, we say that \(X\) is linearly concentrated around \(Z\) if there is a constant \(C>0\) such that for any sequence of \(1\)-Lipschitz linear maps \(f_{n}:E_{n}\to\mathbb{C}\), for any \(n\in\mathbb{N}\) and \(t\geq 0\): \[\mathbb{P}(|f_{n}(X_{n})-f_{n}(Z_{n})|\geq t)\leq Ce^{-\frac{1}{C}\left(\frac{ t}{\sigma_{n}}\right)^{2}}.\] We note this linear concentration property \(X\in_{\left\|\cdot\right\|}Z\pm\mathcal{E}(\sigma_{n})\), or simply \(X\in Z\pm\mathcal{E}(\sigma_{n})\). We refer to [10] for a comprehensive study of these notions of concentration. Let us enumerate the main properties that will be used throughout this article: **Proposition 2.2**.: 1. _A random vector (respectively a matrix) with i.i.d._ \(\mathcal{N}\) _entries is_ \(\propto\mathcal{E}(1)\) _concentrated with respect to the Euclidean norm (respectively the Frobenius norm). Other examples are listed in_ _[_10_, Theorem 1]__._ 2. _A Lipschitz transformation of a Lipschitz concentrated vector still verifies a Lipschitz concentration property_ _[_10_, Proposition 1]__._ 3. _If_ \(X\propto\mathcal{E}(\sigma_{n})\)_,_ \(Y\propto\mathcal{E}(\sigma_{n}^{\prime})\)_, and_ \(X\) _and_ \(Y\) _are independent, then_ \((X,Y)\propto\mathcal{E}(\sigma_{n}+\sigma_{n}^{\prime})\) _(_[_10_, Proposition 7]__)._ 4. 
_Lipschitz concentration implies linear concentration: if_ \(X\propto\mathcal{E}(\sigma_{n})\)_, then_ \(X\in\mathbb{E}[X]\pm\mathcal{E}(\sigma_{n})\)_(_[_10_, Proposition 4]__). Both definitions are equivalent in one dimension._ 5. _If_ \(X\in Z\pm\mathcal{E}(\sigma_{n})\) _and_ \(\tilde{Z}\) _is another deterministic vector, we have the equivalence:_ \(X\in\tilde{Z}\pm\mathcal{E}(\sigma_{n})\iff\left\|Z-\tilde{Z}\right\|\leq O( \sigma_{n})\) _(_[_10_, Lemma 3]__)._ 6. _If_ \(X\in Z\pm\mathcal{E}(\sigma_{n})\) _and_ \(X^{\prime}\) _is another random vector such that_ \(\left\|X^{\prime}\right\|\leq O(\sigma_{n})\) _a.s., then_ \(X+X^{\prime}\in Z\pm\mathcal{E}(\sigma_{n})\)_._ 7. _Since_ \(\left\|\cdot\right\|\leq\left\|\cdot\right\|_{F}\) _on matrices, the concentration with respect to the Frobenius norm is a stronger property than with the spectral norm._ 8. _The map_ \(Y\mapsto\left(Y^{\top}Y/n-zI_{p}\right)^{-1}\) _is_ \(\frac{2^{3/2}|z|^{1/2}}{\Im(z)^{2}\sqrt{n}}\)_-Lipschitz with respect to the Frobenius norm (__[_11_, Proposition 4.3]__). In particular, if_ \(Y\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\)_, then_ \(\mathcal{G}_{Y^{\top}Y/n}(z)\propto_{\left\|\cdot\right\|_{F}}\mathcal{E} \Bigg{(}\frac{|z|^{1/2}}{\Im(z)^{2}\sqrt{n}}\Bigg{)}\)_._ _._ 9. _The map_ \(M\mapsto\frac{1}{n}\mathrm{Tr}(M)\) _is_ \(1/\sqrt{n}\)_-Lipschitz in Frobenius norm. In particular, if_ \(\mathcal{G}_{K}(z)\propto_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right| \kern-1.075pt\right|_{F}}\mathcal{E}(\sigma_{n})\) _for some random matrix_ \(K\in\mathbb{R}^{n\times n}\)_, then_ \(g_{K}(z)\propto\mathcal{E}(\sigma_{n}/\sqrt{n})\)_._ 10. _If a random variable_ \(X\) _is_ \(\propto\mathcal{E}(\sigma_{n})\) _concentrated, then_ \(|X-\mathbb{E}[X]|\leq O(\sqrt{\log n}\,\sigma_{n})\) _a.s. (_[_10_, Proposition 3.3]__)._ We will also need a result for a product of matrices, and introduce for this reason a notion of conditional concentration. If \(\mathcal{B}\) is a measurable subset of the universe \(\Omega\) with \(\mathbb{P}(\mathcal{B})>0\), the random matrix \(X\) conditioned with the event \(\mathcal{B}\), denoted as \((X|\mathcal{B})\), designates the measurable mapping \((\mathcal{B},\mathcal{F}_{\mathcal{B}},\mathbb{P}/\mathbb{P}(\mathcal{B})) \mapsto E_{n}\) satisfying: \(\forall\omega\in\mathcal{B}\), \((X|\mathcal{B})(\omega)=X(\omega)\). **Proposition 2.3** (Proposition 9 in [10]).: _If \(X\) and \(Y\) are sequences of random matrices with dimensions bounded by \(O(n)\), such that \((X,Y)\propto_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt \right|_{F}}\mathcal{E}(1)\), \(\mathbb{E}[\left|\kern-1.075pt\left|\kern-1.075pt\left|\chi\right|\right| \kern-1.075pt\right|]\leq O(\sqrt{n})\) and \(\mathbb{E}[\left|\kern-1.075pt\left|\kern-1.075pt\left|\kern-1.075pt\left|Y \right|\kern-1.075pt\right|]\right|\leq O(\sqrt{n})\), then there exists a constant \(c>0\) such that:_ 1. \(\mathbb{E}[\left|\kern-1.075pt\left|\kern-1.075pt\left|\kern-1.075pt\left|X \right|\kern-1.075pt\right|]\right|\leq c\sqrt{n}\) _and_ \(\mathbb{E}[\left|\kern-1.075pt\left|\kern-1.075pt\left|Y\right|\kern-1.075pt \right|]\right|\leq c\sqrt{n}\)_._ 2. _With_ \(\mathcal{B}=\{\left|\kern-1.075pt\left|\kern-1.075pt\left|X\right|\kern-1.075pt \right|\kern-1.075pt\right|\kern-1.075pt\right|\kern-1.075pt\}\) _and_ \(\left|\kern-1.075pt\left|\kern-1.075pt\left|Y\right|\kern-1.075pt\right| \kern-1.075pt\right|\kern-1.075pt\}\)__\(\leq 2c\sqrt{n}\)_),_ \(\mathbb{P}(\mathcal{B}^{c})\leq ce^{-n/c}\)_._ 3. 
\((XY|\mathcal{B})\propto_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot \right|\kern-1.075pt\right|_{F}}\mathcal{E}(\sqrt{n})\)_._ ### Polynomial bounds in \(z\) and notation \(O_{z}(\epsilon_{n})\) Throughout this article we will deal with quantitative estimates involving both a dimension parameter \(n\in\mathbb{N}\) and a spectral parameter \(z\in\mathbb{C}^{+}\). As it turns out, for many applications it is not useful to track the exact dependence in \(z\), which motivates the following definition: **Definition 2.4**.: Let \(\zeta:\mathbb{N}\times\mathbb{C}^{+}\to\mathbb{R}^{+}\) be a function and \(\epsilon_{n}>0\) a sequence. We say that \(\zeta\) is bounded by \(\epsilon_{n}\) in \(n\) and polynomially in \(z\), and we note \(\zeta(n,z)\leq O_{z}(\epsilon_{n})\), if there exists \(\alpha\geq 0\) such that uniformly in \(z\in\mathbb{C}^{+}\) with bounded \(\Im(z)\): \[\zeta(n,z)\leq O\bigg{(}\epsilon_{n}\frac{|z|^{\alpha}}{\Im(z)^{2\alpha}} \bigg{)}.\] We say that a family of functions is uniformly \(O_{z}(\epsilon_{n})\) bounded if the above bound holds uniformly for any function of the family. _Remark 2.5_.: If \(\Im(z)\) is bounded, then \(\frac{|z|}{\Im(z)}\geq 1\) and \(\frac{1}{\Im(z)}\) is bounded away from \(0\). As a consequence, if \(\alpha\geq\alpha^{\prime}\geq 0\), then \(\frac{|z|^{\alpha}}{\Im(z)^{2\alpha}}\leq O\bigg{(}\frac{|z|^{\alpha^{\prime}} }{\Im(z)^{2\alpha^{\prime}}}\bigg{)}\). The classical rules of calculus thus apply to our notation: \(O_{z}(\epsilon_{n})+O_{z}(\epsilon_{n}^{\prime})=O_{z}(\epsilon_{n}+\epsilon_{n} ^{\prime})\) and \(O_{z}(\epsilon_{n})O_{z}(\epsilon_{n}^{\prime})=O_{z}(\epsilon_{n}\epsilon_{n} ^{\prime})\). The next technical lemma will be key to translate results like the deterministic equivalent Theorem 4.3 into simplified versions using our \(O_{z}(\epsilon_{n})\) notations. **Lemma 2.6**.: _If for some constants \(\alpha_{0},\alpha_{1},\ldots,\alpha_{6}>0\), the function \(\zeta\) satisfies an a priori inequality \(|\zeta(n,z)|\leq O\bigg{(}\frac{|z|^{\alpha_{1}}}{\epsilon_{n}^{\alpha_{0}} \Im(z)^{\alpha_{1}+\alpha_{2}}}\bigg{)}\), and if the bound:_ \[\zeta(n,z)\leq O\bigg{(}\epsilon_{n}\frac{|z|^{\alpha_{3}}}{\Im(z)^{\alpha_{3} +\alpha_{4}}}\bigg{)}\] _holds uniformly for \(z\in\mathbb{C}^{+}\) with bounded \(\Im(z)\) and satisfying \(\epsilon_{n}\frac{|z|^{\alpha_{5}}}{\Im(z)^{\alpha_{5}+\alpha_{6}}}\leq c\) for some constant \(c>0\), then for some exponent \(\alpha>0\), the bound:_ \[\zeta(n,z)\leq O\bigg{(}\epsilon_{n}\frac{|z|^{\alpha}}{\Im(z)^{2\alpha}} \bigg{)}\] _holds uniformly in \(z\in\mathbb{C}^{+}\) with bounded \(\Im(z)\), that is \(\zeta\leq O_{z}(\epsilon_{n})\)._ Proof.: Let us choose \(\alpha=\max\left(\alpha_{5}(\alpha_{0}+1)+\alpha_{1},\alpha_{6}(\alpha_{0}+1)+ \alpha_{2},\alpha_{3},\alpha_{4}\right)\). 
If \(\epsilon_{n}\frac{|z|^{\alpha_{5}}}{\Im(z)^{\alpha_{5}+\alpha_{6}}}\leq c\), we use the main inequality on \(\zeta\): \[\zeta(n,z)\leq O\bigg{(}\epsilon_{n}\frac{|z|^{\alpha_{3}}}{\Im(z)^{\alpha_{ 3}+\alpha_{4}}}\bigg{)}\leq O\bigg{(}\epsilon_{n}\frac{|z|^{\alpha}}{\Im(z)^{2 \alpha}}\bigg{)}.\] In the case where \(\epsilon_{n}\frac{|z|^{\alpha_{5}}}{\Im(z)^{\alpha_{5}+\alpha_{6}}}\geq c\), we use the _a priori_ inequality on \(\zeta\): \[|\zeta(n,z)| \leq O\bigg{(}\frac{|z|^{\alpha_{1}}}{\epsilon_{n}^{\alpha_{0}} \Im(z)^{\alpha_{1}+\alpha_{2}}}\bigg{)}\] \[\leq O\bigg{(}\epsilon_{n}\bigg{(}\frac{|z|^{\alpha_{5}}}{\Im(z) ^{\alpha_{5}+\alpha_{6}}}\bigg{)}^{\alpha_{0}+1}\frac{|z|^{\alpha_{1}}}{\Im(z )^{\alpha_{1}+\alpha_{2}}}\bigg{)}\] \[\leq O\bigg{(}\epsilon_{n}\frac{|z|^{\alpha}}{\Im(z)^{2\alpha}} \bigg{)}.\] _Remark 2.7_.: As we will see later in this article, the lemma is particularly suited to simplify some quantitative statements that require the spectral parameter \(z\) to be away from the real axis. Indeed the Stieltjes tranforms and resolvent matrices satisfy the classical bounds \(|g(z)|\leq 1/\Im(z)\) and \(\left\|\mathcal{G}(z)\right\|_{F}\leq\sqrt{n}\|\mathcal{G}(z)\|\|\leq\sqrt{n} /\Im(z)\), which translate into _a priori_ inequalities for any difference of such objects. Let us wrap this subsection by giving a general setting on which \(O_{z}(\epsilon_{n})\) bounds between Stieltjes transforms imply polynomial bounds in Kolmogorov distance. We remind our reader that for measures \(\nu\) and \(\mu\) with cumulative distribution functions \(\mathcal{F}_{\nu}\) and \(\mathcal{F}_{\mu}\), the Kolmogorov distance is defined as \(D(\mu,\nu)=\sup\limits_{t\in\mathbb{R}}|F_{\nu}(t)-\mathcal{F}_{\mu}(t)|\). The following result is an immediate consequence of [10, Theorem 3.6]. **Proposition 2.8**.: _Let \(\mu_{n}\) and \(\nu_{n}\) be sequences of probability measures on \(\mathbb{R}^{+}\) such that:_ 1. \(\int_{\mathbb{R}}|\mathcal{F}_{\mu_{n}}(t)-\mathcal{F}_{\nu_{n}}(t)|dt<\infty\)_._ 2. \(\mathcal{F}_{\nu_{n}}\) _are uniformly Holder continuous with exponent_ \(\beta\in(0,1]\)_._ 3. \(\mu_{n}\) _and_ \(\nu_{n}\) _have uniformly bounded second moments._ 4. \(|g_{\mu_{n}}(z)-g_{\nu_{n}}(z)|\leq O_{z}(\epsilon_{n})\) _for some sequence_ \(\epsilon_{n}\to 0\)_._ _Then there exists \(\theta>0\) such that \(D(\mu_{n},\nu_{n})\leq O(\epsilon_{n}^{\theta})\)._ _Remark 2.9_.: In some cases, the uniform Holder continuity hypothesis may be adapted using a simple change of measures. For instance, given shape parameters \(\gamma_{n}>1\) and measures \(\tau_{n}\) supported on the same compact of \((0,\infty)\), the cumulative distribution function of \(\nu_{n}=\operatorname{MP}(\gamma_{n})\boxtimes\tau_{n}\) are not even continuous at \(0\). However we can consider the new measures \(\check{\nu}_{n}=(1-\gamma_{n})\cdot\delta_{0}+\gamma_{n}\cdot\nu_{n}\), and use the fact that \(\mathcal{F}_{\check{\nu}_{n}}\) are uniformly \(1/2\)-Holder continuous to deduce the same result (see [10, Section 8.3]). A variant of this result for empirical spectral distributions may also be adapted, see Proposition 8.1 in the Appendix. ## 3. Covariance matrices of functions of Gaussian vectors In this section we consider the random vector \(y=f(u)\in\mathbb{R}^{n}\), obtained by applying a real function on each of the coordinates of a centered Gaussian vector \(u\in\mathbb{R}^{n}\). We are particularly interested in the case where the entries of \(u\) are weakly correlated. 
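Before stating the approximations, here is a quick Monte Carlo sanity check, under stated assumptions, of the mechanism behind them (the Hermite expansion following from Lemma 3.2 below): for a jointly Gaussian standard pair \((z_{1},z_{2})\) with covariance \(\rho\), one has \(\mathbb{E}[f(z_{1})f(z_{2})]=\sum_{r\geq 0}\zeta_{r}(f)^{2}\rho^{r}\). The choice \(f=\tanh\) and the truncation order are arbitrary illustrations.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

f = np.tanh          # any function in H
rho = 0.3            # covariance of the standard Gaussian pair (z1, z2)

# Hermite coefficients zeta_r(f) = E[f(N) h_r(N)] / sqrt(r!), computed by
# Gauss-Hermite quadrature with weight exp(-t^2/2), normalized by sqrt(2*pi).
nodes, weights = hermegauss(60)
weights = weights / np.sqrt(2.0 * np.pi)

def zeta(r: int) -> float:
    h_r = hermeval(nodes, [0.0] * r + [1.0])   # probabilists' Hermite polynomial h_r
    return float(np.sum(weights * f(nodes) * h_r)) / math.sqrt(math.factorial(r))

series = sum(zeta(r) ** 2 * rho ** r for r in range(12))   # truncated expansion

# Monte Carlo estimate of E[f(z1) f(z2)] for the correlated Gaussian pair.
rng = np.random.default_rng(1)
z1 = rng.standard_normal(10**6)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(10**6)
print(series, np.mean(f(z1) * f(z2)))   # the two values should match closely
```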
In this instance, we will give approximations of the covariance matrix \(\Sigma=\mathbb{E}\big[yy^{\top}\big]\), with a control of the error either entry-wise, or in spectral norm, or for their respective Stieltjes transforms. We first present the tools making these approximations possible. ### Hermite polynomials Let us remind the notation \(\mathcal{N}\) for a standard Gaussian random variable. We denote by \(\mathcal{H}\) the Hilbert space of real Borel functions such that \(\mathbb{E}\big[f(\mathcal{N})^{2}\big]<\infty\), endowed with the Gaussian inner product: \[<f,g>=\mathbb{E}[f(\mathcal{N})g(\mathcal{N})]=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(t)g(t)e^{-t^{2}/2}dt.\] Remark that all Lipschitz continuous functions automatically belong to \(\mathcal{H}\). The \(r\)-th non-normalized Hermite polynomial is by definition: \[h_{r}(t)=(-1)^{r}e^{t^{2}/2}\frac{\mathrm{d}^{r}}{\mathrm{d}t^{r}}\Big(e^{-t^{2}/2}\Big).\] The first Hermite polynomials are: \(h_{0}=1\), \(h_{1}=\mathbf{X}\), \(h_{2}=\mathbf{X}^{2}-1\), and \(h_{3}=\mathbf{X}^{3}-3\mathbf{X}\). The \(r\)-th normalized Hermite polynomial is \(\hat{h}_{r}=\frac{h_{r}}{\sqrt{r!}}\), and the \(r\)-th Hermite coefficient of a function \(f\in\mathcal{H}\) is \(\zeta_{r}(f)=<f,\hat{h}_{r}>\). Hereafter we remind some classical properties of Hermite polynomials without proofs. For more details, we invite our reader to consult [10, Chapter 4]. **Proposition 3.1**.: 1. \(h_{r}\) are monic polynomials of degree \(r\), and the \(\hat{h}_{r}\) form a complete orthonormal basis of \(\mathcal{H}\). 2. Every function of \(\mathcal{H}\) can be expanded as the converging sum in \(\mathcal{H}\): \(f=\sum_{r\geq 0}\zeta_{r}(f)\hat{h}_{r}\). In particular \(\|f\|_{\mathcal{H}}^{2}=\mathbb{E}\big[f(\mathcal{N})^{2}\big]=\sum_{r\geq 0}\zeta_{r}(f)^{2}\). 3. The Hermite polynomials satisfy the relations: \[h_{r}^{\prime}(t)=rh_{r-1}(t),\qquad h_{r+1}(t)=th_{r}(t)-h_{r}^{\prime}(t).\] We move on to more specific properties that shall be used in the course of this section. **Lemma 3.2** (Lemma D.2 in [11]).: _If \(z_{1}\) and \(z_{2}\) are standard Gaussian random variables, such that \((z_{1},z_{2})\) forms a Gaussian vector, then:_ \[\mathbb{E}\Big[\hat{h}_{r}(z_{1})\hat{h}_{s}(z_{2})\Big]=\mathbf{1}_{r=s}\operatorname{Cov}[z_{1},z_{2}]^{r}.\] **Proposition 3.3**.: _Let us fix \(f\in\mathcal{H}\), and for \(r\in\mathbb{N}\) let \(\Psi_{r}(\sigma)=\sigma^{-r}\mathbb{E}[f(\sigma\mathcal{N})h_{r}(\mathcal{N})]\). Then \(\Psi_{r}\) is \(\mathcal{C}^{\infty}\) on \((0,\infty)\) and \(\Psi_{r}^{\prime}(\sigma)=\sigma\Psi_{r+2}(\sigma)\)._ Proof.: From the identity \(\mathbb{E}[f(\sigma\mathcal{N})h_{r}(\mathcal{N})]=\frac{1}{\sigma\sqrt{2\pi}}\int_{\mathbb{R}}f(t)h_{r}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}dt\) and the Leibniz integral rule, we deduce that \(\Psi_{r}\) is \(\mathcal{C}^{\infty}\) and that: \[\Psi_{r}^{\prime}(\sigma)=\frac{d}{d\sigma}\bigg(\frac{1}{\sigma^{r+1}\sqrt{2\pi}}\int_{\mathbb{R}}f(t)h_{r}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}dt\bigg)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(t)\frac{d}{d\sigma}\bigg(\frac{1}{\sigma^{r+1}}h_{r}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}\bigg)dt.\] The derivative inside the integral can be computed in a few steps. Firstly, from the definition, \(\frac{d}{dt}\Big(h_{r}(t)e^{-t^{2}/2}\Big)=(-1)^{r}\frac{d^{r+1}}{dt^{r+1}}\Big(e^{-t^{2}/2}\Big)=-h_{r+1}(t)e^{-t^{2}/2}\).
Then applying the identity \(th_{r+1}(t)=h_{r+2}(t)+(r+1)h_{r}(t)\), which can be deduced from Proposition 3.1, leads to: \[\frac{d}{d\sigma}\bigg(h_{r}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}\bigg)=\frac{t}{\sigma^{2}}h_{r+1}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}=\frac{1}{\sigma}\big(h_{r+2}(t/\sigma)+(r+1)h_{r}(t/\sigma)\big)e^{-\frac{t^{2}}{2\sigma^{2}}},\] \[\frac{d}{d\sigma}\bigg(\frac{1}{\sigma^{r+1}}h_{r}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}\bigg)=\frac{1}{\sigma^{r+2}}\big(h_{r+2}(t/\sigma)+(r+1)h_{r}(t/\sigma)\big)e^{-\frac{t^{2}}{2\sigma^{2}}}-\frac{r+1}{\sigma^{r+2}}h_{r}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}=\frac{1}{\sigma^{r+2}}h_{r+2}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}.\] We obtain finally: \[\Psi_{r}^{\prime}(\sigma)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(t)\frac{1}{\sigma^{r+2}}h_{r+2}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}dt=\sigma\cdot\bigg(\frac{1}{\sigma^{r+3}\sqrt{2\pi}}\int_{\mathbb{R}}f(t)h_{r+2}(t/\sigma)e^{-\frac{t^{2}}{2\sigma^{2}}}dt\bigg)=\sigma\cdot\Psi_{r+2}(\sigma).\] **Corollary 3.4**.: _Let \(f\in\mathcal{H}\), and for \(\sigma>0\) let \(f_{\sigma}:t\mapsto f(\sigma t)\). Then for any integer \(r\geq 0\):_ \[\zeta_{r}(f_{\sigma})=\sigma^{r}(\zeta_{r}(f)+O(\sigma-1))=\sigma^{r}\Big(\zeta_{r}(f)+\sqrt{(r+1)(r+2)}(\sigma-1)\zeta_{r+2}(f)+O((\sigma-1)^{2})\Big).\] Proof.: These expressions are straightforward consequences of first and second order Taylor expansions of \(\Psi_{r}\). For instance for the second formula: \[\zeta_{r}(f_{\sigma})=\frac{\sigma^{r}}{\sqrt{r!}}\Psi_{r}(\sigma)=\frac{\sigma^{r}}{\sqrt{r!}}\big(\Psi_{r}(1)+(\sigma-1)\Psi_{r+2}(1)+O((\sigma-1)^{2})\big)=\sigma^{r}\Big(\zeta_{r}(f)+\sqrt{(r+1)(r+2)}(\sigma-1)\zeta_{r+2}(f)+O((\sigma-1)^{2})\Big).\] ### Iterated Hadamard products In what follows, \(M^{\circ r}\) denotes the Hadamard product of \(r\) copies of a matrix \(M\in\mathbb{R}^{n\times n}\). We denote by \(\operatorname{diag}(M)\) and \(\operatorname{off}(M)\) respectively the diagonal and off-diagonal sub-matrices of \(M\), and \(\vec{\operatorname{diag}}(M)\in\mathbb{R}^{n}\) denotes the diagonal elements of \(M\) reshaped as a vector in \(\mathbb{R}^{n}\). **Lemma 3.5**.: _If \(\Delta\in\mathbb{R}^{n\times n}\) is a symmetric matrix such that \(I_{n}+\Delta\) is positive semi-definite, then for any integer \(r\geq 1\):_ \[\left\|\Delta^{\circ(r+1)}\right\|\leq\big(1+\left\|\Delta\right\|_{\max}\big)\left\|\Delta^{\circ r}\right\|+\left\|\Delta\right\|_{\max}^{r},\] \[\left\|\Delta^{\circ 2r}\right\|\leq\left\|\Delta\right\|_{\max}^{2r-2}\left\|\Delta\circ\Delta\right\|.\] Proof.: For any matrices \(A\) and \(B\in\mathbb{R}^{n\times n}\) with \(A\) positive semi-definite, the inequality \(\left\|A\circ B\right\|\leq\big(\max_{1\leq i\leq n}A_{ii}\big)\left\|B\right\|\) holds true ([10, Proposition 3.7.9.]).
As a consequence: \[\left\|\Delta^{\circ(r+1)}\right\|\leq\left\|(I_{n}+\Delta)\circ\Delta^{\circ r}\right\|+\left\|I_{n}\circ\Delta^{\circ r}\right\|\leq\big(1+\left\|\Delta\right\|_{\max}\big)\left\|\Delta^{\circ r}\right\|+\left\|\operatorname{diag}(\Delta)^{r}\right\|\leq\big(1+\left\|\Delta\right\|_{\max}\big)\left\|\Delta^{\circ r}\right\|+\left\|\Delta\right\|_{\max}^{r}.\] For the second inequality, since \(\Delta^{\circ 2r}\) is symmetric with non-negative entries, there is \(\alpha\in\left(\mathbb{R}^{+}\right)^{n}\) such that \(\left\|\alpha\right\|=1\) and \(\left\|\Delta^{\circ 2r}\right\|=\left\|\Delta^{\circ 2r}\alpha\right\|\). We deduce that: \[\left\|\Delta^{\circ 2r}\right\|=\left(\sum_{i=1}^{n}\left(\sum_{j=1}^{n}\Delta_{ij}^{2r}\alpha_{j}\right)^{2}\right)^{1/2}\leq\left\|\Delta\right\|_{\max}^{2r-2}\left(\sum_{i=1}^{n}\left(\sum_{j=1}^{n}\Delta_{ij}^{2}\alpha_{j}\right)^{2}\right)^{1/2}\leq\left\|\Delta\right\|_{\max}^{2r-2}\left\|\Delta\circ\Delta\right\|.\] **Proposition 3.6**.: _If \(\Delta\in\mathbb{R}^{n\times n}\) is a sequence of symmetric matrices such that \(I_{n}+\Delta\) is positive semi-definite, \(\left\|\Delta\right\|_{\max}\) converges to \(0\) and \(\left\|\Delta\right\|\) is bounded, then for any \(r\geq 1\) the quantities \(\left\|\Delta^{\circ 2r}\right\|\), \(\left\|\Delta^{\circ(2r+1)}\right\|\), \(\left\|\operatorname{off}(\Delta^{\circ 2r})\right\|\) and \(\left\|\operatorname{off}(\Delta^{\circ(2r+1)})\right\|\) are all bounded by \(O\big(\left\|\Delta\right\|_{\max}^{2r-2}\big)\)._ Proof.: According to the preceding lemma, \(\left\|\Delta\circ\Delta\right\|\leq\big(1+\left\|\Delta\right\|_{\max}\big)\left\|\Delta\right\|+\left\|\Delta\right\|_{\max}\leq O(1)\). This implies that \(\left\|\Delta^{\circ 2r}\right\|\leq\left\|\Delta\right\|_{\max}^{2r-2}\left\|\Delta\circ\Delta\right\|\leq O\big(\left\|\Delta\right\|_{\max}^{2r-2}\big)\). The main results follow after remarking that \(\left\|\Delta^{\circ(2r+1)}\right\|\leq\big(1+\left\|\Delta\right\|_{\max}\big)\left\|\Delta^{\circ 2r}\right\|+\left\|\Delta\right\|_{\max}^{2r}\leq O\big(\left\|\Delta\right\|_{\max}^{2r-2}\big)\). The remaining results follow after remarking that \(\left\|\operatorname{diag}(\Delta^{\circ r})\right\|\leq O\big(\left\|\Delta\right\|_{\max}^{r}\big)\), hence \(\left\|\operatorname{off}(\Delta^{\circ r})\right\|\leq\left\|\Delta^{\circ r}\right\|+O\big(\left\|\Delta\right\|_{\max}^{r}\big)\). ### General expansion of the covariance matrix \(\Sigma\) In the following paragraphs we are considering a function \(f\in\mathcal{H}\), and \(u\in\mathbb{R}^{n}\) a centered Gaussian vector with covariance matrix \(S\in\mathbb{R}^{n\times n}\). The random vector \(y=f(u)\) is obtained by applying the function \(f\) entry-wise on \(u\), and we let \(\Sigma=\mathbb{E}\big[yy^{\top}\big]\). For \(1\leq i\leq n\), we denote by \(f_{i}\) the function \(f_{i}:t\mapsto f\big(S_{ii}^{1/2}\,t\big)\), and by \(D_{r}\in\mathbb{R}^{n\times n}\) the diagonal matrix with entries \(S_{ii}^{-r/2}\zeta_{r}(f_{i})\). **Proposition 3.7**.: _The convergence of the following sum holds entry-wise:_ \[\Sigma=\sum_{r\geq 0}D_{r}\,S^{\circ r}D_{r}.\] Proof.: The random variables \(\tilde{u}_{i}=S_{ii}^{-1/2}u_{i}\) are standard Gaussian random variables, and they form a Gaussian vector.
Using Lemma 3.2, we have: \[\Sigma_{ij}=\mathbb{E}[f_{i}(\tilde{u}_{i})f_{j}(\tilde{u}_{j})]=\sum_{r,s\geq 0}\zeta_{r}(f_{i})\zeta_{s}(f_{j})\mathbb{E}\Big[\hat{h}_{r}(\tilde{u}_{i})\hat{h}_{s}(\tilde{u}_{j})\Big]=\sum_{r\geq 0}\zeta_{r}(f_{i})\zeta_{r}(f_{j})\operatorname{Cov}[\tilde{u}_{i},\tilde{u}_{j}]^{r}=\sum_{r\geq 0}\zeta_{r}(f_{i})\zeta_{r}(f_{j})S_{ii}^{-r/2}S_{jj}^{-r/2}S_{ij}^{r}=\sum_{r\geq 0}(D_{r})_{i}(D_{r})_{j}(S^{\circ r})_{ij},\] which proves the Proposition. ### Approximation of \(\Sigma\) for weakly correlated Gaussian vectors We will now explain how the expansion of \(\Sigma\) given by Proposition 3.7 can be simplified when \(u\) has weakly correlated entries, in the sense that its covariance matrix \(S\) is close to \(I_{n}\). To this end we let \(\Delta=S-I_{n}\), and: \[\Sigma_{\text{approx}}=\left\|f\right\|_{\mathcal{H}}^{2}I_{n}+\frac{\zeta_{2}(f)^{2}}{2}\vec{\operatorname{diag}}(\Delta)\vec{\operatorname{diag}}(\Delta)^{\top}+\sum_{r=1,2,3}\zeta_{r}(f)^{2}\Delta^{\circ r}.\] **Assumptions 3.8**.: 1. \(f\) is Lipschitz and Gaussian centered, i.e. \(\zeta_{0}(f)=\mathbb{E}[f(\mathcal{N})]=0\). 2. \(\left\|\Delta\right\|\) and \(\left\|\vec{\operatorname{diag}}(\Delta)\right\|\) are bounded. 3. \(\left\|\Delta\right\|_{\max}\) converges to \(0\). **Theorem 3.9**.: _Under Assumptions 3.8, \(\left\|\Sigma-\Sigma_{\text{approx}}\right\|\leq O(\left\|\Delta\right\|_{\max})\)._ Proof.: Let \(\epsilon_{n}=\left\|\Delta\right\|_{\max}\). Since \(f\) is Lipschitz, \(\left\|f-f_{i}\right\|_{\mathcal{H}}=O\Big(\big|1-S_{ii}^{1/2}\big|\Big)=O(\epsilon_{n})\). We deduce that \(\left\|\operatorname{diag}(\Sigma)-\left\|f\right\|_{\mathcal{H}}^{2}I_{n}\right\|=\max\limits_{1\leq i\leq n}\Big|\left\|f_{i}\right\|_{\mathcal{H}}^{2}-\left\|f\right\|_{\mathcal{H}}^{2}\Big|\leq O(\epsilon_{n})\). For the off-diagonal terms, let \(\nu\) be the vector in \(\mathbb{R}^{n}\) defined by its coordinates \(\nu_{i}=\zeta_{0}(f_{i})\). Let also \(\mu=\frac{\zeta_{2}(f)}{\sqrt{2}}\vec{\operatorname{diag}}(\Delta)\). The decomposition: \[\operatorname{off}(\Sigma)=\operatorname{off}\Big(\nu\nu^{\top}\Big)+\sum_{r\geq 1}D_{r}\operatorname{off}(\Delta^{\circ r})\,D_{r}\] follows easily from the expansion of \(\Sigma\) given by Proposition 3.7. Since \(\zeta_{0}(f)=0\) and \(S_{ii}^{1/2}=1+\frac{\Delta_{ii}}{2}+O(\Delta_{ii}^{2})\), using the second order expansion given by Corollary 3.4 applied to the function \(f\), uniformly in \(i\in\llbracket 1,n\rrbracket\) we have: \[\nu_{i}=\zeta_{0}(f_{i})=\sqrt{2}\Big(S_{ii}^{1/2}-1\Big)\zeta_{2}(f)+O\bigg(\Big(S_{ii}^{1/2}-1\Big)^{2}\bigg)=\frac{\zeta_{2}(f)}{\sqrt{2}}\Delta_{ii}+O(\Delta_{ii}^{2})=\mu_{i}+O(\Delta_{ii}^{2}).\] Therefore \(\left\|\nu-\mu\right\|^{2}=O\Bigg(\sum_{i=1}^{n}\Delta_{ii}^{4}\Bigg)\leq O\bigg(\max_{1\leq i\leq n}\Delta_{ii}^{2}\bigg)\left\|\vec{\operatorname{diag}}(\Delta)\right\|^{2}\leq O(\epsilon_{n}^{2})\), and: \[\left\|\nu\nu^{\top}-\mu\mu^{\top}\right\|\leq\left\|\nu-\mu\right\|\big(\left\|\nu\right\|+\left\|\mu\right\|\big)\leq O(\epsilon_{n}).\] For the remaining sum \(\sum_{r\geq 1}D_{r}\operatorname{off}(\Delta^{\circ r})\,D_{r}\), we use again that \(|\zeta_{r}(f_{i})-\zeta_{r}(f)|\leq\left\|f_{i}-f\right\|_{\mathcal{H}}\leq O(\epsilon_{n})\), hence \(\left\|D_{r}-\zeta_{r}(f)I_{n}\right\|\leq O(\epsilon_{n})\), and in particular \(\left\|D_{r}\right\|\leq O(1)\).
Using Proposition 3.6, for \(r=1,2\) or \(3\): \[\left\|D_{r}\operatorname{off}(\Delta^{\circ r})\,D_{r}-\zeta_{r}(f)^{2}\Delta^{\circ r}\right\|\leq\zeta_{r}(f)^{2}\left\|\operatorname{diag}(\Delta^{\circ r})\right\|+\left\|D_{r}-\zeta_{r}(f)I_{n}\right\|\left\|\operatorname{off}(\Delta^{\circ r})\right\|\big(\left\|D_{r}\right\|+|\zeta_{r}(f)|\big)\leq O(\epsilon_{n}).\] Finally for the remaining terms, using Proposition 3.6 for \(r\geq 4\): \[\Bigg\|\sum_{r\geq 4}D_{r}\operatorname{off}(\Delta^{\circ r})\,D_{r}\Bigg\|\leq\sum_{r\geq 2}O\big(\epsilon_{n}^{2r-2}\big)\leq O\big(\epsilon_{n}^{2}\big).\] ### Linearization of \(\Sigma\) in specific settings Let: \[\Sigma_{\operatorname{lin}}=\left\|f\right\|_{\mathcal{H}}^{2}I_{n}+\zeta_{1}(f)^{2}\Delta.\] The matrix \(\Sigma_{\operatorname{lin}}\) is obtained by means of usual matrix operations, easy to handle with classical random matrix theory tools, in contrast to \(\Sigma_{\operatorname{approx}}\) which involves Hadamard products. However \(\left\|\Sigma-\Sigma_{\operatorname{lin}}\right\|\) may not converge to \(0\) under the mere Assumptions 3.8. We can nonetheless identify conditions involving \(\Delta\) and \(f\), under which \(\Sigma_{\operatorname{lin}}\) is a good approximation of \(\Sigma\) either entry-wise, or in spectral norm, or for their respective Stieltjes transforms. **Proposition 3.10**.: _Under Assumptions 3.8, \(\left\|\Sigma-\Sigma_{\operatorname{lin}}\right\|\) is bounded, and with \(\epsilon_{n}=\zeta_{2}(f)^{2}\|\Delta\|_{\max}^{2}+\zeta_{3}(f)^{2}\|\Delta\|_{\max}^{3}\):_ \[\left\|\Sigma-\Sigma_{\operatorname{lin}}\right\|_{\max}\leq O(\left\|\Delta\right\|_{\max}),\] \[\left\|\Sigma-\Sigma_{\operatorname{lin}}\right\|\leq O(\left\|\Delta\right\|_{\max}+n\epsilon_{n}),\] \[|g_{\Sigma}(z)-g_{\Sigma_{\operatorname{lin}}}(z)|\leq\frac{1}{\Im(z)^{2}}O\big(\left\|\Delta\right\|_{\max}+\sqrt{n}\epsilon_{n}\big).\] Proof.: We start from the expression: \[\Sigma_{\operatorname{approx}}-\Sigma_{\operatorname{lin}}=\frac{\zeta_{2}(f)^{2}}{2}\vec{\operatorname{diag}}(\Delta)\vec{\operatorname{diag}}(\Delta)^{\top}+\zeta_{2}(f)^{2}\Delta^{\circ 2}+\zeta_{3}(f)^{2}\Delta^{\circ 3}.\] The matrices \(\vec{\operatorname{diag}}(\Delta)\vec{\operatorname{diag}}(\Delta)^{\top}\), \(\Delta^{\circ 2}\) and \(\Delta^{\circ 3}\) are all bounded in spectral norm, thus \(\left\|\Sigma_{\operatorname{approx}}-\Sigma_{\operatorname{lin}}\right\|\) is bounded, and from Theorem 3.9 \(\left\|\Sigma-\Sigma_{\operatorname{lin}}\right\|\) is always bounded. The bound \(\left\|\Sigma_{\operatorname{approx}}-\Sigma_{\operatorname{lin}}\right\|_{\max}\leq O(\epsilon_{n})\) is immediate given the above expression for the difference of these matrices, and from Theorem 3.9: \[\left\|\Sigma-\Sigma_{\operatorname{lin}}\right\|_{\max}\leq\left\|\Sigma-\Sigma_{\operatorname{approx}}\right\|+\left\|\Sigma_{\operatorname{approx}}-\Sigma_{\operatorname{lin}}\right\|_{\max}\leq O(\left\|\Delta\right\|_{\max}+\epsilon_{n})\leq O(\left\|\Delta\right\|_{\max}).\] The other bounds follow from classical matrix inequalities.
Indeed: \[\left\|\Sigma-\Sigma_{\operatorname{lin}}\right\|\leq\left\|\Sigma-\Sigma_{\operatorname{approx}}\right\|+\left\|\Sigma_{\operatorname{approx}}-\Sigma_{\operatorname{lin}}\right\|\leq O(\left\|\Delta\right\|_{\max})+n\left\|\Sigma_{\operatorname{approx}}-\Sigma_{\operatorname{lin}}\right\|_{\max}\leq O(\left\|\Delta\right\|_{\max}+n\epsilon_{n}).\] Finally for the inequality on Stieltjes transforms, for any \(A,B\in\mathbb{R}^{n\times n}\) and \(z\in\mathbb{C}^{+}\): \[|g_{A}(z)-g_{B}(z)|=\bigg|\frac{1}{n}\operatorname{Tr}(\mathcal{G}_{A}(z)(B-A)\mathcal{G}_{B}(z))\bigg|\leq\frac{1}{n}\left\|\mathcal{G}_{A}(z)\right\|\left\|B-A\right\|_{F}\left\|\mathcal{G}_{B}(z)\right\|\left\|I_{n}\right\|_{F}\leq\frac{\left\|B-A\right\|_{F}}{\sqrt{n}\,\Im(z)^{2}}.\] We deduce that: \[|g_{\Sigma}(z)-g_{\Sigma_{\operatorname{lin}}}(z)|\leq|g_{\Sigma}(z)-g_{\Sigma_{\operatorname{approx}}}(z)|+|g_{\Sigma_{\operatorname{approx}}}(z)-g_{\Sigma_{\operatorname{lin}}}(z)|\leq\frac{\left\|\Sigma-\Sigma_{\operatorname{approx}}\right\|_{F}}{\sqrt{n}\,\Im(z)^{2}}+\frac{\left\|\Sigma_{\operatorname{approx}}-\Sigma_{\operatorname{lin}}\right\|_{F}}{\sqrt{n}\,\Im(z)^{2}}\leq\frac{\left\|\Sigma-\Sigma_{\operatorname{approx}}\right\|}{\Im(z)^{2}}+\sqrt{n}\,\frac{\left\|\Sigma_{\operatorname{approx}}-\Sigma_{\operatorname{lin}}\right\|_{\max}}{\Im(z)^{2}}\leq\frac{1}{\Im(z)^{2}}O\big(\left\|\Delta\right\|_{\max}+\sqrt{n}\epsilon_{n}\big).\] _Remark 3.11_.: In practice, \(\left\|\Sigma-\Sigma_{\operatorname{lin}}\right\|\) converges to \(0\) provided one of the following additional hypotheses holds true: * \(\left\|\Delta\right\|_{\max}=o\big(n^{-1/2}\big)\), * \(\zeta_{2}(f)=0\) and \(\left\|\Delta\right\|_{\max}=o\big(n^{-1/3}\big)\), * \(\zeta_{2}(f)=\zeta_{3}(f)=0\). For the Stieltjes transforms, \(g_{\Sigma}(z)-g_{\Sigma_{\operatorname{lin}}}(z)\) converges to \(0\) pointwise, and thus \(\Sigma\) and \(\Sigma_{\operatorname{lin}}\) have the same limiting spectral distribution if it exists, provided one of the following additional hypotheses holds true: * \(\left\|\Delta\right\|_{\max}=o\big(n^{-1/4}\big)\), * \(\zeta_{2}(f)=0\) and \(\left\|\Delta\right\|_{\max}=o\big(n^{-1/6}\big)\), * \(\zeta_{2}(f)=\zeta_{3}(f)=0\). ## 4. Deterministic equivalent of sample covariance matrices In this section we recall the latest results about the deterministic equivalent of the resolvents of sample covariance matrices on which this article is based. These estimates were first established in [10] with a convergence speed in \(n\), and complemented with quantitative estimates in the spectral parameter \(z\) in [11]. In a second step, we will thoroughly study the properties of the deterministic equivalent matrices appearing in these results.
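Before moving on to resolvents, the linearization of Section 3 can be checked numerically. The sketch below is our own illustration and not part of the proofs: the toy covariance \(S\), the centered ReLU and the truncation order are arbitrary choices. It evaluates the expansion of Proposition 3.7 with Hermite coefficients computed by Gauss-Hermite quadrature, and compares \(\Sigma\) with \(\Sigma_{\operatorname{lin}}\) as in Proposition 3.10.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

rng = np.random.default_rng(0)
n, R = 200, 20                     # dimension and truncation order (arbitrary)

# Toy weakly correlated covariance: a correlation matrix of white noise,
# so that S_ii = 1, I_n + Delta is psd and ||Delta||_max is small.
m = 40 * n
G = rng.standard_normal((n, m))
S = G @ G.T / m
d = np.sqrt(np.diag(S))
S = S / np.outer(d, d)
Delta = S - np.eye(n)

f = lambda t: np.maximum(t, 0.0) - 1.0 / np.sqrt(2.0 * np.pi)  # Gaussian centered ReLU

# Hermite coefficients zeta_r(f) = <f, h_r> / sqrt(r!), via Gauss-Hermite(E)
# quadrature for the weight e^{-x^2/2} (whose total mass is sqrt(2*pi)).
x, w = hermegauss(200)
w = w / np.sqrt(2.0 * np.pi)
zeta = np.array([np.sum(w * f(x) * hermeval(x, np.eye(R)[r]))
                 / math.sqrt(math.factorial(r)) for r in range(R)])

# Proposition 3.7 (here S_ii = 1, so f_i = f and D_r = zeta_r(f) I_n):
Sigma = sum(zeta[r] ** 2 * S ** r for r in range(R))       # S ** r = Hadamard power
Sigma_lin = np.sum(zeta ** 2) * np.eye(n) + zeta[1] ** 2 * Delta

print("||Delta||_max         =", np.abs(Delta).max())
print("||Sigma - Sigma_lin|| =", np.linalg.norm(Sigma - Sigma_lin, 2))
```

On such draws the spectral gap between \(\Sigma\) and \(\Sigma_{\operatorname{lin}}\) should be of the order of \(\left\|\Delta\right\|_{\max}\), in line with Proposition 3.10.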
Given \(\Sigma\in\mathbb{R}^{n\times n}\) a positive semi-definite matrix and a sequence of shape parameters \(\gamma_{n}>0\), we build the matrix function \(\mathbf{G}_{\boxtimes}^{\Sigma}:\mathbb{C}^{+}\to\mathbb{C}^{n\times n}\) from the following objects: \[\nu^{\Sigma}=\operatorname{MP}(\gamma_{n})\boxtimes\mu_{\Sigma},\qquad\tilde{\nu}^{\Sigma}=(1-\gamma_{n})\cdot\delta_{0}+\gamma_{n}\cdot\nu^{\Sigma},\qquad l_{\tilde{\nu}^{\Sigma}}(z)=-1/g_{\tilde{\nu}^{\Sigma}}(z),\] \[\mathbf{G}_{\boxtimes}^{\Sigma}(z)=\big(-zg_{\tilde{\nu}^{\Sigma}}(z)\Sigma-zI_{n}\big)^{-1}=z^{-1}l_{\tilde{\nu}^{\Sigma}}(z)\,\mathcal{G}_{\Sigma}\big(l_{\tilde{\nu}^{\Sigma}}(z)\big).\] Let us state without proofs some useful properties of these objects. \(\tilde{\nu}^{\Sigma}\) is always a true probability measure ([11, Lemma 6.1]). The usual resolvent inequality \(\left\|\mathbf{G}_{\boxtimes}^{\Sigma}(z)\right\|\leq 1/\Im(z)\) holds ([11, Lemma 5.1]), and \(\operatorname{Tr}\mathbf{G}_{\boxtimes}^{\Sigma}(z)=ng_{\nu^{\Sigma}}(z)\) ([11, Proposition 6.2]). _Remark 4.1_.: The notation \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\) is inspired from free probability theory, which appears as the profound canvas hidden within the above definitions. Indeed it is not difficult to see that the pair \(\big(g_{\tilde{\nu}^{\Sigma}}(z),\mathbf{G}_{\boxtimes}^{\Sigma}(z)\big)\) satisfies the system of self-consistent equations: \[g_{\tilde{\nu}^{\Sigma}}(z)=\bigg(-z-z\frac{\gamma_{n}}{n}\operatorname{Tr}\big(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\Sigma\big)\bigg)^{-1},\qquad\mathbf{G}_{\boxtimes}^{\Sigma}(z)=\big(-zI_{n}-zg_{\tilde{\nu}^{\Sigma}}(z)\Sigma\big)^{-1}.\] This may be rewritten as the operator-valued self-consistent equation: \[z\mathcal{H}(z)=I_{n+1}+z\eta(\mathcal{H}(z))\mathcal{H}(z),\] where \(\mathcal{H}(z)=\begin{pmatrix}g_{\tilde{\nu}^{\Sigma}}(z)&0\\ 0&\mathbf{G}_{\boxtimes}^{\Sigma}(z)\end{pmatrix}\), and: \[\eta:\mathbb{C}\oplus\mathbb{C}^{n\times n}\to\mathbb{C}\oplus\mathbb{C}^{n\times n},\qquad\begin{pmatrix}g&0\\ 0&G\end{pmatrix}\mapsto\begin{pmatrix}\frac{\gamma_{n}}{n}\operatorname{Tr}(G\Sigma)&0\\ 0&g\Sigma\end{pmatrix}.\] \(z\mathcal{H}(z^{2})\) thus corresponds to the resolvent of an operator-valued free semi-circular variable with covariance \(\eta\), see [12, Section 3.3]. In this sense the definition of \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\) extends the notion of free convolution, and it should come as no surprise that such objects appear as deterministic equivalents of sample covariance matrices. ### General results Let \(Y\in\mathbb{R}^{d\times n}\) be a sequence of random matrices. The associated sample covariance matrix is \(K=Y^{\top}Y/d\), and for \(z\in\mathbb{C}^{+}\) we define its resolvent \(\mathcal{G}_{K}(z)=\left(K-zI_{n}\right)^{-1}\) and Stieltjes transform \(g_{K}(z)=(1/n)\operatorname{Tr}\mathcal{G}_{K}(z)\). From the expected covariance matrix \(\Sigma=\mathbb{E}[K]\), we follow the above procedure to build the matrix function \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\). **Assumptions 4.2**.: 1. \(Y\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\), the rows of \(Y\) are i.i.d. sampled from the distribution of a random vector \(y\), with \(\left\|\mathbb{E}[y]\right\|\) and \(\left\|\mathbb{E}\big[yy^{\top}\big]\right\|=\left\|\Sigma\right\|\) bounded. 2. The ratio \(\gamma_{n}=\dfrac{n}{d}\) is bounded from above and away from \(0\).
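Numerically, the self-consistent system of Remark 4.1 suggests a simple way to evaluate \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\). The following sketch is our own illustration: the damped fixed-point iteration and its parameters are heuristic choices, not a scheme discussed in the text.

```python
import numpy as np

def det_equivalent(Sigma: np.ndarray, gamma: float, z: complex,
                   n_iter: int = 500, damping: float = 0.5):
    """Iterate the self-consistent system of Remark 4.1:
        g = (-z - z * gamma * Tr(G Sigma) / n)^{-1},
        G = (-z I_n - z g Sigma)^{-1},
    so that g approximates g_{nu~^Sigma}(z) and G approximates G_box^Sigma(z)."""
    n = Sigma.shape[0]
    I = np.eye(n)
    g = -1.0 / z                    # g_{delta_0}(z), a convenient starting point
    for _ in range(n_iter):
        G = np.linalg.inv(-z * I - z * g * Sigma)
        g_new = 1.0 / (-z - z * gamma * np.trace(G @ Sigma) / n)
        g = damping * g_new + (1.0 - damping) * g
    G = np.linalg.inv(-z * I - z * g * Sigma)
    return g, G
```

By the identity \(\operatorname{Tr}\mathbf{G}_{\boxtimes}^{\Sigma}(z)=ng_{\nu^{\Sigma}}(z)\), the normalized trace of the output gives the Stieltjes transform of \(\nu^{\Sigma}=\operatorname{MP}(\gamma_{n})\boxtimes\mu_{\Sigma}\).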
**Theorem 4.3**.: _[11, Theorem 2.3] Uniformly under Assumptions 4.2, there exists \(C>0\) such that uniformly in \(z\in\mathbb{C}^{+}\) with \(\Im(z)\) bounded and \(\dfrac{|z|^{7}}{\Im(z)^{16}}\leq n/C\), the following inequality holds:_ \[\left\|\mathbb{E}[\mathcal{G}_{K}(z)]-\mathbf{G}_{\boxtimes}^{\Sigma}(z)\right\|_{F}\leq O\Bigg(\dfrac{|z|^{5/2}}{\Im(z)^{9}n^{1/2}}\Bigg).\] _Remark 4.4_.: By "uniformly under Assumptions 4.2", we mean that the implicit constants in the result only depend on the constants chosen in the assumptions. Said otherwise, if a family of random matrices satisfies Assumptions 4.2 uniformly, then the result holds uniformly for any matrix of this family. Keeping track of the exponents appearing in this theorem will become more and more difficult as we go further into this article. We choose for this reason to work with a weaker notion of approximation using our concept of \(O_{z}(\epsilon_{n})\) polynomial bounds. As a side effect, we will see that we no longer need to specify the conditions on \(z\) under which the approximation holds true. **Theorem 4.5** (Simplified version of the Deterministic Equivalent).: _Uniformly under Assumptions 4.2, the following concentration properties hold true:_ 1. \(g_{K}(z)\propto\mathcal{E}(O_{z}(1/n))\) _and_ \(\mathcal{G}_{K}(z)\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}\big(O_{z}(1/\sqrt{n})\big)\)_._ 2. \(g_{K}(z)\in g_{\nu^{\Sigma}}(z)\pm\mathcal{E}(O_{z}(1/n))\) _and_ \(\mathcal{G}_{K}(z)\in_{\left\|\cdot\right\|_{F}}\mathbf{G}_{\boxtimes}^{\Sigma}(z)\pm\mathcal{E}\big(O_{z}(1/\sqrt{n})\big)\)_._ Proof.: From Theorem 4.3, Lemma 2.6 and the _a priori_ bound \(\left\|\mathbb{E}[\mathcal{G}_{K}(z)]-\mathbf{G}_{\boxtimes}^{\Sigma}(z)\right\|_{F}\leq 2\sqrt{n}/\Im(z)\), we have \(\left\|\mathbb{E}[\mathcal{G}_{K}(z)]-\mathbf{G}_{\boxtimes}^{\Sigma}(z)\right\|_{F}\leq O_{z}(1/\sqrt{n})\). The general properties of concentration recalled in Proposition 2.2 prove the Theorem. Similarly to [11, Corollaries 2.5 and 2.8], we may deduce the following spectral properties of \(K\). We will not prove this result immediately, but rather prompt our reader to consult the proof of Corollary 5.9 which is extremely similar. **Corollary 4.6**.: _Uniformly under Assumptions 4.2:_ 1. \(|g_{K}(z)-g_{\nu^{\Sigma}}(z)|\leq O_{z}\Big(\sqrt{\log n}/n\Big)\) _and_ \(\left\|\mathcal{G}_{K}(z)-\mathbf{G}_{\boxtimes}^{\Sigma}(z)\right\|_{\max}\leq O_{z}\Big(\sqrt{\log n/n}\Big)\)_._ 2. _If the eigenvalues of_ \(\Sigma\) _are bounded from below, there exists_ \(\theta>0\) _such that_ \(D(\mu_{K},\nu^{\Sigma})\leq O(n^{-\theta})\) _a.s._ 3. _If additionally_ \(\mu_{\Sigma}\to\mu_{\infty}\) _weakly and_ \(\gamma_{n}\to\gamma_{\infty}\)_, then_ \(\mu_{K}\to\nu_{\infty}=\operatorname{MP}(\gamma_{\infty})\boxtimes\mu_{\infty}\) _weakly a.s., and more precisely:_ \[D(\mu_{K},\nu_{\infty})\leq D(\mu_{\Sigma},\mu_{\infty})+O(|\gamma_{n}-\gamma_{\infty}|)+O(n^{-\theta})\quad\text{a.s.}\] ### Regularity of the Stieltjes transform with respect to the free convolution Let \(\Sigma\in\mathbb{R}^{n\times n}\) be a sequence of positive semi-definite matrices and \(\tau\) probability measures such that \(g_{\Sigma}(z)-g_{\tau}(z)\) converges to \(0\) pointwise in \(\mathbb{C}^{+}\). Then \(\mu_{\Sigma}\) and \(\tau\) converge weakly to the same limit if it exists.
The free multiplicative convolutions \(\nu=\operatorname{MP}(\gamma_{n})\boxtimes\mu_{\Sigma}\) and \(\chi=\operatorname{MP}(\gamma_{n})\boxtimes\tau\) also converge weakly to the same limit, which is equivalent to their Stieltjes transforms \(g_{\nu}(z)\) and \(g_{\chi}(z)\) having the same limit pointwise. In this section we would like to refine this result by quantifying the convergence of \(g_{\nu}(z)-g_{\chi}(z)\) to \(0\). In the upcoming paragraphs, we always consider shape parameters \(\gamma_{n}\) that are bounded from above and away from \(0\), like in Assumptions 4.2. We remind our reader that we are dealing with sequences of matrices, measures, and complex functions, even if we sometimes omit the indices \(n\) and \(z\) for a better readability. **Theorem 4.7**.: _Let \(\Sigma\in\mathbb{R}^{n\times n}\) be deterministic positive semi-definite matrices, and \(\tau\) deterministic probability measures supported on \(\mathbb{R}^{+}\). If \(|g_{\Sigma}(z)-g_{\tau}(z)|\leq O_{z}(\epsilon_{n})\), then \(|g_{\nu}(z)-g_{\chi}(z)|\leq O_{z}(\epsilon_{n})\)._ The proof of this result may be decomposed in several steps. First we translate the definition of the measures \(\nu\) and \(\chi\) into appropriate self-consistent equations on their reciprocal Cauchy transforms (Proposition 4.8). Then we see that \(\chi\) is an approximate fixed point of the equation corresponding to \(\nu\) (Proposition 4.10). Finally we use the stability of these self-consistent equations and the tools developed in [10] and [11] to conclude. The key function in the upcoming paragraphs, \(l_{\tilde{\nu}^{\Sigma}}(z)=-1/g_{\tilde{\nu}^{\Sigma}}(z)\), is known as the reciprocal Cauchy transform of the measure \(\tilde{\nu}^{\Sigma}\). As such some classical properties of this function may be found in the seminal book [14, Section 3.4]. We will nonetheless provide short proofs for all the properties we need in this article. **Proposition 4.8** (Self-consistent equation for reciprocal Cauchy transforms).: _If \(\mu\) is a measure supported on \(\mathbb{R}^{+}\), we let \(\nu=\operatorname{MP}(\gamma_{n})\boxtimes\mu\), \(\tilde{\nu}=(1-\gamma_{n})\cdot\delta_{0}+\gamma_{n}\cdot\nu\), and \(l_{\tilde{\nu}}(z)=-1/g_{\tilde{\nu}}(z)\). Then \(l_{\tilde{\nu}}(z)\) is the only solution on \(\mathbb{C}^{+}\) of the self-consistent equation in \(l\):_ \[l=z+\gamma_{n}l+\gamma_{n}l^{2}g_{\mu}(l)=z+\gamma_{n}z\int_{\mathbb{R}}\frac{t}{\frac{zt}{l}-z}\,\mu(dt).\] Proof.: The two right hand side terms are always equal since: \[\int_{\mathbb{R}}\frac{zt}{\frac{zt}{l}-z}\,\mu(dt)=l\int_{\mathbb{R}}\frac{t}{t-l}\,\mu(dt)=l\int_{\mathbb{R}}\bigg(1+\frac{l}{t-l}\bigg)\mu(dt)=l+l^{2}g_{\mu}(l).\] Let us work with the first formulation. It is a classical property of Stieltjes transforms that \(g_{\tilde{\nu}}(z)\in\mathbb{C}^{+}\) when \(z\in\mathbb{C}^{+}\), hence \(l_{\tilde{\nu}}(z)\in\mathbb{C}^{+}\). Since \(g_{\tilde{\nu}}=\dfrac{\gamma_{n}-1}{z}+\gamma_{n}g_{\nu}\), we have the identity \(1-\gamma_{n}-\gamma_{n}zg_{\nu}=\dfrac{z}{l_{\tilde{\nu}}}\).
By definition of the free multiplicative convolution with a Marcenko-Pastur distribution, \(g_{\nu}\) is the only solution on \(\mathbb{C}^{+}\) of: \[g_{\nu}=\int_{\mathbb{R}}\frac{1}{(1-\gamma_{n}-\gamma_{n}zg_{\nu})t-z}\,\mu(dt)=\int_{\mathbb{R}}\frac{1}{zt/l_{\tilde{\nu}}-z}\,\mu(dt)=\frac{l_{\tilde{\nu}}}{z}\int_{\mathbb{R}}\frac{1}{t-l_{\tilde{\nu}}}\,\mu(dt)=\frac{l_{\tilde{\nu}}}{z}g_{\mu}(l_{\tilde{\nu}}).\] Using again the identity \(1-\gamma_{n}-\gamma_{n}zg_{\nu}=\frac{z}{l_{\tilde{\nu}}}\), the self-consistent equation characterizing \(g_{\nu}\) is equivalent for \(l_{\tilde{\nu}}\) to satisfy: \[\gamma_{n}l_{\tilde{\nu}}+\gamma_{n}l_{\tilde{\nu}}^{2}g_{\mu}(l_{\tilde{\nu}})=\gamma_{n}l_{\tilde{\nu}}(1+zg_{\nu})=l_{\tilde{\nu}}-z.\] **Lemma 4.9**.: _With the same notations and hypotheses as in Proposition 4.8:_ \[0\leq\Im\big(z^{-1}l_{\tilde{\nu}}(z)\big),\qquad\Im(z)\leq\Im(l_{\tilde{\nu}}(z))\leq O(1+\Im(z)),\qquad|l_{\tilde{\nu}}(z)|\leq O\bigg(\frac{|z|}{\Im(z)}\bigg).\] _In particular if a function \(\zeta:\mathbb{N}\times\mathbb{C}^{+}\to\mathbb{R}\) satisfies \(\zeta(n,z)\leq O_{z}(\epsilon_{n})\), then \(\zeta(n,l_{\tilde{\nu}}(z))\leq O_{z}(\epsilon_{n})\)._ Proof.: \(\nu=\operatorname{MP}(\gamma_{n})\boxtimes\mu\) is supported on \(\mathbb{R}^{+}\), thus: \[-\Im\bigg(\frac{z}{l_{\tilde{\nu}}}\bigg)=-\Im(1-\gamma_{n}-\gamma_{n}zg_{\nu})=\gamma_{n}\int_{\mathbb{R}^{+}}\Im\bigg(\frac{z}{t-z}\bigg)\nu(dt)=\gamma_{n}\Im(z)\int_{\mathbb{R}^{+}}\frac{t}{|t-z|^{2}}\,\nu(dt)\geq 0,\] which proves that \(\Im\big(z^{-1}l_{\tilde{\nu}}\big)\geq 0\). Secondly: \[\Im(l_{\tilde{\nu}}-z)=\Im\Bigg(\gamma_{n}z\int_{\mathbb{R}}\frac{t}{\frac{zt}{l_{\tilde{\nu}}}-z}\,\mu(dt)\Bigg)=\gamma_{n}\int_{\mathbb{R}^{+}}\Im\bigg(\frac{1}{1/l_{\tilde{\nu}}-1/t}\bigg)\mu(dt)=\gamma_{n}\int_{\mathbb{R}^{+}}\frac{\Im(l_{\tilde{\nu}})\,t^{2}}{\Re(l_{\tilde{\nu}})^{2}+\Im(l_{\tilde{\nu}})^{2}+t^{2}-2\Re(l_{\tilde{\nu}})t}\,\mu(dt)\geq 0.\] The right hand side integral is bounded from above by \(\dfrac{1}{\Im(l_{\tilde{\nu}})}\int_{\mathbb{R}^{+}}t^{2}\,\mu(dt)\), hence \(\Im(l_{\tilde{\nu}})^{2}\leq\Im(z)\Im(l_{\tilde{\nu}})+\gamma_{n}\int_{\mathbb{R}^{+}}t^{2}\,\mu(dt)\). Solving this second order polynomial inequality gives \(\Im(l_{\tilde{\nu}})\leq\Im(z)/2+\Big(\gamma_{n}\int_{\mathbb{R}^{+}}t^{2}\,\mu(dt)+\Im(z)^{2}/4\Big)^{1/2}\), hence \(\Im(l_{\tilde{\nu}})\leq O(1+\Im(z))\). We also have \(\Im\bigg(\dfrac{zt}{l_{\tilde{\nu}}}\bigg)\leq 0\) for any \(t\geq 0\), hence: \[|l_{\tilde{\nu}}|\leq|z|+\gamma_{n}|z|\int_{\mathbb{R}^{+}}\dfrac{t}{\big|\Im\big(\frac{zt}{l_{\tilde{\nu}}}-z\big)\big|}\,\mu(dt)\leq|z|+\gamma_{n}\dfrac{|z|}{\Im(z)}\int_{\mathbb{R}^{+}}t\,\mu(dt)\leq O\bigg(\dfrac{|z|}{\Im(z)}\bigg).\] For the last statement, let \(\zeta\) be a function such that \(\zeta(n,z)\leq O_{z}(\epsilon_{n})\). If \(\Im(z)\) is bounded, so is \(\Im(l_{\tilde{\nu}}(z))\), hence there exists \(\alpha>0\) such that: \[\zeta(n,l_{\tilde{\nu}}(z))\leq O\bigg(\epsilon_{n}\frac{|l_{\tilde{\nu}}(z)|^{\alpha}}{\Im(l_{\tilde{\nu}}(z))^{2\alpha}}\bigg)\leq O\bigg(\epsilon_{n}\frac{|z|^{\alpha}}{\Im(z)^{3\alpha}}\bigg)\leq O\bigg(\epsilon_{n}\frac{|z|^{2\alpha}}{\Im(z)^{4\alpha}}\bigg),\] from which we deduce that \(\zeta(n,l_{\tilde{\nu}}(z))\leq O_{z}(\epsilon_{n})\). We may now move on to the second part of the proof of Theorem 4.7.
Let us define the mapping: \[\mathcal{F}:l\in\mathbb{C}^{+}\mapsto z+\gamma_{n}\dfrac{z}{n}\operatorname{Tr}\bigg(\Big(\dfrac{z}{l}\Sigma-zI_{n}\Big)^{-1}\Sigma\bigg).\] As shown in Proposition 4.8, the definition of \(\mathcal{F}\) is equivalent to: \[\mathcal{F}(l)=z+\gamma_{n}z\int_{\mathbb{R}}\dfrac{t}{\frac{zt}{l}-z}\,\mu_{\Sigma}(dt)=z+\gamma_{n}l+\gamma_{n}l^{2}g_{\Sigma}(l).\] We set \(l_{\tilde{\nu}}(z)=-1/g_{\tilde{\nu}}(z)\) and \(l_{\tilde{\chi}}(z)=-1/g_{\tilde{\chi}}(z)\). In Proposition 4.8 we have proved that \(l_{\tilde{\nu}}(z)\) is a fixed point of \(\mathcal{F}\). We will see in the next Proposition that \(l_{\tilde{\chi}}(z)\) is almost a fixed point of \(\mathcal{F}\). **Proposition 4.10**.: \(|\mathcal{F}(l_{\tilde{\chi}}(z))-l_{\tilde{\chi}}(z)|\leq O_{z}(\epsilon_{n})\)_._ Proof.: Using the first formulation of the self-consistent equation of Proposition 4.8, we have \(l_{\tilde{\chi}}=z+\gamma_{n}l_{\tilde{\chi}}+\gamma_{n}l_{\tilde{\chi}}^{2}g_{\tau}(l_{\tilde{\chi}})\), thus: \[\mathcal{F}(l_{\tilde{\chi}})-l_{\tilde{\chi}}=\gamma_{n}l_{\tilde{\chi}}^{2}(g_{\mu_{\Sigma}}(l_{\tilde{\chi}})-g_{\tau}(l_{\tilde{\chi}})).\] As seen in Lemma 4.9, since \(|g_{\mu_{\Sigma}}(z)-g_{\tau}(z)|\leq O_{z}(\epsilon_{n})\), we also have \(|g_{\mu_{\Sigma}}(l_{\tilde{\chi}}(z))-g_{\tau}(l_{\tilde{\chi}}(z))|\leq O_{z}(\epsilon_{n})\), and we obtain: \[|\mathcal{F}(l_{\tilde{\chi}})-l_{\tilde{\chi}}|\leq\gamma_{n}|l_{\tilde{\chi}}|^{2}|g_{\mu_{\Sigma}}(l_{\tilde{\chi}})-g_{\tau}(l_{\tilde{\chi}})|\leq O(1)O\bigg(\dfrac{|z|}{\Im(z)}\bigg)^{2}O_{z}(\epsilon_{n})\leq O_{z}(\epsilon_{n}).\] The last step to prove Theorem 4.7 is to use the stability of the self-consistent equation \(l=\mathcal{F}(l)\). Let us recall the tools and results established in [14, Section 6]. For a fixed \(z\in\mathbb{C}^{+}\), we introduce the domain \(\mathbf{D}=\{\omega\in\mathbb{C}\) such that \(\Im(\omega)\geq\Im(z)\) and \(\Im(z^{-1}\omega)\geq 0\}\), and the semi-metric on \(\mathbb{C}^{+}\): \[d(\omega_{1},\omega_{2})=\frac{|\omega_{1}-\omega_{2}|}{\Im(\omega_{1})^{1/2}\Im(\omega_{2})^{1/2}}.\] \(\mathcal{F}\) is a contraction mapping on \(\mathbf{D}\) with respect to \(d\). More precisely, \(\mathcal{F}\) is \(k_{\mathcal{F}}\)-Lipschitz with \(k_{\mathcal{F}}=\frac{\frac{|z|}{\Im(z)^{2}}}{1+\frac{|z|}{\Im(z)^{2}}}\) ([14, Proposition 6.11]). Moreover, if \(c\in\mathbf{D}\) is a fixed point of \(\mathcal{F}\) and \(b\in\mathbf{D}\) any other point, provided \(k_{\mathcal{F}}(1+d(b,\mathcal{F}(b)))<1\), the following inequality holds true ([14, Lemma 6.14]): \[|c-b|\leq\frac{|\mathcal{F}(b)-b|}{1-k_{\mathcal{F}}(1+d(b,\mathcal{F}(b)))}.\] We have now collected all the arguments required to compare the Stieltjes transforms of \(\nu\) and \(\chi\). Proof of Theorem 4.7.: If \(\Im(z)\) is bounded, using Proposition 4.10 there exist \(\alpha\) and \(C>0\) such that \(|\mathcal{F}(l_{\tilde{\chi}})-l_{\tilde{\chi}}|\leq C\epsilon_{n}\frac{|z|^{\alpha}}{\Im(z)^{2\alpha}}\).
For values of \(z\) such that \(\epsilon_{n}\frac{|z|^{\alpha+1}}{\Im(z)^{2\alpha+3}}\leq\frac{1}{2C}\), we have \(\frac{|z|}{\Im(z)^{2}}d(\mathcal{F}(l_{\tilde{\chi}}),l_{\tilde{\chi}})\leq\frac{1}{2}\), thus: \[k_{\mathcal{F}}(1+d(\mathcal{F}(l_{\tilde{\chi}}),l_{\tilde{\chi}}))=\frac{\frac{|z|}{\Im(z)^{2}}+\frac{|z|}{\Im(z)^{2}}d(\mathcal{F}(l_{\tilde{\chi}}),l_{\tilde{\chi}})}{1+\frac{|z|}{\Im(z)^{2}}}\leq 1-\frac{1}{2\Big(1+\frac{|z|}{\Im(z)^{2}}\Big)}.\] In particular \(k_{\mathcal{F}}(1+d(\mathcal{F}(l_{\tilde{\chi}}),l_{\tilde{\chi}}))<1\), and from [14, Lemma 6.14]: \[|l_{\tilde{\chi}}-l_{\tilde{\nu}}|\leq\frac{|\mathcal{F}(l_{\tilde{\chi}})-l_{\tilde{\chi}}|}{1-k_{\mathcal{F}}(1+d(\mathcal{F}(l_{\tilde{\chi}}),l_{\tilde{\chi}}))}\leq 2\bigg(1+\frac{|z|}{\Im(z)^{2}}\bigg)C\epsilon_{n}\frac{|z|^{\alpha}}{\Im(z)^{2\alpha}}\leq O\bigg(\epsilon_{n}\frac{|z|^{\alpha+1}}{\Im(z)^{2\alpha+2}}\bigg).\] To conclude: \[|g_{\nu}-g_{\chi}|=\frac{|g_{\tilde{\nu}}-g_{\tilde{\chi}}|}{\gamma_{n}}=\frac{|g_{\tilde{\nu}}||g_{\tilde{\chi}}||l_{\tilde{\nu}}-l_{\tilde{\chi}}|}{\gamma_{n}}\leq O\bigg(\epsilon_{n}\frac{|z|^{\alpha+1}}{\Im(z)^{2\alpha+2}}\bigg),\] for values of \(z\) such that \(\epsilon_{n}\frac{|z|^{\alpha+1}}{\Im(z)^{2\alpha+3}}\leq\frac{1}{2C}\). Given the _a priori_ bound \(|g_{\nu}(z)-g_{\chi}(z)|\leq\frac{2}{\Im(z)}\) and Lemma 2.6, the bound \(|g_{\nu}(z)-g_{\chi}(z)|\leq O_{z}(\epsilon_{n})\) holds true. ### Approximation of deterministic equivalent built from deterministic matrices Given an approximation for the Stieltjes transform \(g_{\Sigma}(z)\approx g_{\tau}(z)\), and another approximation for the resolvent \(\mathcal{G}_{\Sigma}(z)\approx\mathbf{H}(z)\), can we find an approximation for the matrix \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\)? We build a matrix function \(\mathbf{K}\) using the same procedure we used earlier to build \(\mathbf{G}_{\boxtimes}^{\Sigma}\) from \(\mu_{\Sigma}\) and \(\mathcal{G}_{\Sigma}\): \[\chi=\operatorname{MP}(\gamma_{n})\boxtimes\tau,\qquad\tilde{\chi}=(1-\gamma_{n})\cdot\delta_{0}+\gamma_{n}\cdot\chi,\qquad l_{\tilde{\chi}}(z)=-1/g_{\tilde{\chi}}(z),\qquad\mathbf{K}(z)=z^{-1}l_{\tilde{\chi}}(z)\mathbf{H}(l_{\tilde{\chi}}(z)).\] **Proposition 4.11**.: _Let \(\Sigma\in\mathbb{R}^{n\times n}\) be deterministic positive semi-definite matrices, \(\tau\) deterministic probability measures supported on \(\mathbb{R}^{+}\), and \(\mathbf{H}:\mathbb{C}^{+}\to\mathbb{C}^{n\times n}\) deterministic complex functions. If \(|g_{\Sigma}(z)-g_{\tau}(z)|\leq O_{z}(\epsilon_{n})\) and \(\left\|\mathcal{G}_{\Sigma}(z)-\mathbf{H}(z)\right\|\leq O_{z}(\epsilon_{n}^{\prime})\), then_
_\(\left\|\mathbf{G}_{\boxtimes}^{\Sigma}(z)-\mathbf{K}(z)\right\|\leq O_{z}(\epsilon_{n}+\epsilon_{n}^{\prime})\)._ Proof.: We use a triangular inequality in the following decomposition: \[\mathbf{G}_{\boxtimes}^{\Sigma}(z)-\mathbf{K}(z)=z^{-1}(l_{\tilde{\nu}}-l_{\tilde{\chi}})\mathcal{G}_{\Sigma}(l_{\tilde{\nu}})+z^{-1}l_{\tilde{\chi}}(\mathcal{G}_{\Sigma}(l_{\tilde{\nu}})-\mathcal{G}_{\Sigma}(l_{\tilde{\chi}}))+z^{-1}l_{\tilde{\chi}}(\mathcal{G}_{\Sigma}(l_{\tilde{\chi}})-\mathbf{H}(l_{\tilde{\chi}})).\] For the first term, \(|l_{\tilde{\nu}}-l_{\tilde{\chi}}|\leq O_{z}(\epsilon_{n})\) and \(\left\|\mathcal{G}_{\Sigma}(l_{\tilde{\nu}})\right\|\leq\dfrac{1}{\Im(l_{\tilde{\nu}})}\leq\dfrac{1}{\Im(z)}\), thus \(\left\|z^{-1}(l_{\tilde{\nu}}-l_{\tilde{\chi}})\mathcal{G}_{\Sigma}(l_{\tilde{\nu}})\right\|\leq O_{z}(\epsilon_{n})\). For the second term, using a resolvent identity: \[\left\|z^{-1}l_{\tilde{\chi}}(\mathcal{G}_{\Sigma}(l_{\tilde{\nu}})-\mathcal{G}_{\Sigma}(l_{\tilde{\chi}}))\right\|\leq|z^{-1}l_{\tilde{\chi}}(z)|\left\|\mathcal{G}_{\Sigma}(l_{\tilde{\nu}})\right\||l_{\tilde{\nu}}-l_{\tilde{\chi}}|\left\|\mathcal{G}_{\Sigma}(l_{\tilde{\chi}})\right\|\leq\frac{1}{|z|}O\bigg(\frac{|z|}{\Im(z)}\bigg)\frac{O_{z}(\epsilon_{n})}{\Im(z)^{2}}\leq O_{z}(\epsilon_{n}).\] Finally for the third term, \(\left\|\mathcal{G}_{\Sigma}(l_{\tilde{\chi}})-\mathbf{H}(l_{\tilde{\chi}})\right\|\leq O_{z}(\epsilon_{n}^{\prime})\) using Lemma 4.9, thus: \(\left\|z^{-1}l_{\tilde{\chi}}(\mathcal{G}_{\Sigma}(l_{\tilde{\chi}})-\mathbf{H}(l_{\tilde{\chi}}))\right\|\leq\dfrac{1}{|z|}O\bigg(\dfrac{|z|}{\Im(z)}\bigg)O_{z}(\epsilon_{n}^{\prime})\leq O_{z}(\epsilon_{n}^{\prime}).\) **Corollary 4.12**.: _The map \(\Sigma\mapsto\mathbf{G}_{\boxtimes}^{\Sigma}(z)\) is \(O_{z}(1)\) Lipschitz with respect to the spectral norm._ Proof.: Using a resolvent identity we have: \(\left\|\mathcal{G}_{\Sigma}(z)-\mathcal{G}_{\Sigma^{\prime}}(z)\right\|\leq\left\|\mathcal{G}_{\Sigma}(z)\right\|\left\|\Sigma^{\prime}-\Sigma\right\|\left\|\mathcal{G}_{\Sigma^{\prime}}(z)\right\|\leq\dfrac{\left\|\Sigma-\Sigma^{\prime}\right\|}{\Im(z)^{2}}\). In particular \(|g_{\Sigma}(z)-g_{\Sigma^{\prime}}(z)|\leq\left\|\mathcal{G}_{\Sigma}(z)-\mathcal{G}_{\Sigma^{\prime}}(z)\right\|\leq O_{z}\big(\left\|\Sigma-\Sigma^{\prime}\right\|\big)\). The result follows from Theorem 4.7 and Proposition 4.11. ### Concentration of deterministic equivalents built from random matrices If \(\Sigma\) is random and satisfies a typical \(\mathcal{E}(1/\sqrt{n})\) Lipschitz concentration property, from the approximations \(\mathbb{E}[g_{\Sigma}(z)]\approx g_{\tau}(z)\) and \(\mathbb{E}[\mathcal{G}_{\Sigma}(z)]\approx\mathbf{H}(z)\), we may deduce that \(g_{\Sigma}(z)\approx g_{\tau}(z)\) a.s., but we cannot expect that \(\mathcal{G}_{\Sigma}(z)\approx\mathbf{H}(z)\) since \(\mathcal{G}_{\Sigma}(z)\) and \(\mathbb{E}[\mathcal{G}_{\Sigma}(z)]\) are not necessarily close in spectral norm. We can however prove that \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\) is linearly concentrated around \(\mathbf{K}(z)\).
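As a quick sanity check of the deterministic equivalents at play in this section, one can simulate a sample covariance matrix with i.i.d. Gaussian rows and compare its Stieltjes transform with the normalized trace of \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\), as predicted by Theorem 4.5. The sketch below is our own illustration, reusing the heuristic fixed-point solver given after Assumptions 4.2; the sizes and the toy \(\Sigma\) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 800, 1600
gamma, z = n / d, 1.0 + 0.5j

Sigma = np.diag(np.linspace(0.5, 2.5, n))         # toy population covariance
Y = rng.standard_normal((d, n)) @ np.sqrt(Sigma)  # entrywise sqrt is fine: Sigma is diagonal
K = Y.T @ Y / d

# empirical Stieltjes transform g_K(z) = (1/n) Tr (K - z I_n)^{-1}
g_K = np.mean(1.0 / (np.linalg.eigvalsh(K) - z))

g, G = det_equivalent(Sigma, gamma, z)            # solver sketched in Section 4
print(abs(g_K - np.trace(G) / n))                 # expected small by Theorem 4.5
```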
**Proposition 4.13**.: _Let \(\Sigma\in\mathbb{R}^{n\times n}\) be random positive semi-definite matrices such that \(\Sigma\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1/\sqrt{n})\). Then:_ 1. \(g_{\nu}(z)\propto\mathcal{E}(O_{z}(1/n))\) _and_ \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\propto_{\left\|\cdot\right\|}\mathcal{E}(O_{z}(1/\sqrt{n}))\)_._ 2. _If_ \(\tau\) _are deterministic probability measures supported on_ \(\mathbb{R}^{+}\) _such that_ \(|\mathbb{E}[g_{\Sigma}(z)]-g_{\tau}(z)|\leq O_{z}(\epsilon_{n})\)_, then_ \(|g_{\nu}(z)-g_{\chi}(z)|\leq O_{z}(\epsilon_{n}+\sqrt{\log n}/n)\) _a.s._ 3. _If in addition_ \(\mathbf{H}:\mathbb{C}^{+}\to\mathbb{C}^{n\times n}\) _are deterministic complex functions such that_ \(\left\|\mathbb{E}[\mathcal{G}_{\Sigma}(z)]-\mathbf{H}(z)\right\|\leq O_{z}(\epsilon_{n}^{\prime})\)_, then_ \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\in_{\left\|\cdot\right\|}\mathbf{K}(z)\pm\mathcal{E}(O_{z}(\epsilon_{n}+\epsilon_{n}^{\prime}+1/\sqrt{n}))\)_._ Proof.: We refer to Proposition 2.2 for the properties of Lipschitz and linear concentration used in this proof. The map \(\Sigma\mapsto\mathbf{G}_{\boxtimes}^{\Sigma}\) is \(O_{z}(1)\) Lipschitz with respect to the spectral norm, thus \(\mathbf{G}_{\boxtimes}^{\Sigma}\propto_{\left\|\cdot\right\|}\mathcal{E}(O_{z}(1/\sqrt{n}))\). Remembering the identity \(\operatorname{Tr}\mathbf{G}_{\boxtimes}^{\Sigma}=ng_{\nu}\), we deduce first that \(g_{\nu}\propto\mathcal{E}(O_{z}(1/\sqrt{n}))\); the improved rate \(g_{\nu}\propto\mathcal{E}(O_{z}(1/n))\) follows by combining the stability of Theorem 4.7 with the Lipschitz bound \(|g_{\Sigma}(z)-g_{\Sigma^{\prime}}(z)|\leq\left\|\Sigma-\Sigma^{\prime}\right\|_{F}/\big(\sqrt{n}\,\Im(z)^{2}\big)\) established in the proof of Proposition 3.10, which shows that \(\Sigma\mapsto g_{\nu}\) is \(O_{z}(1/\sqrt{n})\) Lipschitz with respect to \(\left\|\cdot\right\|_{F}\). The map \(\Sigma\mapsto\mathcal{G}_{\Sigma}\) is \(1/\Im(z)^{2}\) Lipschitz, thus \(\mathcal{G}_{\Sigma}\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(O_{z}(1/\sqrt{n}))\) and \(g_{\Sigma}\propto\mathcal{E}(O_{z}(1/n))\). We deduce that \(|\mathbb{E}[g_{\Sigma}]-g_{\Sigma}|\leq O_{z}(\sqrt{\log n}/n)\) a.s., hence \(|g_{\Sigma}-g_{\tau}|\leq O_{z}(\epsilon_{n}+\sqrt{\log n}/n)\) a.s. We can apply Theorem 4.7 uniformly in this set of full measure, and obtain that \(|g_{\nu}-g_{\chi}|\leq O_{z}(\epsilon_{n}+\sqrt{\log n}/n)\) a.s. In the decomposition: \[\mathbf{G}_{\boxtimes}^{\Sigma}(z)-\mathbf{K}(z)=z^{-1}(l_{\tilde{\nu}}-l_{\tilde{\chi}})\mathcal{G}_{\Sigma}(l_{\tilde{\nu}})+z^{-1}l_{\tilde{\chi}}(\mathcal{G}_{\Sigma}(l_{\tilde{\nu}})-\mathcal{G}_{\Sigma}(l_{\tilde{\chi}}))+z^{-1}l_{\tilde{\chi}}(\mathcal{G}_{\Sigma}(l_{\tilde{\chi}})-\mathbf{H}(l_{\tilde{\chi}})),\] the first two terms are bounded by \(O_{z}(\epsilon_{n}+\sqrt{\log n}/n)\leq O_{z}(\epsilon_{n}+1/\sqrt{n})\) a.s. in spectral norm (see the proof of Proposition 4.11). For the third term, \(\mathcal{G}_{\Sigma}(z)\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(O_{z}(1/\sqrt{n}))\) and \(\left\|\mathbb{E}[\mathcal{G}_{\Sigma}(z)]-\mathbf{H}(z)\right\|\leq O_{z}(\epsilon_{n}^{\prime})\), thus \(\mathcal{G}_{\Sigma}(z)\in_{\left\|\cdot\right\|}\mathbf{H}(z)\pm\mathcal{E}(O_{z}(\epsilon_{n}^{\prime}+1/\sqrt{n}))\). From Lemma 4.9, we also have \(\mathcal{G}_{\Sigma}(l_{\tilde{\chi}})\in_{\left\|\cdot\right\|}\mathbf{H}(l_{\tilde{\chi}})\pm\mathcal{E}(O_{z}(\epsilon_{n}^{\prime}+1/\sqrt{n}))\).
Finally \(|z^{-1}l_{\tilde{\chi}}|\leq 1/\Im(z)\), and combining the above estimates and concentration properties leads to \(\mathbf{G}_{\boxtimes}^{\Sigma}(z)\in_{\left\|\cdot\right\|}\mathbf{K}(z)\pm\mathcal{E}(O_{z}(\epsilon_{n}+\epsilon_{n}^{\prime}+1/\sqrt{n}))\). ## 5. Single-layer neural network with deterministic data ### Setting In this section we consider the Conjugate Kernel matrix associated to a single-layer artificial neural network with deterministic input. The model is made of: * a random weight matrix \(W\in\mathbb{R}^{d\times d_{0}}\), with variance parameter \(\sigma_{W}^{2}>0\), * a deterministic data matrix \(X\in\mathbb{R}^{d_{0}\times n}\), with variance parameter \(\sigma_{X}^{2}>0\), * two random bias matrices \(B\) and \(D\in\mathbb{R}^{d\times n}\), with variance parameters \(\sigma_{B}^{2},\sigma_{D}^{2}\geq 0\), * and an activation function \(f:\mathbb{R}\to\mathbb{R}\). As output of the network, we set \(Y=f(WX/\sqrt{d_{0}}+B)+D\in\mathbb{R}^{d\times n}\), where the function \(f\) is applied entry-wise. Our goal is to investigate the spectral properties of the Conjugate Kernel matrix \(K=Y^{\top}Y/d\). For \(z\in\mathbb{C}^{+}\) we define its resolvent \(\mathcal{G}_{K}(z)=\left(K-zI_{n}\right)^{-1}\) and Stieltjes transform \(g_{K}(z)=(1/n)\operatorname{Tr}\mathcal{G}_{K}(z)\). We also define the following objects: \[\tilde{\sigma}^{2}=\sigma_{W}^{2}\sigma_{X}^{2}+\sigma_{B}^{2},\qquad\tilde{f}(t)=f(\tilde{\sigma}t),\] \[\mathfrak{a}=\big\|\tilde{f}\big\|_{\mathcal{H}}^{2}-\frac{\sigma_{W}^{2}\sigma_{X}^{2}}{\tilde{\sigma}^{2}}\zeta_{1}(\tilde{f})^{2}+\sigma_{D}^{2},\qquad\mathfrak{b}=\zeta_{1}(\tilde{f})^{2}\frac{\sigma_{W}^{2}}{\tilde{\sigma}^{2}},\] \[K_{X}=X^{\top}X/d_{0},\qquad\Delta_{X}=K_{X}-\sigma_{X}^{2}I_{n},\qquad\Sigma=\mathbb{E}[K],\qquad\Sigma_{\mathrm{lin}}=\mathfrak{a}I_{n}+\mathfrak{b}K_{X}.\] _Remark 5.1_.: We always have \(\mathfrak{a}\geq 0\) since \(\zeta_{1}(\tilde{f})^{2}\leq\big\|\tilde{f}\big\|_{\mathcal{H}}^{2}\) and \(\sigma_{W}^{2}\sigma_{X}^{2}\leq\tilde{\sigma}^{2}\). Moreover \(\mathfrak{a}=0\) if and only if \(f\) is a linear function and there is no bias in the model (that is \(\sigma_{B}^{2}=\sigma_{D}^{2}=0\)). Like in the rest of this article, we sometimes omit the indices \(n\) and \(z\) for a better readability, even if we are implicitly dealing with sequences of matrices, measures, and complex functions. **Assumptions 5.2**.: 1. \(W\), \(B\) and \(D\) are random, independent, with i.i.d. \(\mathcal{N}(\sigma_{W}^{2})\), \(\mathcal{N}(\sigma_{B}^{2})\) and \(\mathcal{N}(\sigma_{D}^{2})\) entries respectively. 2. \(\tilde{f}\) is Lipschitz continuous and Gaussian centered, that is \(\mathbb{E}\big[\tilde{f}(\mathcal{N})\big]=\mathbb{E}[f(\tilde{\sigma}\mathcal{N})]=0\). 3. \(X\) is deterministic and \(\big\|X/\sqrt{d_{0}}\big\|\) is bounded. 4. \(\big\|\vec{\operatorname{diag}}(\Delta_{X})\big\|\) is bounded and \(\left\|\Delta_{X}\right\|_{\max}\) converges to \(0\). 5. The ratio \(\gamma_{n}=\frac{n}{d}\) is bounded from above and away from \(0\). We refer to Remark 5.8 for a detailed discussion about the assumption (4). The main result of this section is Theorem 5.7 that gives a deterministic equivalent for \(\mathcal{G}_{K}(z)\) and \(g_{K}(z)\). To prove this result, we will combine the general results on resolvent matrices recalled in Section 4, with the linearization techniques of Section 3. We will wrap up this section by applying our framework to a simple yet original model having weakly correlated entries.
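To make the setting concrete, here is a minimal simulation sketch of our own; all sizes, variances and the choice of a partial isometry as data are illustrative. With \(K_{X}=\sigma_{X}^{2}I_{n}\) the matrix \(\Delta_{X}\) vanishes, and Remark 5.6 below predicts that the spectrum of \(K\) follows a Marcenko-Pastur law rescaled by \(\sigma_{Y}^{2}=\mathfrak{a}+\mathfrak{b}\sigma_{X}^{2}\). The Hermite data of the centered ReLU used below (\(\|\tilde{f}\|_{\mathcal{H}}^{2}=\tilde{\sigma}^{2}(1/2-1/(2\pi))\) and \(\zeta_{1}(\tilde{f})^{2}=\tilde{\sigma}^{2}/4\)) come from standard Gaussian computations.

```python
import numpy as np

rng = np.random.default_rng(1)
d0, d, n = 2000, 1500, 1000
gamma = n / d
sW, sX, sB, sD = 1.0, 1.0, 0.5, 0.3
s2 = sW**2 * sX**2 + sB**2                       # tilde sigma squared

# ReLU recentered so that E[f(tilde_sigma * N)] = 0, as in Assumption 5.2(2)
f = lambda t: np.maximum(t, 0.0) - np.sqrt(s2 / (2.0 * np.pi))

# data close to orthogonal: K_X = sX^2 I_n, hence Delta_X = 0
Q, _ = np.linalg.qr(rng.standard_normal((d0, n)))
X = sX * np.sqrt(d0) * Q

Y = f(sW * rng.standard_normal((d, d0)) @ X / np.sqrt(d0)
      + sB * rng.standard_normal((d, n))) + sD * rng.standard_normal((d, n))
eigs = np.linalg.eigvalsh(Y.T @ Y / d)

# closed-form Hermite data of the centered ReLU: ||f~||^2 and zeta_1(f~)^2
norm2 = s2 * (0.5 - 1.0 / (2.0 * np.pi))
z1sq = s2 / 4.0
a = norm2 - (sW**2 * sX**2 / s2) * z1sq + sD**2  # the coefficient a of the text
b = z1sq * sW**2 / s2                            # the coefficient b of the text
sY2 = a + b * sX**2                              # scale of the limiting MP law

# rescaled Marcenko-Pastur density with shape gamma (cf. Remark 5.6)
lm, lp = sY2 * (1 - np.sqrt(gamma))**2, sY2 * (1 + np.sqrt(gamma))**2
xs = np.linspace(lm, lp, 402)[1:-1]
mp = np.sqrt((lp - xs) * (xs - lm)) / (2.0 * np.pi * gamma * sY2 * xs)
# a histogram of `eigs` (density=True) should match the curve (xs, mp)
```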
### Technicalities and linearization of \(\Sigma\) **Proposition 5.3**.: _Under Assumptions 5.2, \(Y\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\). The rows of \(Y\) are i.i.d. sampled from the distribution of a random vector \(y=f(X^{\top}w/\sqrt{d_{0}}+b)+\tilde{b}\), where \(w\in\mathbb{R}^{d_{0}}\) and \(b,\tilde{b}\in\mathbb{R}^{n}\) are independent Gaussian vectors with i.i.d. \(\mathcal{N}(\sigma_{W}^{2})\), \(\mathcal{N}(\sigma_{B}^{2})\) and \(\mathcal{N}(\sigma_{D}^{2})\) coordinates respectively. \(\left\|\mathbb{E}[y]\right\|\) is moreover bounded._ Proof.: The map \((W,B,D)\mapsto f(WX/\sqrt{d_{0}}+B)+D\) is Lipschitz with respect to the product Frobenius norm since \(f\) is Lipschitz and \(\big\|X/\sqrt{d_{0}}\big\|\) is bounded. Given the Gaussian concentration \((W,B,D)\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\), we immediately obtain that \(Y\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\). From the expression: \[Y_{ij}=f\Bigg(\sum_{k=1}^{d_{0}}W_{ik}X_{kj}/\sqrt{d_{0}}+B_{ij}\Bigg)+D_{ij},\] we see that the rows of \(Y\) are independent and have the same distribution as \(y=f\big(X^{\top}w/\sqrt{d_{0}}+b\big)+\tilde{b}\). For the last statement, we may write \(y=\tilde{f}(u)+\tilde{b}\), where \(u\) is a centered Gaussian vector with covariance matrix \(S=\big(\sigma_{W}^{2}K_{X}+\sigma_{B}^{2}I_{n}\big)/\tilde{\sigma}^{2}\). We have \(S-I_{n}=(\sigma_{W}^{2}/\tilde{\sigma}^{2})\Delta_{X}\), hence the random variables \(u_{i}\) are centered Gaussian with variance \(1+O((\Delta_{X})_{ii})\). Since \(\zeta_{0}(\tilde{f})=\mathbb{E}[f(\tilde{\sigma}\mathcal{N})]=0\), using the first order expansion given by Corollary 3.4 applied to the function \(\tilde{f}\), we have \(\mathbb{E}\big[\tilde{f}(u_{i})\big]=O((\Delta_{X})_{ii})\) uniformly on \(i\in\llbracket 1,n\rrbracket\). We deduce that \(\left\|\mathbb{E}[y]\right\|=\big\|\mathbb{E}\big[\tilde{f}(u)\big]\big\|=O\Big(\big\|\vec{\operatorname{diag}}(\Delta_{X})\big\|\Big)=O(1)\). **Corollary 5.4**.: _Under Assumptions 5.2, \(\left\|\Sigma\right\|\) is bounded, and moreover:_ \[\left\|\Sigma-\Sigma_{\mathrm{lin}}\right\|\leq O\big(\left\|\Delta_{X}\right\|_{\max}+n\zeta_{2}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{2}+n\zeta_{3}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{3}\big),\] \[|g_{\Sigma}(z)-g_{\Sigma_{\mathrm{lin}}}(z)|\leq O\big(\left\|\Delta_{X}\right\|_{\max}+\sqrt{n}\zeta_{2}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{2}+\sqrt{n}\zeta_{3}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{3}\big).\] Proof.: As seen in the last Proposition, the rows of \(Y\) are i.i.d. sampled from the distribution of a vector \(y=\tilde{f}(u)+\tilde{b}\) where \(u\) is a centered Gaussian vector with covariance matrix \(S=I_{n}+\Delta\), and \(\Delta=(\sigma_{W}^{2}/\tilde{\sigma}^{2})\Delta_{X}\).
The Assumptions 3.8 are satisfied, and we deduce from Proposition 3.10 that \(\left\|\Sigma-\Sigma_{\mathrm{lin}}\right\|\) is bounded. Since \(\left\|K_{X}\right\|\) is bounded and \(\Sigma_{\mathrm{lin}}=\mathfrak{a}I_{n}+\mathfrak{b}K_{X}\), \(\left\|\Sigma\right\|\) is also bounded. From the same proposition, we have the following estimates in spectral norm, with \(\epsilon_{n}=\left\|\Delta_{X}\right\|_{\max}+n\zeta_{2}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{2}+n\zeta_{3}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{3}\): \[\mathbb{E}\Big[\tilde{f}(u)\tilde{f}(u)^{\top}\Big]=\big\|\tilde{f}\big\|_{\mathcal{H}}^{2}I_{n}+\zeta_{1}(\tilde{f})^{2}\Delta+O(\epsilon_{n})=\big\|\tilde{f}\big\|_{\mathcal{H}}^{2}I_{n}+\zeta_{1}(\tilde{f})^{2}(\sigma_{W}^{2}/\tilde{\sigma}^{2})\big(K_{X}-\sigma_{X}^{2}I_{n}\big)+O(\epsilon_{n}),\] \[\Sigma=\mathbb{E}\Big[yy^{\top}\Big]=\mathbb{E}\Big[\tilde{f}(u)\tilde{f}(u)^{\top}\Big]+\sigma_{D}^{2}I_{n}=\bigg(\big\|\tilde{f}\big\|_{\mathcal{H}}^{2}-\frac{\sigma_{W}^{2}\sigma_{X}^{2}}{\tilde{\sigma}^{2}}\zeta_{1}(\tilde{f})^{2}+\sigma_{D}^{2}\bigg)I_{n}+\zeta_{1}(\tilde{f})^{2}\frac{\sigma_{W}^{2}}{\tilde{\sigma}^{2}}K_{X}+O(\epsilon_{n})=\Sigma_{\mathrm{lin}}+O(\epsilon_{n}).\] The proof for the Stieltjes transforms is similar. ### Propagation of the approximate orthogonality The contents of this paragraph will not be useful in this section, but rather in later stages of the article to study multi-layer networks by induction. Similar results may be found in [10, Section D] under the name of propagation of approximate orthogonality (see Remark 5.8 for more details on this concept), and proved by slightly different means. **Lemma 5.5**.: _Let \(\sigma_{Y}^{2}=\big\|\tilde{f}\big\|_{\mathcal{H}}^{2}+\sigma_{D}^{2}\), and \(\Delta_{Y}=K-\sigma_{Y}^{2}I_{n}=Y^{\top}Y/d-\sigma_{Y}^{2}I_{n}\). Under Assumptions 5.2, there exists an event \(\mathcal{B}\) with \(\mathbb{P}(\mathcal{B}^{c})\leq ce^{-n/c}\) for some constant \(c>0\), such that uniformly on \(\mathcal{B}\), \(\big\|\vec{\operatorname{diag}}(\Delta_{Y})\big\|\) and \(\left\|K\right\|\) are bounded, and \(\left\|\Delta_{Y}\right\|_{\max}\leq O\Big(\left\|\Delta_{X}\right\|_{\max}+\sqrt{\log n/n}\Big)\)._ Proof.: We have \(\left\|\mathbb{E}[Y]\right\|^{2}\leq\left\|\mathbb{E}[Y]\right\|_{F}^{2}\leq d\left\|\mathbb{E}[y]\right\|^{2}\leq O(n)\). From Proposition 2.3, there exists a constant \(c>0\) and an event \(\mathcal{B}\) with \(\mathbb{P}(\mathcal{B}^{c})\leq ce^{-n/c}\), such that \((K|\mathcal{B})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1/\sqrt{n})\), and \(\left\|Y\right\|\leq 2c\sqrt{n}\) on \(\mathcal{B}\), thus \(\left\|K\right\|\) is bounded on \(\mathcal{B}\). We will check the other statements in expectation first, and in a second stage use concentration to obtain bounds on the random objects. For any \(i\in\llbracket 1,n\rrbracket\), the entries of the \(i\)-th column of \(Y\) have the same distribution as \(y_{i}=\tilde{f}(u_{i})+\tilde{b}_{i}\), where \(u_{i}\) and \(\tilde{b}_{i}\) are centered Gaussian variables, independent, with variances \(1+O((\Delta_{X})_{ii})\) and \(\sigma_{D}^{2}\) respectively.
Using the first order expansion given by Corollary 3.4 applied to the function \(\tilde{f}^{2}\), uniformly on \(i\in\llbracket 1,n\rrbracket\) we have: \[\mathbb{E}\big[y_{i}^{2}\big]=\mathbb{E}\Big[\big(\tilde{f}(u_{i})+\tilde{b}_{i}\big)^{2}\Big]=\zeta_{0}(\tilde{f}^{2})+O((\Delta_{X})_{ii})+\sigma_{D}^{2}=\big\|\tilde{f}\big\|_{\mathcal{H}}^{2}+\sigma_{D}^{2}+O((\Delta_{X})_{ii})=\sigma_{Y}^{2}+O((\Delta_{X})_{ii}).\] We deduce that \(\Big\|\mathbb{E}\Big[\vec{\operatorname{diag}}(\Delta_{Y})\Big]\Big\|^{2}\leq\sum_{i=1}^{n}O\big((\Delta_{X})_{ii}^{2}\big)\leq O\Big(\big\|\vec{\operatorname{diag}}(\Delta_{X})\big\|^{2}\Big)\leq O(1)\). Since \(\Big(\vec{\operatorname{diag}}(\Delta_{Y})|\mathcal{B}\Big)\propto_{\left\|\cdot\right\|}\mathcal{E}(1/\sqrt{n})\), \(\Big\|\vec{\operatorname{diag}}(\Delta_{Y})-\mathbb{E}\Big[\vec{\operatorname{diag}}(\Delta_{Y})\Big]\Big\|\leq O(1)\) a.s. on \(\mathcal{B}\), and \(\Big\|\vec{\operatorname{diag}}(\Delta_{Y})\Big\|\leq O(1)\) a.s. on \(\mathcal{B}\). Finally \(\left\|K-\Sigma\right\|_{\max}\leq O\Big(\sqrt{\log n/n}\Big)\) a.s. on \(\mathcal{B}\) using the general properties of concentration (see Proposition 2.2(10)), and from Proposition 3.10: \[\left\|\Sigma-\sigma_{Y}^{2}I_{n}\right\|_{\max}\leq\left\|\Sigma-\Sigma_{\mathrm{lin}}\right\|_{\max}+O(\left\|\Delta_{X}\right\|_{\max})\leq O(\left\|\Delta_{X}\right\|_{\max}),\] which proves the result after a final triangular inequality. ### Deterministic equivalent and consequences We let \(\nu^{\Sigma_{\mathrm{lin}}}=\operatorname{MP}(\gamma_{n})\boxtimes\mu_{\Sigma_{\mathrm{lin}}}\), and we refer to the beginning of Section 4 for the definition of the deterministic equivalent matrix \(\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)\). _Remark 5.6_.: If \(\mathfrak{b}=0\), then \(\Sigma_{\mathrm{lin}}\) does not depend on \(X\) and the objects \(\nu^{\Sigma_{\mathrm{lin}}}\) and \(\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)\) are fully explicit. Indeed \(\Sigma_{\mathrm{lin}}=\mathfrak{a}I_{n}\), \(\mu_{\Sigma_{\mathrm{lin}}}=\delta_{\mathfrak{a}}\), and \(\nu^{\Sigma_{\mathrm{lin}}}=\operatorname{MP}(\gamma_{n})\boxtimes\delta_{\mathfrak{a}}=\mathfrak{a}\operatorname{MP}(\gamma_{n})\). Since \(\Sigma_{\mathrm{lin}}\) is a multiple of the identity matrix, so are \(\mathcal{G}_{\Sigma_{\mathrm{lin}}}(z)\) and \(\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}\), hence: \[\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)=\bigg(\frac{1}{n}\operatorname{Tr}\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)\bigg)I_{n}=g_{\mathfrak{a}\operatorname{MP}(\gamma_{n})}(z)I_{n}.\] Note that if \(f\neq 0\), then \(\mathfrak{a}\neq 0\) and \(g_{\mathfrak{a}\operatorname{MP}(\gamma_{n})}(z)=\frac{1}{\mathfrak{a}}g_{\operatorname{MP}(\gamma_{n})}(z/\mathfrak{a})\). In the case where \(\mathfrak{b}\neq 0\), we can describe the deterministic equivalents as functions of \(X\).
Indeed \(\mu_{\Sigma_{\rm lin}}=\mathfrak{a}+\mathfrak{b}\mu_{K_{X}}\), \(\nu^{\Sigma_{\rm lin}}=\mathrm{MP}(\gamma_{n})\boxtimes(\mathfrak{a}+\mathfrak{b}\mu_{K_{X}})\), and since \(\mathcal{G}_{\Sigma_{\rm lin}}(z)=\frac{1}{\mathfrak{b}}\mathcal{G}_{K_{X}}\bigg{(}\frac{z-\mathfrak{a}}{\mathfrak{b}}\bigg{)}\): \[\mathbf{G}_{\boxtimes}^{\Sigma_{\rm lin}}(z) =z^{-1}l_{\tilde{\nu}^{\Sigma_{\rm lin}}}(z)\mathcal{G}_{\Sigma_{\rm lin}}(l_{\tilde{\nu}^{\Sigma_{\rm lin}}}(z))\] \[=\frac{l_{\tilde{\nu}^{\Sigma_{\rm lin}}}(z)}{\mathfrak{b}z}\mathcal{G}_{K_{X}}\bigg{(}\frac{l_{\tilde{\nu}^{\Sigma_{\rm lin}}}(z)-\mathfrak{a}}{\mathfrak{b}}\bigg{)},\] where \(\tilde{\nu}^{\Sigma_{\rm lin}}=(1-\gamma_{n})\cdot\delta_{0}+\gamma_{n}\cdot\nu^{\Sigma_{\rm lin}}\) and \(l_{\tilde{\nu}^{\Sigma_{\rm lin}}}(z)=-1/g_{\tilde{\nu}^{\Sigma_{\rm lin}}}(z)\). For the next result let us denote: \[\epsilon_{n} =\frac{1}{n}+\left\|\Delta_{X}\right\|_{\max}+\sqrt{n}\zeta_{2}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{2}+\sqrt{n}\zeta_{3}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{3},\] \[\epsilon_{n}^{\prime} =\frac{1}{\sqrt{n}}+\left\|\Delta_{X}\right\|_{\max}+n\zeta_{2}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{2}+n\zeta_{3}(\tilde{f})^{2}\|\Delta_{X}\|_{\max}^{3}.\] **Theorem 5.7**.: _Uniformly under Assumptions 5.2, the following concentration properties hold true:_ 1. \(g_{K}(z)\propto\mathcal{E}(O_{z}(1/n))\) _and_ \(\mathcal{G}_{K}(z)\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}\big{(}O_{z}(1/\sqrt{n})\big{)}\)_._ 2. \(g_{K}(z)\in g_{\nu^{\Sigma_{\rm lin}}}(z)\pm\mathcal{E}(O_{z}(\epsilon_{n}))\) _and_ \(\mathcal{G}_{K}(z)\in_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right|\kern-1.075pt\right|}\mathbf{G}_{\boxtimes}^{\Sigma_{\rm lin}}(z)\pm\mathcal{E}\big{(}O_{z}(\epsilon_{n}^{\prime})\big{)}\)_._ _Remark 5.8_.: Assumption 5.2(4) states roughly that the data matrix is close to being an orthogonal matrix (up to rescaling), a notion that was first introduced in [13] under the name of approximate orthogonality. If this is true, the Conjugate Kernel model may be compared to a classical model derived from matrices with i.i.d. entries. To clarify this point, the assumption on \(\left\|\Delta_{X}\right\|_{\max}\) measures how much the entries of \(Y\) are dependent and how far they are from being standard Gaussian random variables individually. The additional bound on \(\left\|\widetilde{\mathrm{diag}}(\Delta_{X})\right\|\) ensures that the latter phenomenon occurs somewhat uniformly. In order for the deterministic equivalents to be meaningful, we need \(\epsilon_{n}\) to converge to \(0\) for the Stieltjes transforms, and \(\epsilon_{n}^{\prime}\) to converge to \(0\) for the resolvent matrices. This is a stronger assumption that depends both on the data matrix \(X\) and on the activation function \(f\). More precisely, \(K_{X}\) should not be too far from \(I_{n}\) entry-wise, to an extent that also depends on how far \(f\) is from acting linearly on Gaussians. In practice, for the Stieltjes transforms and a general activation function \(f\) we need \(\left\|\Delta_{X}\right\|_{\max}=o\Big{(}n^{-1/4}\Big{)}\). If \(\zeta_{2}(\tilde{f})=0\), which happens for instance if \(f\) is odd symmetric, this convergence can be relaxed to \(\left\|\Delta_{X}\right\|_{\max}=o\Big{(}n^{-1/6}\Big{)}\). If \(\zeta_{2}(\tilde{f})=\zeta_{3}(\tilde{f})=0\), no additional hypothesis is required as \(\epsilon_{n}=1/n+\left\|\Delta_{X}\right\|_{\max}\) converges already to \(0\).
For the resolvent matrices, the equivalent statements boil down to \(\left\|\Delta_{X}\right\|_{\max}=o\Big{(}n^{-1/2}\Big{)}\) in general, \(\left\|\Delta_{X}\right\|_{\max}=o\Big{(}n^{-1/3}\Big{)}\) if \(\zeta_{2}(\tilde{f})=0\), and no additional hypothesis if \(\zeta_{2}(\tilde{f})=\zeta_{3}(\tilde{f})=0\). Proof of Theorem 5.7.: \(Y\) satisfies the Assumptions 4.2 as seen in Proposition 5.3. We deduce from Theorem 4.5 the Lipschitz concentration properties (1) and the linear concentration properties \(\mathcal{G}_{K}(z)\in_{\left\|\cdot\right\|_{F}}\mathbf{G}_{\boxtimes}^{\Sigma}(z)\pm\mathcal{E}\big{(}O_{z}(1/\sqrt{n})\big{)}\) and \(g_{K}(z)\in g_{\nu^{\Sigma}}(z)\pm\mathcal{E}(O_{z}(1/n))\). Moreover from Theorem 4.7 and Corollaries 5.4 and 4.12: \[\left|g_{\nu^{\Sigma}}(z)-g_{\nu^{\Sigma_{\rm lin}}}(z)\right| \leq O_{z}(\left|g_{\Sigma}(z)-g_{\Sigma_{\rm lin}}(z)\right|)\leq O_{z}(\epsilon_{n}),\] \[\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbf{G}_{\boxtimes}^{\Sigma}(z)-\mathbf{G}_{\boxtimes}^{\Sigma_{\rm lin}}(z)\right|\kern-1.075pt\right|\kern-1.075pt\right| \leq O_{z}\big{(}\left|\kern-1.075pt\left|\kern-1.075pt\left|\Sigma-\Sigma_{\rm lin}\right|\kern-1.075pt\right|\kern-1.075pt\right|\big{)}\leq O_{z}(\epsilon_{n}^{\prime}),\] which imply the linear concentration properties (2). Similarly to Corollary 4.6, we may deduce from the deterministic equivalents the following spectral properties: **Corollary 5.9**.: _Uniformly under Assumptions 5.2:_ 1. \(\left|g_{K}(z)-g_{\nu^{\Sigma_{\rm lin}}}(z)\right|\leq\sqrt{\log n}\,O_{z}(\epsilon_{n})\) _a.s. and_ \(\left\|\mathcal{G}_{K}(z)-\mathbf{G}_{\boxtimes}^{\Sigma_{\rm lin}}(z)\right\|_{\max}\leq\sqrt{\log n}\,O_{z}(\epsilon_{n}^{\prime})\) _a.s._ 2. _If_ \(f\) _is not linear, or if_ \(f\) _is linear and the eigenvalues of_ \(K_{X}\) _are bounded from below, there exists_ \(\theta>0\) _such that_ \(D(\mu_{K},\nu^{\Sigma_{\rm lin}})\leq O(\epsilon_{n}^{\theta})\) _a.s._ 3. _If moreover_ \(\mu_{K_{X}}\) _converges weakly to a measure_ \(\mu_{\infty}\) _and if_ \(\gamma_{n}\to\gamma_{\infty}\)_, then_ \(\mu_{K}\) _converges weakly to_ \(\nu_{\infty}=\operatorname{MP}(\gamma_{\infty})\boxtimes(\mathfrak{a}+\mathfrak{b}\mu_{\infty})\)_, and more precisely:_ \[D(\mu_{K},\nu_{\infty})\leq O\Big{(}D(\mu_{K_{X}},\mu_{\infty})+\left|\gamma_{n}-\gamma_{\infty}\right|+\epsilon_{n}^{\theta}\Big{)}\quad\text{a.s.}\] Proof.: By definition of the \(O_{z}(\epsilon_{n}^{\prime})\) notation, we may find \(\alpha>0\) such that uniformly in \(z\in\mathbb{C}^{+}\) with bounded \(\Im(z)\), \(\mathcal{G}_{K}(z)\in_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right|\kern-1.075pt\right|}\mathbf{G}_{\boxtimes}^{\Sigma_{\rm lin}}(z)\pm\mathcal{E}\big{(}O(\epsilon_{n}^{\prime}(z))\big{)}\) where \(\epsilon_{n}^{\prime}(z)=\epsilon_{n}^{\prime}\frac{|z|^{\alpha}}{\Im(z)^{2\alpha}}\). Let us now fix \(z\in\mathbb{C}^{+}\). The maps \(M\mapsto M_{ij}\) are linear and \(1\)-Lipschitz with respect to the spectral norm.
By definition of the linear concentration, there is a constant \(C>0\) such that for any \(n,t,i\) and \(j\): \[\mathbb{P}\bigg{(}\bigg{|}\Big{(}\mathcal{G}_{K}(z)-\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)\Big{)}_{ij}\bigg{|}\geq t\bigg{)}\leq Ce^{-\frac{t^{2}}{C\epsilon_{n}^{\prime}(z)^{2}}}.\] We choose \(t_{n}=\epsilon_{n}^{\prime}(z)\sqrt{4C\log n}\), and we use a union bound: \[\mathbb{P}\Big{(}\Big{\|}\mathcal{G}_{K}(z)-\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)\Big{\|}_{\max}\geq t_{n}\Big{)}\leq n^{2}Ce^{-\frac{t_{n}^{2}}{C\epsilon_{n}^{\prime}(z)^{2}}}=Ce^{2\log n-4\log n}=C/n^{2}.\] These probabilities are summable, so using the Borel-Cantelli lemma: \[\Big{\|}\mathcal{G}_{K}(z)-\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)\Big{\|}_{\max}\leq t_{n}\leq O\Big{(}\sqrt{\log n}\,\epsilon_{n}^{\prime}(z)\Big{)}\quad\text{a.s.}\] We deduce that \(\Big{\|}\mathcal{G}_{K}(z)-\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)\Big{\|}_{\max}\leq\sqrt{\log n}\,O_{z}(\epsilon_{n}^{\prime})\) a.s. The proof for the Stieltjes transforms is similar. If \(f\) is not linear, or if \(f\) is linear and the eigenvalues of \(K_{X}\) are bounded from below, then the eigenvalues of \(\Sigma_{\mathrm{lin}}\) are bounded from below. In this case, for values of \(\gamma_{n}\leq 1\) the cumulative distribution functions \(\mathcal{F}_{\nu^{\Sigma_{\mathrm{lin}}}}\) are uniformly Holder continuous with exponent \(\beta=1/2\), and for values of \(\gamma_{n}\geq 1\) the cumulative distribution functions \(\mathcal{F}_{\tilde{\nu}^{\Sigma_{\mathrm{lin}}}}\) are uniformly Holder continuous with exponent \(\beta=1/2\) (see [1, Section 8.3] and Remark 2.9). In both cases, from Proposition 2.8 we deduce the second assertion. Finally for the third assertion, using the properties of free multiplicative convolution [1, Proposition 4.13] we deduce that \(\nu^{\Sigma_{\mathrm{lin}}}\) converges weakly to \(\nu_{\infty}\), and that: \[D(\nu^{\Sigma_{\mathrm{lin}}},\nu_{\infty}) \leq D(\mathrm{MP}(\gamma_{n}),\mathrm{MP}(\gamma_{\infty}))+D(\mu_{\Sigma_{\mathrm{lin}}},\mathfrak{a}+\mathfrak{b}\mu_{\infty})\leq O(|\gamma_{n}-\gamma_{\infty}|+D(\mu_{K_{X}},\mu_{\infty})).\] ### Application to another model involving entry-wise operations In this paragraph we show how our framework applies to other models of random matrices, not strictly related to artificial neural networks. We consider \(U\in\mathbb{R}^{n\times n}\) a Gaussian random matrix filled with \(\mathcal{N}\) random variables, i.i.d. within the columns and weakly correlated within the rows. More precisely, we consider \(u\in\mathbb{R}^{n}\) a Gaussian random vector, centered, with covariance matrix: \[S=\begin{pmatrix}1&1/n&\dots&&1/n\\ 1/n&1&1/n&&\vdots\\ \vdots&1/n&\ddots&&\\ &&&\ddots&1/n\\ 1/n&\dots&&1/n&1\end{pmatrix}.\] We let \(U\) be the random matrix with i.i.d. columns sampled from the distribution of \(u\). Let \(f=\tanh\) be the hyperbolic tangent function, \(B\) and \(D\in\mathbb{R}^{n\times n}\) independent random matrices filled with i.i.d. \(\mathcal{N}\) random variables, and \(Y=f(U+B)+D\). We want to apply the contents of this section to study the spectral properties of the sample covariance matrix \(K=Y^{\top}Y/n\), its resolvent \(\mathcal{G}_{K}(z)=\left(K-zI_{n}\right)^{-1}\) and Stieltjes transform \(g_{K}(z)=(1/n)\mathrm{Tr}\,\mathcal{G}_{K}(z)\). Let us denote by \(J\in\mathbb{R}^{n\times n}\) the matrix whose entries are all equal to \(1\), so that \(S=I_{n}+(J-I_{n})/n\).
In law \(U=XW\) where \(W\in\mathbb{R}^{n\times n}\) is a matrix filled with i.i.d. \(\mathcal{N}\) entries, independent from the other sources of randomness, and \(X=S^{1/2}\). Up to a transposition in the independence structure, the model \(Y\) is equal in law to \(f(XW+B)+D\), and satisfies the Assumptions 5.2, with \(\Delta_{X}=(J-I_{n})/n\) and \(\sigma_{W}^{2}=\sigma_{X}^{2}=\sigma_{B}^{2}=\sigma_{D}^{2}=1\). Indeed \(\tilde{\sigma}^{2}=2\), \(\tilde{f}(t)=\tanh(\sqrt{2}t)\), and by odd symmetry of \(\tanh\) it is clear that \(\zeta_{0}(\tilde{f})=\zeta_{2}(\tilde{f})=0\). We also have \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\Delta_{X}\right|\kern-1.075pt\right|\kern-1.075pt\right|=1-1/n\), \(\widetilde{\mathrm{diag}}(\Delta_{X})=0\), \(\left\|\Delta_{X}\right\|_{\max}=1/n\) and \(\epsilon_{n}=O(1/n)\). The constants \(\mathfrak{a}\) and \(\mathfrak{b}\) are linked to Gaussian moments of the hyperbolic tangent function and may be numerically approximated. The matrix \(\Sigma_{\mathrm{lin}}=\mathfrak{a}I_{n}+\mathfrak{b}S=(\mathfrak{a}+\mathfrak{b})I_{n}+\mathfrak{b}(J-I_{n})/n\) is explicitly diagonalizable and \(\mu_{\Sigma_{\mathrm{lin}}}=\frac{n-1}{n}\cdot\delta_{\mathfrak{a}+\mathfrak{b}-\mathfrak{b}/n}+\frac{1}{n}\cdot\delta_{\mathfrak{a}+2\mathfrak{b}-\mathfrak{b}/n}\). The measure \(\nu^{\Sigma_{\mathrm{lin}}}=\tilde{\nu}^{\Sigma_{\mathrm{lin}}}=\mathrm{MP}(1)\boxtimes\mu_{\Sigma_{\mathrm{lin}}}\) is characterized by its Stieltjes transform \(g_{\nu^{\Sigma_{\mathrm{lin}}}}(z)\), which is the only solution \(g\in\mathbb{C}^{+}\) of the self-consistent equation: \[g =\int_{\mathbb{R}}\frac{1}{-zgt-z}\,\mu_{\Sigma_{\mathrm{lin}}}(dt)=\frac{1}{-zg(\mathfrak{a}+\mathfrak{b}-\mathfrak{b}/n)-z}+\frac{1}{n}\bigg{(}\frac{1}{-zg(\mathfrak{a}+2\mathfrak{b}-\mathfrak{b}/n)-z}-\frac{1}{-zg(\mathfrak{a}+\mathfrak{b}-\mathfrak{b}/n)-z}\bigg{)}.\] This equation may be rewritten as a cubic polynomial equation and solved explicitly. The deterministic equivalent matrix \(\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)\) may also be explicitly computed. The interested reader can check that: \[\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z) =-\frac{g_{\nu^{\Sigma_{\mathrm{lin}}}}(z)\mathfrak{b}}{(g_{\nu^{\Sigma_{\mathrm{lin}}}}(z)(\mathfrak{a}+\mathfrak{b}-\mathfrak{b}/n)+1)(g_{\nu^{\Sigma_{\mathrm{lin}}}}(z)(\mathfrak{a}+2\mathfrak{b}-\mathfrak{b}/n)+1)}\frac{J}{n}-\frac{1}{(g_{\nu^{\Sigma_{\mathrm{lin}}}}(z)(\mathfrak{a}+\mathfrak{b}-\mathfrak{b}/n)+1)}\frac{I_{n}}{z}.\] Theorem 5.7 applied to this model provides the following deterministic equivalents: \(g_{K}(z)\in g_{\nu^{\Sigma_{\mathrm{lin}}}}(z)\pm\mathcal{E}(O_{z}(1/n))\), and \(\mathcal{G}_{K}(z)\in_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right|\kern-1.075pt\right|}\mathbf{G}_{\boxtimes}^{\Sigma_{\mathrm{lin}}}(z)\pm\mathcal{E}\big{(}O_{z}(1/\sqrt{n})\big{)}\). Since \(\mu_{\Sigma_{\mathrm{lin}}}\) converges weakly to \(\delta_{\mathfrak{a}+\mathfrak{b}}\), \(\nu^{\Sigma_{\mathrm{lin}}}\) and \(\mu_{K}\) converge weakly to \(\nu_{\infty}=\mathrm{MP}(1)\boxtimes\delta_{\mathfrak{a}+\mathfrak{b}}=(\mathfrak{a}+\mathfrak{b})\mathrm{MP}(1)\), and \(D(\mu_{K},\nu^{\Sigma_{\mathrm{lin}}})\leq O(n^{-\theta})\) for some \(\theta>0\). To go further, Corollary 5.9(3) is not helpful since the measures are discrete. As a matter of fact, \(D(\mu_{\Sigma_{\mathrm{lin}}},\delta_{\mathfrak{a}+\mathfrak{b}})=1-1/n\) does not vanish.
However we can compare directly the Stieltjes transforms of \(\Sigma_{\mathrm{lin}}\) and \(\delta_{\mathfrak{a}+\mathfrak{b}}\): \[\left|g_{\Sigma_{\mathrm{lin}}}(z)-g_{\delta_{\mathfrak{a}+\mathfrak{b}}}(z)\right| =\left|\frac{1-1/n}{\mathfrak{a}+\mathfrak{b}-\mathfrak{b}/n-z}+\frac{1/n}{\mathfrak{a}+2\mathfrak{b}-\mathfrak{b}/n-z}-\frac{1}{\mathfrak{a}+\mathfrak{b}-z}\right|\leq O_{z}(1/n).\] From Theorem 4.7, we have \(\left|g_{\nu^{\Sigma_{\mathrm{lin}}}}(z)-g_{\nu_{\infty}}(z)\right|\leq O_{z}(1/n)\), hence \(D(\nu^{\Sigma_{\mathrm{lin}}},\nu_{\infty})\leq O(n^{-\theta})\) for some \(\theta>0\) from Proposition 2.8. We conclude that \(\mu_{K}\) converges to \(\nu_{\infty}\) at speed \(O(n^{-\theta})\) in Kolmogorov distance. ## 6. Single-layer neural network with random data ### Setting In this section we study the Conjugate Kernel matrix associated to a single-layer artificial neural network with random input. The hypotheses are thus the same as in the last section, except for the random data matrix. We consider: * a random weight matrix \(W\in\mathbb{R}^{d\times d_{0}}\), with variance parameter \(\sigma_{W}^{2}>0\), * a random data matrix \(X\in\mathbb{R}^{d_{0}\times n}\), with variance parameter \(\sigma_{X}^{2}>0\), * two random biases matrices \(B\) and \(D\in\mathbb{R}^{d\times n}\), \(\sigma_{B}^{2},\sigma_{D}^{2}\geq 0\), * and an activation function \(f:\mathbb{R}\to\mathbb{R}\). As output of the neuron, we set \(Y=f(WX/\sqrt{d_{0}}+B)+D\in\mathbb{R}^{d\times n}\), where the function \(f\) is applied entry-wise. Our goal is to investigate the spectral properties of the Conjugate Kernel matrix \(K=Y^{\top}Y/d\). For \(z\in\mathbb{C}^{+}\) we define its resolvent \(\mathcal{G}_{K}(z)=\left(K-zI_{n}\right)^{-1}\) and Stieltjes transform \(g_{K}(z)=(1/n)\mathrm{Tr}\,\mathcal{G}_{K}(z)\). We also define the following objects: \[\tilde{\sigma}^{2} =\sigma_{W}^{2}\sigma_{X}^{2}+\sigma_{B}^{2},\] \[\tilde{f}(t) =f(\tilde{\sigma}t),\] \[\mathfrak{a} =\left\|\tilde{f}\right\|_{\mathcal{H}}^{2}-\frac{\sigma_{W}^{2}\sigma_{X}^{2}}{\tilde{\sigma}^{2}}\zeta_{1}(\tilde{f})^{2}+\sigma_{D}^{2},\] \[\mathfrak{b} =\zeta_{1}(\tilde{f})^{2}\frac{\sigma_{W}^{2}}{\tilde{\sigma}^{2}},\] \[K_{X} =X^{\top}X/d_{0},\] \[\Delta_{X} =K_{X}-\sigma_{X}^{2}I_{n},\] \[\Sigma_{X} =\mathfrak{a}I_{n}+\mathfrak{b}K_{X}.\] Like in the rest of this article, we sometimes omit the indices \(n\) and \(z\) for a better readability, even if we are implicitly dealing with sequences of matrices, measures, and complex functions. For reasons that we will understand later, we do not make hypotheses on the random matrix \(X\) itself, but rather on \(X\) conditioned on a high probability event (see the definition before Proposition 2.3). **Assumptions 6.1**.: 1. \(W\), \(B\) and \(D\) are random, independent, with i.i.d. \(\mathcal{N}(\sigma_{W}^{2})\), \(\mathcal{N}(\sigma_{B}^{2})\) and \(\mathcal{N}(\sigma_{D}^{2})\) entries respectively. 2. \(\tilde{f}\) is Lipschitz continuous and Gaussian centered, that is \(\mathbb{E}\Big{[}\tilde{f}(\mathcal{N})\Big{]}=\mathbb{E}[f(\tilde{\sigma}\mathcal{N})]=0\). 3. \(X\) is random, independent from \(W\), \(B\) and \(D\). There is an event \(\mathcal{B}\) with \(\mathbb{P}(\mathcal{B}^{c})\leq O(\sqrt{\log n}/n)\), such that \((X|\mathcal{B})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\). 4.
There is a sequence \(\epsilon_{n}\) converging to \(0\), such that uniformly in \(\omega\in\mathcal{B}\), \(\left|\kern-1.075pt\left|\kern-1.075pt\left|K_{X}\right|\kern-1.075pt\right|\kern-1.075pt\right|\) and \(\left\|\widetilde{\mathrm{diag}}(\Delta_{X})\right\|\) are bounded, and \(\left\|\Delta_{X}\right\|_{\max}\leq O(\epsilon_{n})\). 5. The ratio \(\gamma_{n}=\frac{n}{d}\) is bounded from above and away from \(0\). 6. There is a sequence \(\hat{\epsilon}_{n}\geq 0\) such that \(\left|\mathbb{E}[g_{K_{X}}(z)]-g_{\tau}(z)\right|\leq O_{z}(\hat{\epsilon}_{n})\) for some sequence of measures \(\tau\) supported on \(\mathbb{R}^{+}\). 7. There is a sequence \(\hat{\epsilon}^{\prime}_{n}\geq\hat{\epsilon}_{n}\) such that \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}[\mathcal{G}_{K_{X}}(z)]-\mathbf{H}(z)\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq O_{z}(\hat{\epsilon}^{\prime}_{n})\) for some sequence of matrix functions \(\mathbf{H}:\mathbb{C}^{+}\to\mathbb{C}^{n\times n}\) satisfying \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbf{H}\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq 1/\Im(z)\). _Remark 6.2_.: Compared to Assumptions 5.2, in the assertions (3) and (4) we ask the data matrix \(X\) to be independent from the other random matrices, well concentrated, and uniformly approximately orthogonal on an event of high probability. Let us explain why \(\mathbb{P}(\mathcal{B}^{c})\leq O(\sqrt{\log n}/n)\) is a convenient choice to simplify our results. We claim that, up to a \(O_{z}(\sqrt{\log n}/n)\) error that will blend into similar terms, it will be equivalent for us to compute expectations on the whole probability set or on \(\mathcal{B}\). Indeed, if a random function \(\zeta:\mathbb{N}\times\mathbb{C}^{+}\to\mathbb{C}\) satisfies the _a priori_ bound \(|\zeta(n,z)|\leq 1/\Im(z)\), then we have: \[\left|\mathbb{E}_{\mathcal{B}}[\zeta]-\mathbb{E}[\zeta]\right| =\left|\mathbb{E}[\mathbf{1}_{\mathcal{B}}\zeta]/\mathbb{P}(\mathcal{B})-\mathbb{E}[\zeta]\right|\] \[=\frac{\left|\mathbb{E}[\zeta]\mathbb{P}(\mathcal{B}^{c})-\mathbb{E}[\mathbf{1}_{\mathcal{B}^{c}}\zeta]\right|}{\mathbb{P}(\mathcal{B})}\] \[\leq\frac{2\mathbb{P}(\mathcal{B}^{c})}{1-\mathbb{P}(\mathcal{B}^{c})}\frac{1}{\Im(z)}\] \[\leq O_{z}(\mathbb{P}(\mathcal{B}^{c}))\leq O_{z}(\sqrt{\log n}/n).\] The assertions (6) and (7) correspond to deterministic equivalents for the Stieltjes transform and the resolvent of \(K_{X}\). If \(\zeta_{1}(\tilde{f})=0\) then \(\mathfrak{b}=0\), and as seen in Remark 5.6 the deterministic equivalents will not depend on \(X\). In this case \(\tau\) and \(\mathbf{H}(z)\) will not appear in the results and the assertions (6) and (7) are in essence empty. ### Deterministic equivalent and consequences We let \(\nu^{\Sigma_{X}}=\operatorname{MP}(\gamma_{n})\boxtimes\mu_{\Sigma_{X}}\), and we refer to the beginning of Section 4 for the definition of the deterministic equivalent matrix \(\mathbf{G}_{\boxtimes}^{\Sigma_{X}}(z)\).
Similarly to the process used to express \(\mathbf{G}_{\boxtimes}^{\Sigma_{X}}(z)\) as a function of \(\mu_{K_{X}}\) and \(\mathcal{G}_{K_{X}}\) (see Remark 5.6), we define the objects: \[\chi =\operatorname{MP}(\gamma_{n})\boxtimes(\mathfrak{a}+\mathfrak{b}\tau),\] \[\check{\chi} =(1-\gamma_{n})\cdot\delta_{0}+\gamma_{n}\cdot\chi,\] \[l_{\check{\chi}}(z) =-1/g_{\check{\chi}}(z),\] \[\mathbf{K}(z) =\frac{z^{-1}l_{\check{\chi}}(z)}{\mathfrak{b}}\mathbf{H}\!\left(\frac{l_{\check{\chi}}(z)-\mathfrak{a}}{\mathfrak{b}}\right) \qquad\text{if }\mathfrak{b}\neq 0,\] \[\mathbf{K}(z) =g_{\mathfrak{a}\mathrm{MP}(\gamma_{n})}(z)I_{n} \qquad\text{if }\mathfrak{b}=0.\] **Lemma 6.3**.: _Under Assumptions 6.1, \(\left|\mathbb{E}[g_{\nu^{\Sigma_{X}}}(z)]-g_{\chi}(z)\right|\leq\zeta_{1}(\tilde{f})O_{z}(\hat{\epsilon}_{n}+\sqrt{\log n}/n)\), and \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}\!\left[\mathbf{G}_{\boxtimes}^{\Sigma_{X}}(z)\right]-\mathbf{K}(z)\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq\zeta_{1}(\tilde{f})O_{z}(\hat{\epsilon}^{\prime}_{n}+1/\sqrt{n})\)._ Proof.: If \(\zeta_{1}(\tilde{f})=0\), then \(\Sigma_{X}\) does not depend on \(X\), \(\mathbf{G}_{\boxtimes}^{\Sigma_{X}}(z)=\mathbf{K}(z)\) and \(\nu^{\Sigma_{X}}=\chi\). If \(\zeta_{1}(\tilde{f})\neq 0\), from Proposition 2.3 applied to \((X|\mathcal{B})\) there is a constant \(c>0\) and an event \(\mathcal{B}^{\prime}\subset\mathcal{B}\) with \(\mathbb{P}\!\left(\mathcal{B}^{\prime c}\right)\leq\mathbb{P}(\mathcal{B}^{c})+ce^{-n/c}\leq O(\sqrt{\log n}/n)\), on which \((\Sigma_{X}|\mathcal{B}^{\prime})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}\!\left(1/\sqrt{n}\right)\). As explained in the Remark 6.2, given the _a priori_ bounds on Stieltjes transforms and on the spectral norm of resolvent matrices, we may pass from expectations on \(\mathcal{B}^{\prime}\) to expectations on the full probability set at the cost of a \(O_{z}(\sqrt{\log n}/n)\) error term. We thus have: \[\left|\mathbb{E}_{\mathcal{B}^{\prime}}[g_{\Sigma_{X}}(z)]-g_{\mathfrak{a}+\mathfrak{b}\tau}(z)\right| =\left|\mathbb{E}[\mathbf{1}_{\mathcal{B}^{\prime}}(g_{\Sigma_{X}}(z)-g_{\mathfrak{a}+\mathfrak{b}\tau}(z))]\right|/\mathbb{P}(\mathcal{B}^{\prime})\leq O_{z}(\hat{\epsilon}_{n}+\sqrt{\log n}/n).\] Proposition 4.13 applied to \((\Sigma_{X}|\mathcal{B}^{\prime})\) implies that: \[\left|\mathbb{E}_{\mathcal{B}^{\prime}}[g_{\nu^{\Sigma_{X}}}(z)]-g_{\chi}(z)\right|\leq O_{z}(\hat{\epsilon}_{n}+\sqrt{\log n}/n),\] hence \(\left|\mathbb{E}[g_{\nu^{\Sigma_{X}}}(z)]-g_{\chi}(z)\right|\leq O_{z}(\hat{\epsilon}_{n}+\sqrt{\log n}/n)\).
Similarly for the resolvents: \[\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}_{\mathcal{B}^{\prime}}[\mathcal{G}_{\Sigma_{X}}(z)]-\frac{1}{\mathfrak{b}}\mathbf{H}\bigg{(}\frac{z-\mathfrak{a}}{\mathfrak{b}}\bigg{)}\right|\kern-1.075pt\right|\kern-1.075pt\right| \leq\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}\bigg{[}\frac{1}{\mathfrak{b}}\mathcal{G}_{K_{X}}\bigg{(}\frac{z-\mathfrak{a}}{\mathfrak{b}}\bigg{)}\bigg{]}-\frac{1}{\mathfrak{b}}\mathbf{H}\bigg{(}\frac{z-\mathfrak{a}}{\mathfrak{b}}\bigg{)}\right|\kern-1.075pt\right|\kern-1.075pt\right|+O_{z}(\sqrt{\log n}/n)\leq O_{z}(\hat{\epsilon}^{\prime}_{n}+\sqrt{\log n}/n),\] which implies by Proposition 4.13 that \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}_{\mathcal{B}^{\prime}}\Big{[}\mathbf{G}_{\boxtimes}^{\Sigma_{X}}(z)\Big{]}-\mathbf{K}(z)\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq O_{z}(\hat{\epsilon}_{n}+\hat{\epsilon}^{\prime}_{n}+1/\sqrt{n})\leq O_{z}(\hat{\epsilon}^{\prime}_{n}+1/\sqrt{n})\), and \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}\Big{[}\mathbf{G}_{\boxtimes}^{\Sigma_{X}}(z)\Big{]}-\mathbf{K}(z)\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq O_{z}(\hat{\epsilon}^{\prime}_{n}+1/\sqrt{n})\). We remind our reader that the sequences \(\epsilon\) appearing in the Assumptions 6.1 measure the lack of orthogonality of \(X\) for \(\epsilon_{n}\), and the convergence speeds in the deterministic equivalents \(g_{K_{X}}(z)\approx g_{\tau}(z)\) for \(\hat{\epsilon}_{n}\) and \(\mathcal{G}_{K_{X}}(z)\approx\mathbf{H}(z)\) for \(\hat{\epsilon}^{\prime}_{n}\) respectively. For the next result let us denote: \[\tilde{\epsilon}_{n} =\sqrt{\log n}/n+\zeta_{1}(\tilde{f})\hat{\epsilon}_{n}+\epsilon_{n}+\sqrt{n}\zeta_{2}(\tilde{f})^{2}\epsilon_{n}^{2}+\sqrt{n}\zeta_{3}(\tilde{f})^{2}\epsilon_{n}^{3},\] \[\tilde{\epsilon}^{\prime}_{n} =1/\sqrt{n}+\zeta_{1}(\tilde{f})\hat{\epsilon}^{\prime}_{n}+\epsilon_{n}+n\zeta_{2}(\tilde{f})^{2}\epsilon_{n}^{2}+n\zeta_{3}(\tilde{f})^{2}\epsilon_{n}^{3}.\] \(\tilde{\epsilon}_{n}\) and \(\tilde{\epsilon}^{\prime}_{n}\) will correspond to the new convergence speeds in the deterministic equivalents \(g_{K}(z)\approx g_{\chi}(z)\) and \(\mathcal{G}_{K}(z)\approx\mathbf{K}(z)\). **Theorem 6.4**.: _Uniformly under Assumptions 6.1, there is an event \(\mathcal{B}^{\prime}\subset\mathcal{B}\) with \(\mathbb{P}(\mathcal{B}^{\prime c})\leq O(\sqrt{\log n}/n)\), such that the following conditional concentration properties hold true:_ 1. \((Y|\mathcal{B}^{\prime})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\)_,_ \((g_{K}(z)|\mathcal{B}^{\prime})\propto\mathcal{E}(O_{z}(1/n))\)_, and_ \((\mathcal{G}_{K}(z)|\mathcal{B}^{\prime})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(O_{z}(1/\sqrt{n}))\)_._ 2. \((g_{K}(z)|\mathcal{B}^{\prime})\in g_{\chi}(z)\pm\mathcal{E}(O_{z}(\tilde{\epsilon}_{n}))\) _and_ \((\mathcal{G}_{K}(z)|\mathcal{B}^{\prime})\in_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right|\kern-1.075pt\right|}\mathbf{K}(z)\pm\mathcal{E}\big{(}O_{z}(\tilde{\epsilon}^{\prime}_{n})\big{)}\)_._ Proof.: For a better readability in the upcoming arguments, we choose to omit the spectral parameters \(z\) in most of our notations. From Proposition 2.3 applied to \((W,(X|\mathcal{B}))\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\) there is a constant \(c>0\) and an event \(\mathcal{B}^{\prime}\subset\mathcal{B}\), with \(\mathbb{P}\big{(}\mathcal{B}^{\prime c}\big{)}\leq\mathbb{P}(\mathcal{B}^{c})+ce^{-n/c}\leq O(\sqrt{\log n}/n)\), such that \((WX|\mathcal{B}^{\prime})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}\big{(}\sqrt{n}\big{)}\).
The map \((U,B,D)\mapsto f(U+B)+D\) is Lipschitz with respect to the Frobenius norm, and by independence \(\Big{(}(WX/\sqrt{d_{0}}\,|\mathcal{B}^{\prime}),B,D\Big{)}\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\), thus \((Y|\mathcal{B}^{\prime})=f\Big{(}(WX/\sqrt{d_{0}}\,|\mathcal{B}^{\prime})+B\Big{)}+D\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\). The general concentration properties recalled in Proposition 2.2 imply that \((g_{K}|\mathcal{B}^{\prime})\propto\mathcal{E}(O_{z}(1/n))\) and \((\mathcal{G}_{K}|\mathcal{B}^{\prime})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(O_{z}(1/\sqrt{n}))\). For the second assertion, given the linear concentration properties \((\mathcal{G}_{K}|\mathcal{B}^{\prime})\in_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right|\kern-1.075pt\right|}\mathbb{E}\big{[}(\mathcal{G}_{K}|\mathcal{B}^{\prime})\big{]}\pm\mathcal{E}(O_{z}(1/\sqrt{n}))\) and \((g_{K}|\mathcal{B}^{\prime})\in\mathbb{E}\big{[}(g_{K}|\mathcal{B}^{\prime})\big{]}\pm\mathcal{E}\big{(}O_{z}(1/n)\big{)}\), we only need to prove that \(\big{|}\mathbb{E}\big{[}(g_{K}|\mathcal{B}^{\prime})\big{]}-g_{\chi}\big{|}\leq O_{z}(\tilde{\epsilon}_{n})\), and that \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}\big{[}(\mathcal{G}_{K}|\mathcal{B}^{\prime})\big{]}-\mathbf{K}\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq O_{z}(\tilde{\epsilon}_{n}^{\prime})\). To compare these expectations, as explained in the Remark 6.2, we may assume without loss of generality that \(\mathcal{B}^{\prime}=\Omega\), because the additional \(O_{z}(\sqrt{\log n}/n)\) error terms will not change the final estimates. We apply Theorem 5.7 to the models with deterministic data matrices \(X(\omega)\), uniformly in the outcomes \(\omega\in\Omega_{X}\). Since \(1/\sqrt{n}\leq\tilde{\epsilon}_{n}^{\prime}\) and \(\epsilon_{n}+n\zeta_{2}(\tilde{f})^{2}\epsilon_{n}^{2}+n\zeta_{3}(\tilde{f})^{2}\epsilon_{n}^{3}\leq\tilde{\epsilon}_{n}^{\prime}\), we obtain that \(\mathcal{G}_{K(\omega)}\in_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right|\kern-1.075pt\right|}\mathbf{G}_{\boxtimes}^{\Sigma_{X}(\omega)}\pm\mathcal{E}\big{(}O_{z}(\tilde{\epsilon}_{n}^{\prime})\big{)}\). As a consequence, we get uniformly in \(\omega\in\Omega_{X}\): \[\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}_{W,B,D}[\mathcal{G}_{K}(\omega)]-\mathbf{G}_{\boxtimes}^{\Sigma_{X}(\omega)}\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq O_{z}(\tilde{\epsilon}_{n}^{\prime}).\] Since \(X\) is independent from the other sources of randomness, for any measurable function \(\Phi\) we have \(\mathbb{E}[\Phi(W,X,B,D)|X]=\mathbb{E}_{W,B,D}[\Phi(W,X,B,D)]\). We can thus integrate on \(X\) the above inequality and use the tower property of conditional expectation: \[\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}[\mathcal{G}_{K}]-\mathbb{E}\big{[}\mathbf{G}_{\boxtimes}^{\Sigma_{X}}\big{]}\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq\mathbb{E}_{X}\Big{[}\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}_{W,B,D}[\mathcal{G}_{K}]-\mathbf{G}_{\boxtimes}^{\Sigma_{X}}\right|\kern-1.075pt\right|\kern-1.075pt\right|\Big{]}\leq O_{z}(\tilde{\epsilon}_{n}^{\prime}).\] From Lemma 6.3 we also have \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}\big{[}\mathbf{G}_{\boxtimes}^{\Sigma_{X}}\big{]}-\mathbf{K}\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq\zeta_{1}(\tilde{f})O_{z}(\hat{\epsilon}_{n}^{\prime}+1/\sqrt{n})\leq O_{z}(\tilde{\epsilon}_{n}^{\prime})\), hence \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}[\mathcal{G}_{K}]-\mathbf{K}\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq O_{z}(\tilde{\epsilon}_{n}^{\prime})\). The proof for the Stieltjes transforms is similar. **Corollary 6.5**.: _Uniformly under Assumptions 6.1:_ 1.
\(|g_{K}(z)-g_{\chi}(z)|\leq\sqrt{\log n}\,O_{z}(\tilde{\epsilon}_{n})\) _a.s., and_ \(\left\|\mathcal{G}_{K}(z)-\mathbf{K}(z)\right\|_{\max}\leq\sqrt{\log n}\,O_{z}(\tilde{\epsilon}_{n}^{\prime})\) _a.s._ 2. _If_ \(f\) _is not linear, or if_ \(f\) _is linear and the measures_ \(\tau\) _are supported on the same compact of_ \((0,\infty)\)_, there exists_ \(\theta>0\) _such that_ \(D(\mu_{K},\chi)\leq O(\tilde{\epsilon}_{n}^{\theta})\) _a.s._ 3. _If moreover_ \(\tau\) _converges weakly to a measure_ \(\tau_{\infty}\)_, and if_ \(\gamma_{n}\to\gamma_{\infty}\)_, then_ \(\mu_{K}\) _converges weakly to_ \(\chi_{\infty}=\mathrm{MP}(\gamma_{\infty})\boxtimes(\mathfrak{a}+\mathfrak{b}\tau_{\infty})\)_, and more precisely:_ \[D(\mu_{K},\chi_{\infty})\leq O\Big{(}D(\tau,\tau_{\infty})+|\gamma_{n}-\gamma_{\infty}|+\tilde{\epsilon}_{n}^{\theta}\Big{)}\quad\text{a.s.}\] As we did for Corollary 4.6, we will not prove this result here, but rather prompt our reader to consult the proof of Corollary 5.9 which is extremely similar. ### Application to data matrices with i.i.d. columns In this paragraph we focus on a fairly general setting where the data matrix \(X\) is made of independent samples, and we explore the consequences given by our deterministic equivalents. Let us first mention a general framework on which the Assumption 6.1(4) holds true. **Proposition 6.6** ([20], Proposition 3.3).: _Let \(X\in\mathbb{R}^{d_{0}\times n}\) be a random matrix whose columns are i.i.d. sampled from the distribution of a random vector \(x\in\mathbb{R}^{d_{0}}\), such that \(x\propto_{\left\|\cdot\right\|}\mathcal{E}(1)\), \(\mathbb{E}[x]=0\), and \(\mathbb{E}\Big{[}\big{\|}x\big{\|}^{2}\Big{]}=\sigma_{x}^{2}d_{0}\). We also assume the ratio \(\frac{n}{d_{0}}\) to be bounded from above and away from \(0\)._ _Then there is an event \(\mathcal{B}\) with \(\mathbb{P}(\mathcal{B}^{c})\leq O(1/n)\), such that uniformly in \(\omega\in\mathcal{B}\), \(\left|\kern-1.075pt\left|\kern-1.075pt\left|K_{X}\right|\kern-1.075pt\right|\kern-1.075pt\right|\) and \(\left\|\widetilde{\mathrm{diag}}(\Delta_{X})\right\|\) are bounded, and \(\|\Delta_{X}\|_{\max}\leq O(\epsilon_{n})\) with \(\epsilon_{n}=\sqrt{\log n/n}\)._ _Remark 6.7_.: In [20] the result is stated for a weaker notion called convex concentration. We will not digress on this here but rather refer our reader to [10, Section 1.7] which presents this variant of concentration in detail. Also note that in the above result we only need concentration for the columns of \(X\), while in our deterministic equivalent result we need concentration for the whole matrix \(X\). To obtain deterministic equivalents for the input data matrix, it is of course possible to use again Theorem 4.5: **Proposition 6.8**.: _If \(X\in\mathbb{R}^{d_{0}\times n}\) is a random matrix, \(\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\) concentrated, whose columns are i.i.d.
sampled from the distribution of a random vector \(x\in\mathbb{R}^{d_{0}}\), with \(\left\|\mathbb{E}[x]\right\|\) and \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}[K_{X}]\right|\kern-1.075pt\right|\kern-1.075pt\right|\) bounded, and if the ratio \(n/d_{0}\) is bounded from above and away from \(0\), then with \(\tau=\operatorname{MP}(n/d_{0})\boxtimes\mu_{\mathbb{E}[K_{X}]}\):_ \[\left|\mathbb{E}[g_{K_{X}}(z)]-g_{\tau}(z)\right| \leq O_{z}(1/n),\] \[\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}[\mathcal{G}_{K_{X}}(z)]-\mathbf{G}_{\boxtimes}^{\mathbb{E}[K_{X}]}(z)\right|\kern-1.075pt\right|\kern-1.075pt\right| \leq O_{z}(1/\sqrt{n}).\] If we combine the previous propositions, we obtain deterministic equivalents for the Conjugate Kernel model in a fairly general setting where the data matrix is made of i.i.d. training samples. This encompasses in particular the case where \(X\) is a matrix with i.i.d. \(\mathcal{N}\) entries, which was the original model studied in [20]. _Remark 6.9_.: Note that the typical order of magnitude \(\epsilon_{n}=\sqrt{\log n/n}\) given by Proposition 6.6 is good enough for a meaningful equivalent of the Stieltjes transform, with a \(O_{z}(\log n/\sqrt{n})\) convergence speed. However it is not small enough for the resolvent, where we would have an \(O_{z}(\zeta_{2}(\tilde{f})^{2}\log n)\) error term, unless of course \(\zeta_{2}(\tilde{f})=0\) (see Remark 5.8). If \(\zeta_{2}(\tilde{f})=0\), we obtain a \(O_{z}\Big{(}(\log n)^{3/2}/\sqrt{n}\Big{)}\) error term for the deterministic equivalent of the resolvent. This condition \(\zeta_{2}(\tilde{f})=0\), and more generally the Hermite coefficients of the activation function \(f\), appear in other articles that study the Conjugate Kernel model. Indeed, let us consider \(\tilde{Y}=\sqrt{\mathfrak{a}}Z+\sqrt{\mathfrak{b}}WX/\sqrt{d_{0}}\), and \(\tilde{K}=\tilde{Y}^{\top}\tilde{Y}/d\), where \(Z\) is a third random matrix, independent from the others, and filled with i.i.d. \(\mathcal{N}\) entries. Using Theorem 5.7 conditionally on \(X\) and similar arguments to those of this paper, we see that \(\mathcal{G}_{\tilde{K}}(z)\) also admits \(\mathbf{G}_{\boxtimes}^{\mathbb{E}[K_{X}]}(z)\) as deterministic equivalent, with a \(O_{z}\Big{(}(\log n)^{3/2}/\sqrt{n}\Big{)}\) error term in the case where \(\zeta_{2}(\tilde{f})=0\). In [1, Theorem 2.3], using combinatorics it is shown that the largest eigenvalues of both models behave similarly if \(\zeta_{2}(\tilde{f})=0\). Although we could not manage to retrieve this property with our deterministic equivalent solely, there is without a doubt a connection between these statements. [1] also provides other equivalent models in the case where \(\zeta_{2}(\tilde{f})\neq 0\), which we could not relate to our results. ## 7. Multi-layer neural network model In this section we consider the Conjugate Kernel matrix associated to an artificial neural network with \(L\) hidden layers and a random input matrix: \[X_{0}\to X_{1} =f_{1}\Big{(}W_{1}X_{0}/\sqrt{d_{0}}+B_{1}\Big{)}+D_{1}\] \[X_{1}\to X_{2} =f_{2}\Big{(}W_{2}X_{1}/\sqrt{d_{1}}+B_{2}\Big{)}+D_{2}\] \[\quad\vdots\] \[X_{l}\to X_{l+1} =f_{l+1}\Big{(}W_{l+1}X_{l}/\sqrt{d_{l}}+B_{l+1}\Big{)}+D_{l+1}\] \[\quad\vdots\] \[X_{L-1}\to X_{L} =f_{L}\Big{(}W_{L}X_{L-1}/\sqrt{d_{L-1}}+B_{L}\Big{)}+D_{L}\] The initial data \(X_{0}\in\mathbb{R}^{d_{0}\times n}\) is a random matrix with variance parameter \(\sigma^{2}_{X_{0}}>0\).
Each layer \(l\in\llbracket 1,L\rrbracket\) is made of: * a random weight matrix \(W_{l}\in\mathbb{R}^{d_{l}\times d_{l-1}}\), with variance parameter \(\sigma^{2}_{W_{l}}>0\), * two random biases matrices \(B_{l}\) and \(D_{l}\in\mathbb{R}^{d_{l}\times n}\), \(\sigma^{2}_{B_{l}},\sigma^{2}_{D_{l}}\geq 0\), * and an activation function \(f_{l}:\mathbb{R}\to\mathbb{R}\). At each layer, we define the conjugate kernel matrix \(K_{l}=X_{l}^{\top}X_{l}/d_{l}\), and for \(z\in\mathbb{C}^{+}\) its resolvent \(\mathcal{G}_{l}(z)=\left(K_{l}-zI_{n}\right)^{-1}\), and Stieltjes transform \(g_{l}(z)=(1/n)\mathrm{Tr}\,\mathcal{G}_{l}(z)\). We define by induction the following objects: \[\tilde{\sigma}_{l}^{2} =\sigma^{2}_{W_{l}}\sigma^{2}_{X_{l-1}}+\sigma^{2}_{B_{l}},\] \[\tilde{f}_{l}(t) =f_{l}(\tilde{\sigma}_{l}t),\] \[\sigma_{X_{l}}^{2} =\left\|\tilde{f}_{l}\right\|^{2}_{\mathcal{H}}+\sigma^{2}_{D_{l}},\] \[\mathfrak{a}_{l} =\left\|\tilde{f}_{l}\right\|^{2}_{\mathcal{H}}-\frac{\sigma^{2}_{W_{l}}\sigma^{2}_{X_{l-1}}}{\tilde{\sigma}_{l}^{2}}\zeta_{1}(\tilde{f}_{l})^{2}+\sigma^{2}_{D_{l}},\] \[\mathfrak{b}_{l} =\zeta_{1}(\tilde{f}_{l})^{2}\frac{\sigma^{2}_{W_{l}}}{\tilde{\sigma}_{l}^{2}},\] \[\Delta_{X_{l}} =K_{l}-\sigma^{2}_{X_{l}}I_{n},\] \[\Sigma_{X_{l}} =\mathfrak{a}_{l}I_{n}+\mathfrak{b}_{l}K_{l-1}.\] **Assumptions 7.1**.: 1. \(W_{l}\), \(B_{l}\) and \(D_{l}\) are random, independent as a family for \(l\in\llbracket 1,L\rrbracket\), with i.i.d. \(\mathcal{N}(\sigma^{2}_{W_{l}})\), \(\mathcal{N}(\sigma^{2}_{B_{l}})\) and \(\mathcal{N}(\sigma^{2}_{D_{l}})\) entries respectively. 2. \(\tilde{f}_{l}\) are Lipschitz continuous and Gaussian centered, that is \(\mathbb{E}\Big{[}\tilde{f}_{l}(\mathcal{N})\Big{]}=\mathbb{E}[f_{l}(\tilde{\sigma}_{l}\mathcal{N})]=0\). 3. \(X_{0}\) is random, independent from all the other matrices. There is an event \(\mathcal{B}\) with \(\mathbb{P}(\mathcal{B}^{c})\leq O(\sqrt{\log n}/n)\), such that \((X_{0}|\mathcal{B})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\). 4. There is a sequence \(\epsilon_{n}\) converging to \(0\), with \(\sqrt{\log n/n}\leq O(\epsilon_{n})\), such that uniformly in \(\omega\in\mathcal{B}\), \(\left|\kern-1.075pt\left|\kern-1.075pt\left|K_{0}\right|\kern-1.075pt\right|\kern-1.075pt\right|\) and \(\left\|\widetilde{\mathrm{diag}}(\Delta_{X_{0}})\right\|\) are bounded, and \(\left\|\Delta_{X_{0}}\right\|_{\max}\leq O(\epsilon_{n})\).
5. The ratios \(\gamma_{n}^{(l)}=\frac{n}{d_{l}}\) are bounded from above and away from \(0\) for all \(l\in\llbracket 0,L\rrbracket\). 6. There is a sequence \(\hat{\epsilon}_{n}^{(0)}\geq 0\) such that \(\big{|}\mathbb{E}[g_{0}(z)]-g_{\chi_{n}^{(0)}}(z)\big{|}\leq O_{z}(\hat{\epsilon}_{n}^{(0)})\) for some sequence of measures \(\chi_{n}^{(0)}\) supported on \(\mathbb{R}^{+}\). 7. There is a sequence \(\hat{\epsilon}_{n}^{\prime(0)}\geq\hat{\epsilon}_{n}^{(0)}\) such that \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbb{E}[\mathcal{G}_{0}(z)]-\mathbf{G}_{0}(z)\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq O_{z}(\hat{\epsilon}_{n}^{\prime(0)})\) for some sequence of matrix functions \(\mathbf{G}_{0}:\mathbb{C}^{+}\to\mathbb{C}^{n\times n}\) satisfying \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\mathbf{G}_{0}\right|\kern-1.075pt\right|\kern-1.075pt\right|\leq 1/\Im(z)\). For \(l\in\llbracket 1,L\rrbracket\) we define by induction the measures \(\chi_{n}^{(l)}=\operatorname{MP}\big{(}\gamma_{n}^{(l)}\big{)}\boxtimes\big{(}\mathfrak{a}_{l}+\mathfrak{b}_{l}\chi_{n}^{(l-1)}\big{)}\), and the matrix functions \(\mathbf{G}_{l}(z)\) obtained from \(\mathbf{G}_{l-1}(z)\) in the same way \(\mathbf{K}(z)\) was obtained from \(\mathbf{H}(z)\) in Section 6.2. **Theorem 7.2**.: _Uniformly under Assumptions 7.1, there is an event \(\mathcal{B}^{\prime}\subset\mathcal{B}\) with \(\mathbb{P}(\mathcal{B}^{\prime c})\leq O(\sqrt{\log n}/n)\), such that for all \(l\in\llbracket 1,L\rrbracket\):_ 1. \((X_{l}|\mathcal{B}^{\prime})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(1)\)_,_ \((g_{l}(z)|\mathcal{B}^{\prime})\propto\mathcal{E}(O_{z}(1/n))\)_, and_ \((\mathcal{G}_{l}(z)|\mathcal{B}^{\prime})\propto_{\left\|\cdot\right\|_{F}}\mathcal{E}(O_{z}(1/\sqrt{n}))\)_._ 2. \((g_{l}(z)|\mathcal{B}^{\prime})\in g_{\chi_{n}^{(l)}}(z)\pm\mathcal{E}(O_{z}(\hat{\epsilon}_{n}^{(l)}))\) _and_ \((\mathcal{G}_{l}(z)|\mathcal{B}^{\prime})\in_{\left|\kern-1.075pt\left|\kern-1.075pt\left|\cdot\right|\kern-1.075pt\right|\kern-1.075pt\right|}\mathbf{G}_{l}(z)\pm\mathcal{E}\big{(}O_{z}(\hat{\epsilon}_{n}^{\prime(l)})\big{)}\)_._ Proof.: We proceed by induction on the layers: by Lemma 5.5 the approximate orthogonality propagates from the input of each layer to its output, so that every layer satisfies the Assumptions 6.1, and
using Theorem 6.4 we get the deterministic equivalents for all layers. The error terms \(\hat{\epsilon}_{n}^{(l)}\) and \(\hat{\epsilon}_{n}^{\prime(l)}\) are given by the formulas above Theorem 6.4. _Remark 7.3_.: The above deterministic equivalents are only meaningful if the error terms \(\hat{\epsilon}_{n}^{\prime(l)}\) and \(\hat{\epsilon}_{n}^{(l)}\) vanish when \(n\to\infty\). Similarly to Remark 5.8, let us mention a few cases where their expressions may be greatly simplified: * If \(\zeta_{1}(\tilde{f}_{l})=0\) for some layer \(l\), then for all subsequent layers \(k\geq l\) the error terms \(\hat{\epsilon}_{n}^{\prime(k)}\) and \(\hat{\epsilon}_{n}^{(k)}\) do not depend on \(\hat{\epsilon}_{n}^{(0)}\) and \(\hat{\epsilon}_{n}^{\prime(0)}\) anymore. * If \(\epsilon_{n}=o(n^{-1/4})\), then \(\hat{\epsilon}_{n}^{(l)}=O\Big{(}\hat{\epsilon}_{n}^{(0)}+\sqrt{n}\epsilon_{n}^{2}\Big{)}\). * If \(\zeta_{2}(\tilde{f}_{l})=0\) and \(\epsilon_{n}=o(n^{-1/6})\), then \(\hat{\epsilon}_{n}^{(l)}=O\Big{(}\hat{\epsilon}_{n}^{(0)}+\sqrt{n}\epsilon_{n}^{3}\Big{)}\). * If \(\zeta_{2}(\tilde{f}_{l})=\zeta_{3}(\tilde{f}_{l})=0\), then \(\hat{\epsilon}_{n}^{(l)}=O\Big{(}\hat{\epsilon}_{n}^{(0)}+\sqrt{\log n}/n\Big{)}\). * If \(\epsilon_{n}=o(n^{-1/2})\), then \(\hat{\epsilon}_{n}^{\prime(l)}=O\Big{(}\hat{\epsilon}_{n}^{(0)}+\hat{\epsilon}_{n}^{\prime(0)}+n\epsilon_{n}^{2}\Big{)}\). * If \(\zeta_{2}(\tilde{f}_{l})=0\) and \(\epsilon_{n}=o(n^{-1/3})\), then \(\hat{\epsilon}_{n}^{\prime(l)}=O\Big{(}\hat{\epsilon}_{n}^{\prime(0)}+n\epsilon_{n}^{3}\Big{)}\). * If \(\zeta_{2}(\tilde{f}_{l})=\zeta_{3}(\tilde{f}_{l})=0\), then \(\hat{\epsilon}_{n}^{\prime(l)}=O\Big{(}\hat{\epsilon}_{n}^{\prime(0)}+1/\sqrt{n}\Big{)}\). In the case \(\epsilon_{n}=O(\sqrt{\log n/n})\) corresponding to a data matrix \(X_{0}\) with i.i.d. columns (see Proposition 6.6), and starting from typical \(\hat{\epsilon}_{n}^{(0)}=O_{z}(1/n)\) and \(\hat{\epsilon}_{n}^{\prime(0)}=O_{z}(1/\sqrt{n})\) equivalents for the Stieltjes transform and the resolvent of \(K_{0}\) respectively, Theorem 7.2 gives an \(O_{z}(\log n/\sqrt{n})\) equivalent for the Stieltjes transform. The error for the resolvents does not vanish in general because of the \(O_{z}(\zeta_{2}(\tilde{f})^{2}\log n)\) error term. If \(\zeta_{2}(\tilde{f})=0\) however, we obtain a \(O_{z}((\log n)^{3/2}/\sqrt{n})\) approximation. **Corollary 7.4**.: _Uniformly under Assumptions 7.1:_ 1. \(\Big{|}g_{K_{l}}(z)-g_{\chi_{n}^{(l)}}(z)\Big{|}\leq\sqrt{\log n}\,O_{z}(\hat{\epsilon}_{n}^{(l)})\) _a.s., and_ \(\left\|\mathcal{G}_{K_{l}}(z)-\mathbf{G}_{l}(z)\right\|_{\max}\leq\sqrt{\log n}\,O_{z}(\hat{\epsilon}_{n}^{\prime(l)})\) _a.s._ 2. _If_ \(\tilde{f}_{l}\) _is not linear, or if the measures_ \(\chi_{n}^{(0)}\) _are supported on the same compact of_ \((0,\infty)\)_, there exists_ \(\theta>0\) _such that_ \(D(\mu_{K_{l}},\chi_{n}^{(l)})\leq O(\hat{\epsilon}_{n}^{(l)\theta})\) _a.s._ 3. _If moreover_ \(\chi_{n}^{(0)}\) _converges weakly to a measure_ \(\chi_{\infty}^{(0)}\)_, and if all ratios_ \(\gamma_{n}^{(k)}\to\gamma_{\infty}^{(k)}>0\)_, then_ \(\mu_{K_{l}}\) _converges a.s.
to the measure_ \(\chi_{\infty}^{(l)}\) _defined by induction as:_ \[\chi_{\infty}^{(l)}=\mathrm{MP}\Big{(}\gamma_{\infty}^{(l)}\Big{)}\boxtimes\Big{(}\mathfrak{a}_{l}+\mathfrak{b}_{l}\chi_{\infty}^{(l-1)}\Big{)}.\] _More precisely:_ \[D(\mu_{K_{l}},\chi_{\infty}^{(l)})\leq O\bigg{(}D(\chi_{n}^{(0)},\chi_{\infty}^{(0)})+\max_{0\leq k\leq l}|\gamma_{n}^{(k)}-\gamma_{\infty}^{(k)}|+\hat{\epsilon}_{n}^{(l)\theta}\bigg{)}\quad\text{a.s.}\] Again we will not prove this result here, but refer to the proof of Corollary 5.9 which is similar. _Remark 7.5_.: Our result generalizes previously known global laws on the Conjugate Kernel model. Adapted to our notations, [10, Theorem 3.4] states that, without bias in the model, if \(f\) is twice differentiable, \(\epsilon_{n}=o(n^{-1/4})\), and \(\chi_{n}^{(0)}\) converges weakly to \(\chi_{\infty}^{(0)}\), then \(\mu_{K_{l}}\) converges weakly to \(\chi_{\infty}^{(l)}\) a.s. Taking into account the Remark 7.3, in this setting we have \(\hat{\epsilon}_{n}^{(l)}=O(\hat{\epsilon}_{n}^{(0)}+\sqrt{n}\epsilon_{n}^{2})=O(\hat{\epsilon}_{n}^{(0)}+o(1))\), and we retrieve the a.s. weak convergence of \(\mu_{K_{l}}\) towards \(\chi_{\infty}^{(l)}\), supplemented with quantitative estimates for the Stieltjes transforms and the Kolmogorov distances. ## 8. Appendix: Bounds on Kolmogorov distances between empirical spectral measures Let us remind the notations \(\mathcal{F}_{\nu}\) for the cumulative distribution function of a measure \(\nu\), and \(D(\nu,\mu)=\sup_{t\in\mathbb{R}}|\mathcal{F}_{\nu}(t)-\mathcal{F}_{\mu}(t)|\) for the Kolmogorov distance between two measures \(\nu\) and \(\mu\). It is a well-known fact that the convergence in Kolmogorov distance implies the weak convergence for probability measures, and there is even an equivalence if the limiting measure admits a Holder continuous cumulative distribution function ([1]). In [1] and [11], the authors propose a general method to derive a convergence speed in Kolmogorov distance from estimates on the Stieltjes transforms. This method implies for instance our Proposition 2.8. However the techniques employed are not well suited to work with two discrete measures like empirical spectral distributions. Two matrices close in spectral norm admit the same limiting spectral distribution if it exists. In general, however, this does not imply any bound on the Kolmogorov distances, because the measures are discrete. In this section, we show how a quantitative result may still be obtained, provided the limiting empirical measure is regular enough. We strongly encourage the reader to first examine [11, Section 8] where the technical tools are explained in full detail. **Proposition 8.1**.: _Let \(\Sigma\) and \(\tilde{\Sigma}\in\mathbb{R}^{p\times p}\) be symmetric matrices such that:_ 1. \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\Sigma\right|\kern-1.075pt\right|\kern-1.075pt\right|\) _and_ \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\tilde{\Sigma}\right|\kern-1.075pt\right|\kern-1.075pt\right|\) _are bounded, and_ \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\Sigma-\tilde{\Sigma}\right|\kern-1.075pt\right|\kern-1.075pt\right|\) _converges to_ \(0\)_._ 2.
\(\mu_{\tilde{\Sigma}}\) _converges weakly to some probability measure_ \(\nu^{\infty}\)_, and_ \(\mathcal{F}_{\nu^{\infty}}\) _is Holder continuous for some parameter_ \(\beta>0\)_._ _Then \(\mu_{\Sigma}\) converges weakly to \(\nu^{\infty}\), and more precisely in Kolmogorov distance:_ \[D(\mu_{\Sigma},\nu^{\infty})\leq O\bigg{(}\left|\kern-1.075pt\left|\kern-1.075pt\left|\Sigma-\tilde{\Sigma}\right|\kern-1.075pt\right|\kern-1.075pt\right|^{\frac{\beta}{4+2\beta}}+D\big{(}\mu_{\tilde{\Sigma}},\nu^{\infty}\big{)}\bigg{)}.\] **Lemma 8.2**.: _For any \(y\in(0,1)\) and \(A>0\):_ \[D(\mu_{\tilde{\Sigma}},\mu_{\Sigma})\leq O\bigg{(}\frac{A}{y^{2}}\left|\kern-1.075pt\left|\kern-1.075pt\left|\Sigma-\tilde{\Sigma}\right|\kern-1.075pt\right|\kern-1.075pt\right|+\frac{1}{Ay}+y^{\beta}+D\big{(}\mu_{\tilde{\Sigma}},\nu^{\infty}\big{)}\bigg{)}.\] Proof.: We closely follow [11, Section 3.1] with the only key difference that we plug in Bai's Inequality the following bound: \[\left|\mathcal{F}_{\mu_{\tilde{\Sigma}}}(x+t)-\mathcal{F}_{\mu_{\tilde{\Sigma}}}(x)\right|\leq|\mathcal{F}_{\nu^{\infty}}(x+t)-\mathcal{F}_{\nu^{\infty}}(x)|+2\,D\big{(}\mu_{\tilde{\Sigma}},\nu^{\infty}\big{)}.\] We thus obtain: \[D(\mu_{\tilde{\Sigma}},\mu_{\Sigma}) \leq\frac{2}{\pi}\Bigg{(}\int_{\mathbb{R}}\big{|}g_{\mu_{\tilde{\Sigma}}}-g_{\mu_{\Sigma}}\big{|}(t+iy)dt+\frac{1}{y}\sup_{x\in\mathbb{R}}\int_{\left[\pm 2y\tan\left(\frac{3\pi}{8}\right)\right]}|\mathcal{F}_{\nu^{\infty}}(x+t)-\mathcal{F}_{\nu^{\infty}}(x)|dt\Bigg{)}+O\big{(}D\big{(}\mu_{\tilde{\Sigma}},\nu^{\infty}\big{)}\big{)}.\] For \(z\in\mathbb{C}^{+}\) a classical application of the resolvent identity gives \(\big{|}g_{\Sigma}(z)-g_{\tilde{\Sigma}}(z)\big{|}\leq\left|\kern-1.075pt\left|\kern-1.075pt\left|\Sigma-\tilde{\Sigma}\right|\kern-1.075pt\right|\kern-1.075pt\right|/\Im(z)^{2}\). The rest of the proof is exactly the same as in [11, Section 3.1]. Proof of Proposition 8.1.: We optimize \(y\) and \(A\) in the above lemma by choosing \(y_{n}=\left|\kern-1.075pt\left|\kern-1.075pt\left|\Sigma-\tilde{\Sigma}\right|\kern-1.075pt\right|\kern-1.075pt\right|^{\frac{1}{4+2\beta}}\) and \(A_{n}=\left|\kern-1.075pt\left|\kern-1.075pt\left|\Sigma-\tilde{\Sigma}\right|\kern-1.075pt\right|\kern-1.075pt\right|^{-1/2}\), which leads to the bound \(D(\mu_{\tilde{\Sigma}},\mu_{\Sigma})\leq O\bigg{(}\left|\kern-1.075pt\left|\kern-1.075pt\left|\Sigma-\tilde{\Sigma}\right|\kern-1.075pt\right|\kern-1.075pt\right|^{\frac{\beta}{4+2\beta}}+D\big{(}\mu_{\tilde{\Sigma}},\nu^{\infty}\big{)}\bigg{)}\). A final triangle inequality proves the proposition.
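To make the objects of this appendix concrete, the following minimal numerical sketch (our illustration, assuming NumPy; all function and variable names are ours) computes the Kolmogorov distance between two empirical spectral measures and tests the global law of Section 5.3: the constants \(\mathfrak{a}\) and \(\mathfrak{b}\) of the \(\tanh\) model are estimated by Monte Carlo, and the spectrum of \(K\) is compared with a sample of \((\mathfrak{a}+\mathfrak{b})\mathrm{MP}(1)\).

```python
import numpy as np

rng = np.random.default_rng(0)

def kolmogorov_distance(eigs_a, eigs_b):
    """Kolmogorov distance between the empirical measures of two spectra.
    Both CDFs are right-continuous step functions, so the supremum of their
    difference is attained at one of the atoms."""
    a, b = np.sort(eigs_a), np.sort(eigs_b)
    grid = np.concatenate([a, b])
    F_a = np.searchsorted(a, grid, side="right") / len(a)
    F_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.abs(F_a - F_b).max())

# Model of Section 5.3: Y = tanh(U + B) + D, K = Y^T Y / n, S = I + (J - I)/n.
n = 2000
S = np.full((n, n), 1.0 / n) + (1.0 - 1.0 / n) * np.eye(n)
U = np.linalg.cholesky(S) @ rng.standard_normal((n, n))  # i.i.d. N(0, S) columns
Y = np.tanh(U + rng.standard_normal((n, n))) + rng.standard_normal((n, n))
eigs_K = np.linalg.eigvalsh(Y.T @ Y / n)

# Gaussian moments of f~(t) = tanh(sqrt(2) t), estimated by Monte Carlo.
g = rng.standard_normal(10**6)
f_tilde = np.tanh(np.sqrt(2.0) * g)
norm_sq = np.mean(f_tilde**2)             # ||f~||_H^2
zeta1 = np.mean(g * f_tilde)              # first Hermite coefficient zeta_1(f~)
a_const = norm_sq - zeta1**2 / 2.0 + 1.0  # a, with sigma~^2 = 2, sigma_D^2 = 1
b_const = zeta1**2 / 2.0                  # b

# Limiting law (a + b) MP(1), sampled through a white Wishart matrix.
Z = rng.standard_normal((n, n))
eigs_limit = (a_const + b_const) * np.linalg.eigvalsh(Z.T @ Z / n)

print(kolmogorov_distance(eigs_K, eigs_limit))  # small for large n
```

As a sanity check, the printed distance should decrease as \(n\) grows, in line with the \(O(n^{-\theta})\) rate established above.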
2304.12214
Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration
Biologically inspired Spiking Neural Networks (SNNs) have attracted significant attention for their ability to provide extremely energy-efficient machine intelligence through event-driven operation and sparse activities. As artificial intelligence (AI) becomes ever more democratized, there is an increasing need to execute SNN models on edge devices. Existing works adopt weight pruning to reduce SNN model size and accelerate inference. However, these methods mainly focus on how to obtain a sparse model for efficient inference, rather than training efficiency. To overcome these drawbacks, in this paper, we propose a Neurogenesis Dynamics-inspired Spiking Neural Network training acceleration framework, NDSNN. Our framework is computationally efficient and trains a model from scratch with dynamic sparsity without sacrificing model fidelity. Specifically, we design a new drop-and-grow strategy with decreasing number of non-zero weights, to maintain extreme high sparsity and high accuracy. We evaluate NDSNN using VGG-16 and ResNet-19 on CIFAR-10, CIFAR-100 and TinyImageNet. Experimental results show that NDSNN achieves up to 20.52\% improvement in accuracy on Tiny-ImageNet using ResNet-19 (with a sparsity of 99\%) as compared to other SOTA methods (e.g., Lottery Ticket Hypothesis (LTH), SET-SNN, RigL-SNN). In addition, the training cost of NDSNN is only 40.89\% of the LTH training cost on ResNet-19 and 31.35\% of the LTH training cost on VGG-16 on CIFAR-10.
Shaoyi Huang, Haowen Fang, Kaleel Mahmood, Bowen Lei, Nuo Xu, Bin Lei, Yue Sun, Dongkuan Xu, Wujie Wen, Caiwen Ding
2023-04-24T15:54:22Z
http://arxiv.org/abs/2304.12214v1
# Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration ###### Abstract Biologically inspired Spiking Neural Networks (SNNs) have attracted significant attention for their ability to provide extremely energy-efficient machine intelligence through event-driven operation and sparse activities. As artificial intelligence (AI) becomes ever more democratized, there is an increasing need to execute SNN models on edge devices. Existing works adopt weight pruning to reduce SNN model size and accelerate inference. However, these methods mainly focus on how to obtain a sparse model for efficient inference, rather than training efficiency. To overcome these drawbacks, in this paper, we propose a Neurogenesis Dynamics-inspired Spiking Neural Network training acceleration framework, NDSNN. Our framework is computationally efficient and trains a model from scratch with dynamic sparsity without sacrificing model fidelity. Specifically, we design a new drop-and-grow strategy with a decreasing number of non-zero weights, to maintain extremely high sparsity and high accuracy. We evaluate NDSNN using VGG-16 and ResNet-19 on CIFAR-10, CIFAR-100 and TinyImageNet. Experimental results show that NDSNN achieves up to 20.52% improvement in accuracy on Tiny-ImageNet using ResNet-19 (with a sparsity of 99%) as compared to other SOTA methods (e.g., Lottery Ticket Hypothesis (LTH), SET-SNN, RigL-SNN). In addition, the training cost of NDSNN is only 40.89% of the LTH training cost on ResNet-19 and 31.35% of the LTH training cost on VGG-16 on CIFAR-10. spiking neural network, neural network pruning, sparse training, neuromorphic computing ## I Introduction Biologically inspired Spiking Neural Networks (SNNs) have attracted significant attention for their ability to provide extremely energy-efficient machine intelligence. SNNs achieve this performance through event-driven operation (e.g., computation is only performed on demand) and the sparse activities of spikes. As artificial intelligence (AI) becomes ever more democratized, there is an increasing need to execute SNN models on edge devices with limited memory and restricted computational resources [1]. However, modern SNNs typically consist of at least millions to hundreds of millions of parameters (i.e., weights), which require large memory storage and many computations [2, 3, 4]. Therefore, it is desirable to investigate efficient implementation techniques for SNNs. Recently, the use of sparsity to compress SNN model size and accelerate inference has attracted a surge of attention [5, 6], including the train-prune-retrain method (e.g., alternating direction method of multipliers (ADMM) pruning [5, 7, 8, 9]) and iterative pruning (e.g., lottery ticket hypothesis (LTH) [6, 10]). The aforementioned methods are shown in Fig. 1 and mainly focus on how to obtain a sparse model for efficient inference. However, the training process to obtain a sparse model is not efficient. To illustrate, consider the case of VGG-16 on CIFAR-10: for train-prune-retrain [5, 11] (orange line), the first 150 training epochs are dense (zero sparsity); for iterative pruning [6], the sparsity gradually increases in the first 150 training epochs. As shown in the highlighted grey area, both methods have low sparsity and hence low training efficiency.
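As a rough quantification of this gap, the back-of-the-envelope sketch below (our illustration, not from the paper) integrates the per-epoch density \(1-\)sparsity over training as a crude proxy for training FLOPs. The 150-epoch breakpoint and the 80%-to-95% sparse-from-scratch schedule are taken from the text and Fig. 1; the linear ramp shapes, the 300-epoch total budget, and the assumption that per-epoch cost scales with density are ours.

```python
import numpy as np

def relative_cost(sparsity_per_epoch):
    """Crude training-cost proxy: per-epoch FLOPs proportional to density."""
    return float(np.sum(1.0 - np.asarray(sparsity_per_epoch)))

epochs = 300
# Dense for 150 epochs, then retraining at 95% sparsity (train-prune-retrain).
tpr = np.concatenate([np.zeros(150), np.full(150, 0.95)])
# Sparsity ramps up over the first 150 epochs (iterative pruning).
ip = np.concatenate([np.linspace(0.0, 0.95, 150), np.full(150, 0.95)])
# Sparse from scratch, 80% rising to 95% as connections are dropped (NDSNN).
nd = np.linspace(0.80, 0.95, epochs)

base = relative_cost(tpr)
for name, sched in [("train-prune-retrain", tpr),
                    ("iterative pruning", ip),
                    ("NDSNN-style", nd)]:
    print(f"{name}: {relative_cost(sched) / base:.2f}")
```

Under these assumed shapes, the sparse-from-scratch schedule spends only a fraction of the dense-phase cost of the other two, which is the intuition behind the grey region of Fig. 1.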
In the field of neuroscience, the total number of neurons in the human hippocampus declines with age through a process of neuron degeneration (i.e., old neurons' death) and redifferentiation (i.e., new neurons' birth), referred to as Neurogenesis Dynamics [12, 13]. In this paper, inspired by the Neurogenesis Dynamics, we propose an efficient Spiking Neural Network training acceleration framework, NDSNN. We analogize the neurons' death-and-birth renewal scheme to the drop-and-grow schedule in SNN sparse training. We dynamically reduce the number of neuron connections in SNN sparse training, to reduce the training memory footprint and improve training efficiency [14]. The number of non-zeros in the weight mask tensor (i.e., a binary tensor of the same size as the weight tensor, whose 0s / 1s denote zeros / non-zeros in the corresponding weight tensor) decreases as the mask is dynamically updated. The sparsity during NDSNN training is illustrated in Fig. 1 as the green curve. We can train from a highly sparsified model (e.g., an initial sparsity of 80%) and reach the final sparsity (e.g., 95%). Overall our paper makes the following contributions: * Inspired by neurogenesis dynamics, we propose an energy-efficient spiking neural network training workflow. Fig. 1: Sparsity change of different sparsification methods on VGG-16 / ResNet-19 on CIFAR-10. * To reach high sparsity and high energy efficiency with dense-model-like accuracy, we design a new drop-and-grow strategy with a decreasing number of non-zero weights in the process of dynamically updating the sparse mask. * We evaluate the training efficiency of NDSNN via the normalized spike rate. Results show that the training cost of NDSNN on ResNet-19 and VGG-16 is 40.89% and 31.35% of the state-of-the-art (SOTA), respectively. * We demonstrate extremely high sparsity (i.e., 99%) model performance in SNN-based vision tasks with acceptable accuracy degradation. We evaluate NDSNN using VGG-16 and ResNet-19 on CIFAR-10, CIFAR-100 and TinyImageNet. Experimental results show that NDSNN achieves even higher accuracy than the dense model for ResNet-19 on CIFAR-10. On Tiny-ImageNet, NDSNN achieves up to a 20.52% increase in accuracy compared to the SOTA at a sparsity of 99%. The training cost of NDSNN on VGG-16 is 10.5% of that of training a dense model. ## II Related Work and Background ### _Related Work on Sparsity Exploration in SNN_ Several network compression schemes for SNNs have been proposed. In [5] alternating direction method of multipliers (ADMM) pruning is employed to compress SNNs on various datasets. However, this technique has significant accuracy loss, especially when the model has high sparsity. Although iterative magnitude pruning (IMP) can find highly sparse neural networks with high accuracy, it is time-consuming (e.g., it takes 2720 epochs to achieve 89.91% sparsity on both CIFAR-10 and CIFAR-100) [6]. In [15] they propose a Spike Timing Dependent Plasticity (STDP) based pruning method. Connections between pre-synaptic and post-synaptic neurons with low spike correlation are pruned. The correlation is tracked by the STDP algorithm. The performance of this method is limited, as the original model only achieves 93.2% accuracy on MNIST, and accuracy drops to 91.5% after 92% of the weights are pruned. In [16] they propose a technique to prune connections during training. Weights are pruned if they fall below a certain threshold or decrease significantly over a number of training iterations. However, the method's evaluation is limited, as it is only tested on a single dataset, Caltech-101.
### _Spiking Neural Network_ A key difference of SNN from DNN is that spiking neuron is a stateful system that can be modeled by different equations. The commonly used Leaky Integrate and Fire (LIF) spiking neuron is defined as follows. \[v[t]=\alpha v[t-1]+\sum_{i}w_{i}s_{i}[t]-\vartheta o[t-1] \tag{1a}\] \[o[t]=u(v[t]-\vartheta)\] (1b) \[u(x)=0,x<0\text{ otherwise 1} \tag{1c}\] where \(t\) indicates time. Eq. (1a) depicts the dynamics of the neuron's membrane potential \(v[t]\). \(\alpha\in(0,1]\) determines \(v[t]\) the decay speed. \(s_{i}[t]\in\{0,1\}\) is a sequence which consists of only 0 and 1 to represent the \(i-th\) input spike train and \(w_{i}\) is the corresponding weight. \(o[t]\in\{0,1\}\) is the neuron's output spike train, \(u(x)\) is the Heaviside step function. Note that Eq. (1a) is recursive in the temporal domain, so it is possible to use Backpropagation Through Time (BPTT) to train SNNs. However, an issue arises with Eq. (1c), whose derivative is the Dirac Delta function \(\Delta(x)\). To overcome this, surrogate gradient method can be used [17] so that the derivative of \(u(x)\) is approximated by the derivative of a smooth function. In the forward pass, the SNN still outputs spikes, while in backward pass, \(\Delta(x)\) is replaced by a surrogate function so the Heaviside step function has an approximate derivative. The BPTT for SNNs using a surrogate gradient is derived as follows. Let \(L\) be the loss, \(\delta_{l}[t]\)\(=\)\(\frac{\partial L}{\partial o[t]}\) be the error signal at layer \(l\) time step \(t\), \(\epsilon_{l}[t]\)\(=\)\(\frac{\partial L}{\partial v_{l}[t]}\). \(\delta_{l}[t]\) is propagated recursively as following rules, and gradient \(l^{th}\) layer weight \(w_{l}\) is calculated using Eq. (3). \[\delta_{l}[t]=\epsilon_{l+1}[t]w_{l+1} \tag{2a}\] \[\epsilon_{l}[t]=\delta_{l}[t]\phi_{l}[t]+\alpha\epsilon_{l}[t]\] (2b) \[\frac{\partial L}{\partial w_{l}}=\sum_{t=0}^{T-1}\epsilon_{l}[t]\cdot[s_{l}[t ]]^{\intercal} \tag{2c}\] Fig. 2: (a) shows the neurogenesis dynamics of nerve cells in the nervous system. 1 indicates inflammatory factors accumulating in nerve system. 2 indicates neuron degeneration and redifferentiation process. 2 is the final nerve system. (b) shows drop and grow process of the neural network. The total number of nonzero weights decreases with the increasing of drop-and-grow times. where \(\phi_{l}[t]{=}\frac{\partial o_{l}[t]-\partial}{\partial v_{l}[t]}{=}\frac{ \partial u(v_{l}[t]-\partial)}{\partial v_{l}[t]}\). Note that \(u(x)\) does not have a well-defined derivative, so we use the gradient surrogate function proposed in [18] to approximate it, such that: \[\frac{\partial u(x)}{\partial x}\approx\frac{1}{1+\pi^{2}x^{2}} \tag{3}\] ## III Neurogenesis Dynamics-inspired Sparse Training on SNN We illustrate the overall workflow of the biological and corresponding computational methods in Fig. 2. ### _Analogizing Neurogenesis Dynamics in Sparse Training_ In human hippocampus, the total number of neurons declines with age during the process of neuron's degeneration (i.e., old neuron's death) and redifferentiation (i.e., neuron's birth) [13]. We analogize the neuron's death-and-birth renewal scheme to the drop-and-grow schedule in sparse training [19, 20, 21, 22]. Here _drop_ means the insignificant connections are deactivated (weights with least absolute magnitude are set as zeros). In our formulation _grow_ refers to creating new connections (weights with high importance are updated to nonzeros). 
For the dynamics of neurogenesis in the human hippocampus, the neurons declines with age [13]. Similarily in our framework, we reduce the number of connections or reduce the number of activated weights in the sparse training process in consideration of the memory limitation of neuromorphic chips [14]. ### _Problem Definiton_ We aim to achieve high sparsity (low memory overhead) during training and high energy efficiency (through SNN implementation) without noticeable accuracy loss. The problem is formally defined as: consider a \(L\)-layer SNN with dense weights \(W=[W_{1},W_{2},...,W_{L}]\), a dataset \(\mathcal{X}\), and a target sparsity \(\theta_{f}\), our goal is to develop an training workflow such that the training process requires less memory overhead and less computation, and the trained model achieves high accuracy. ### _Neurogenesis Dynamics-inspired Spiking Neural Network (NDSNN) Training Acceleration Framework_ Fig. 2 shows the overview of neurogenesis dynamics-inspired spiking neural network (NDSNN) workflow. Fig. 2(a) demonstrates the neuron cell loss or degeneration (the grey neuron cells) and redifferentiation process (the green neuron cells). In Fig. 2(b) we illustrate the training process of NDSNN, where we drop the weights (i.e., setting the smallest positive weights and the largest negative weights as zeros) in grey color and grow the weights (i.e., update the zeros weights to nonzeros) in green color, every \(\Delta T\) iterations. The number of weights we dropped is larger than the grown ones each drop-and-grow schedule. Thus, the number of nonzero weights decreases with the increasing of drop-and-grow times. The goal of the proposed training method is to reduce memory footprint and computations during the whole training process. To achieve it, our proposed method uses less weights and gradients than SOTA methods via dynamically updating the sparse mask and training from scratch. Specifically, we denote \(\theta_{i}\) and \(\theta_{f}\) as the initial and target sparsity, respectively. \(t_{0}\) is the starting step of training, \(\Delta T\) is the pruning frequency. The full training workflow is be formed in the following steps. **First round / last round weight sparsity distributions across different layers.** Let \(\Theta_{i}=\theta_{i}^{1},\theta_{i}^{2},...,\theta_{i}^{L}\) denote the initial sparsity distribution (i.e., sparsity of different layers at the beginning of training) of SNN model and \(\Theta_{f}=\theta_{f}^{1},\theta_{f}^{2},...,\theta_{f}^{L}\) denote the final sparsity distribution (i.e., sparsity of different layers at the end of training) of the model. Here, we use ERK [23] to distributing the non-zero weights across the layers while maintaining the overall sparsity. We denote \(n^{l}\) as the number of neurons at \(l\)-th layer and \(w^{l}\), \(h^{l}\) as the width and height of the \(l\)-th convolutional kernel,then the number of parameters of the sparse convolutional layers are scaled proportional to \(1-\frac{n^{l-1}\cdot n^{l-1}+w^{l}+h^{l}}{n^{l-1}-\pi^{2}\pi^{l}w^{l}+h^{l}}\). In our case, the overall sparsity at the beginning of training \(\theta_{i}\) is less than the one at the end of training \(\theta_{f}\). Following the same scaling proportion distribution, the sparsity of each separate convolutional layer at the beginning of training is smaller than it's sparsity at the end of training (i.e., for \(l\)-th layer, we have \(\theta_{i}^{l}\leq\theta_{f}^{l}\)). 
The sparsity of \(l\)-th layer at \(t\)-th iteration is formulated as: \[\begin{split}\theta_{t}^{l}&=\theta_{f}^{l}+(\theta _{i}^{l}-\theta_{f}^{l})(1-\frac{t-t_{0}}{n\Delta t})^{3},\\ t&\in\{t_{0},t_{0}+\Delta T,...,t_{0}+n\Delta T \},l\in\{1,2,...,L\}.\end{split} \tag{4}\] **Training.** We define non-active weights as weights has value of zeros and active weights as weights has value of non-zeros. For each iteration, we only update the active weights. Fig. 3: A toy example of NDSNN training process. Red arrows denote dropping weights and green arrows denote growing weights. In backward path, gradients are calculated using BPTT with surrogate gradient method, and forward pass is carried out like standard neural network training. \(\mathbf{\SIUnit{1}}\) **Dropping (neuron death).** During training, the sparse masks are updated every \(\Delta T\) iteration, i.e., for \(l\)-th layer, we drop \(D_{d}^{l}\) weights that are closest to zero (i.e., the smallest positive weights and the largest negative weights). we denote \(d_{0}\) as the initial death ratio (i.e., the ratio of weights to prune from non-zeros) and \(d_{t}\) as the death ratio at step \(t\). We use the cosine annealing learning rate scheduler [24] for death ratio updating. Then, we have \[\begin{split} d_{t}=& d_{min}+0.5(d_{0}-d_{min})(1+ cos(\frac{\pi t}{n\Delta t})),\\ & t\in\{t_{0},t_{0}+\Delta T,...,t_{0}+n\Delta T\},\end{split} \tag{5}\] where \(d_{min}\) is the minimum death rate during the training. At \(q^{th}\) round, the number of 1s in sparse mask of \(l\)-th layer \({N_{pre}}_{q}^{l}\) before dropping is \[{N_{pre}}_{q}^{l}=N^{l}(1-\theta_{q-1}^{l}),1\leq q\leq n,l\in\{1,2,...,L\}. \tag{6}\] where \(N^{l}\) is the number of all weight elements in \(l\)-th layer and \(\theta_{q-1}^{l}\) is the training sparsity of \(l\)-th layer at \((q-1)\)-th round. We denote the number of dropped weights of \(l\)-th layer at \(q\)-th round as \(D_{q}^{l}\), then, we have \[D_{q}^{l}=d_{t}\times{N_{pre}}_{q}^{l},1\leq q\leq n,l\in\{1,2,...,L\}. \tag{7}\] \(\mathbf{\SIUnit{1}}\) **Growing (neuron birth).** After dropping weights, the number of 1s in \(l\)-th layer sparse mask \({N_{post}}_{q}^{l}\) is \[{N_{post}}_{q}^{l}={N_{pre}}_{q}^{l}-D_{q}^{l},1\leq q\leq n,l\in\{1,2,...,L\}. \tag{8}\] Combining Equation 4 and 8, we obtain the number of weights to be grown, which is denoted as \(G_{q}^{l}\), we have \[G_{q}^{l}=N^{l}-{N_{post}}_{q}^{l}-\theta_{t}^{l}\times N^{l},1\leq q\leq n,l \in\{1,2,...,L\}. \tag{9}\] The toy example of the training process is shown in Fig. 3. ### _Memory Footprint Analysis_ We further investigate the training efficiency of our proposed method in terms of memory footprint. Suppose a sparse SNN model with a sparsity ratio (the percentage of number of zeros in weight) of \(\theta\in[0,1]\). In each round of forward and backward propagation, \(N\) weights and \(tN\) gradients are saved. For training, we use single precision (FP32) for weights and gradients to guarantee training accuracy. For inference, the weight precision \(b_{w}\) is platform/implementation specific, for example Intel Loihi uses 8 bits [14], mixed-signal design HICANN [26] has 4 bits for weights, FPGA-based designs such as [27] employes mixed precision (4 bits - 16 bits). For sparse models, we use indices (denoted by \(b_{idx}\)-bit numbers) to represent the sparse topology of weights/gradients within the dense model. Compressed sparse row (CSR) is a commonly used sparse matrix storage format. 
Consider a 2-D weight tensor reshaping from a 4-D tensor. Each row of the 2-D weight tensor denotes the weight from a filter. For the \(l\)-th layer, we denote \(F_{l}\), \(Ch_{l}\), and \(K_{l}\) as the number of filters (output channels), number of channels (input channels), and kernel size, respectively. Thus, the size of the weight matrix is \(F_{l}\) rows by \(Ch_{l}\cdot K_{l}^{2}\) columns. Thus, the total number of indices of the entire network is \((1-\theta)\cdot N+\sum_{l}(F_{l}+1)\). And the memory footprint of model representation together with gradients for unstructured sparsity is \((1-\theta)\cdot((1+t)N\cdot b_{w}+N\cdot b_{idx})+\sum_{l}((F_{l}+1)\cdot b_{ idx})\). Since the number of filters is much smaller than the total number of weights, we approximate the memory footprint as \((1-\theta)\cdot((1+t)N\cdot b_{w}+N\cdot b_{idx})\). Given same timestep \(t\), higher sparsity means the lower memory overhead, which support the effectiveness of proposed method in reducing training memory since it has much higher training sparsity than SOTAs. ## IV Experimental Results ### _Experimental Setup_ #### Iv-A1 Architectures and Datasets. We evaluate NDSNN on two popular neural network architectures (i.e., VGG-16 and ResNet-19) for three datasets (i.e., CIFAR-10, CIFAR-100 and Tiny-ImageNet). For fair comparison, we set the total number of training epochs as 300 on both CIFAR-10 and CIFAR-100, while as 100 on Tiny-ImageNet as LTH-SNN. We use SGD as the optimizer while setting the momentum as 0.9 and weight decay as \(5e-4\). Also, we follow the setting in [6] and set the training batch size as 128, initial learning rate as \(3e-1\) and timesteps as 5 across all experiments. #### Iv-A2 Baselines. We train VGG-16 / ResNet-19 dense SNNs on various datasets and use them as our dense baselines. Other baselines are divided into two types based on the initial sparsity status of the training process (i.e., dense or sparse). For the former, we choose the SOTA pruning methods (i.e., LTH and ADMM) on SNN. For the latter, we implement the sparse training methods (i.e., SET [23], RigL [25]) on SNN models (i.e., SET-SNN, RigL-SNN). #### Iv-A3 Evaluation Platform We conduct all experiments using PyTorch with CUDA 11.4 on Quadro RTX6000 GPU and Intel(R) Xeon(R) Gold 6244 @ 3.60GHz CPU. We use SpikingJelly [28] package for SNNs implementation. ### _Accuracy Evaluations of NDSNN_ #### V-B1 CIFAR-10 and CIFAR-100 Evaluation results on CIFAR-10 and CIFAR-100 using VGG-16 and ResNet-19 are shown in Table I. We compare NDSNN with baselines at sparsity ratios of 90%, 95%, 98% and 99% on different models and datasets. Experimental results show that NDSNN outperforms the SOTA baselines on each dataset for VGG-16 and ResNet-19. Specifically, on CIFAR-100, for VGG-16, our proposed method has up to 3.66%, 3.26%, 5.47%, 7.24% increase in accuracy (that is relatively 5.68%, 5.14%, 9.42% and 14.24% higher accuracy) at four different sparsity, respectively. While for ResNet-19, NDSNN has 15.42%, 14.17%, 23.88% and 18.15% increase in accuracy (that is relatively 28.2%, 14.17%, 23.38%, 18.15% higher accuracy) compared to LTH-SNN, obtains 1.96%, 4.30%, 7.99%, 10.5% higher accuracy than SET-SNN and achieves 2.75%, 3.72%, 8.52%, 11.65% higher accuracy than RigL-SNN at a sparsity of 90%, 95%, 98% and 99%, respectively. On CIFAR-10, for VGG-16, NDSNN has up to 2.07%, 1.34%, 2.36%, 4.73% relatively higher accuracy than SOTA at sparsity of 90%, 95%, 98% and 99%, respectively. 
While for ResNet-19, NDSNN has even higher accuracy than the dense model at a sparsity of 90% and achieves the highest accuracy compared to other baselines at different sparsity. #### V-B2 Tiny-ImageNet The accuracy results on Tiny-ImageNet are shown in Table I. Overall, for both VGG-16 and ResNet-19, NDSNN outperforms other baselines. More specifically, for VGG-16, NDSNN has up to 7.1% higher accuracy than other methods at a sparsity of 99%. For ResNet-19, NDSNN has 10.85%, 9.71%, 13.75%, 20.52% higher accuracy than LTH-SNN at sparsity of 90%, 95% and 98%, 99%, respectively. Compared to SET-SNN, NDSNN has 7.10% and 14.17% increase in accuracy at the sparsity of 99% for VGG-16 and ResNet-19, independently. Compared to RigL-SNN, NDSNN has up to 5.45% and 17.83% increase in accuracy at a sparsity of 99% for VGG-16 and ResNet-19, respectively. #### V-B3 Comparison with ADMM Pruning We compare NDSNN with ADMM pruning using data from [5] as shown in Table II. It can be seen that the accuracy loss become noticeable when the sparsity reaches 75% on CIFAR-10 using LeNet-5. However, the accuracy loss is almost 0 on CIFAR-10 using VGG-16 at the sparsity of 75% which indicates that NDSNN has less accuracy loss when achieving the same sparsity. ### _Efficiency Evaluations of NDSNN_ We quantitatively analyze the training cost of dense SNN model, LTH and NDSNN, as showed in Fig. 5. Since no computation is required if there is no input spikes or a connection is pruned. Such that the relative computation cost of sparse model with respect to dense model at training epoch \(i\) can be calculated as: \([R_{s}^{i}\times Sparsity_{i}]/R_{d}^{i}\), where \(R_{s}^{i}\) or \(R_{d}^{i}\) is the average spike rate of the sparse model (LTH/NDSNN) or the dense model at epoch \(i\), which can be tracked throughout entire training. \(Sparsity_{i}\) is the sparsity of the model. On CIFAR 10, the training cost of NDSNN VGG-16 is 10.5% of training a dense model. The cost of NDSNN on ResNet-19 and VGG-16 is 40.89% and 31.35% of LTH, respectively. On CIFAR 100, the training cost of NDSNN ResNet-19 is 27.63% and 40.12% of dense model and LTH respectively; The training cost of NDSNN VGG-16 is 11.87% and 36.16% of dense model and LTH respctively. ### _Design Exploration_ #### V-D1 Effects of Different Initial Sparsity As the initial sparsity has influence on the average training sparsity, thus the overall training cost. we study the effects of different initial sparsity on accuracy and training FLOPs. Experimental results on VGG-16 / ResNet-19 models and CIFAR-10 / CIFAR-100 datasets are shown in Table III. It's observed that the accuracy gap is small for different initial sparsity. For high training sparsity, we choose initial sparsity from {0.6, 0.7, 0.8} for experiments on CIFAR-10 / CIFAR-100 / TinyImageNet. \begin{table} \begin{tabular}{|l|c c c|} \hline **Dataset** & \multicolumn{3}{c|}{**CIFAR-10**} \\ \hline **Sparsity ratio** & 40\% & 50\% & 60\% & 75\% \\ \hline **LeNet-5(Dense)** & \multicolumn{3}{c|}{89.53} \\ \hline ADMM [5] & 89.75 & 89.15 & 88.35 & 87.38 \\ \hline Acc. Loss (\%) & 0.18 & -0.38 & -1.18 & -2.15 \\ \hline **VGG-16(Dense)** & \multicolumn{3}{c|}{92.59} \\ \hline NDSNN (ours) & 92.46 & 92.32 & 92.33 & 92.18 \\ \hline Acc. Loss (\%) & -0.001 & -0.003 & -0.003 & -0.004 \\ \hline \end{tabular} \end{table} TABLE II: Comparison of ADMM with NDSNN on CIFAR-10. 
\begin{table} \begin{tabular}{|l|c c c c|c c c|c c c|c c c|} \hline **Dataset** & \multicolumn{3}{c|}{**CIFAR-10**} & \multicolumn{3}{c|}{**CIFAR-100**} & \multicolumn{3}{c|}{**Tiny-ImageNet**} \\ \hline **Sparsity ratio** & 90\% & 95\% & 98\% & 99\% & 90\% & 95\% & 98\% & 99\% & 90\% & 95\% & 98\% & 99\% \\ \hline **VGG-16(Dense)** & \multicolumn{3}{c|}{92.59} & \multicolumn{3}{c|}{69.86} & \multicolumn{3}{c|}{39.45} \\ \hline LTH-SNN [10] & 89.77 & 89.97 & 88.97 & 88.07 & 64.41 & 64.84 & 62.97 & 51.31 & 38.01 & 37.51 & 35.66 & 30.98 \\ \hline SET-SNN [23] & 91.22 & 90.41 & 87.26 & 83.40 & 66.52 & 63.48 & 58.04 & 50.83 & 38.80 & 37.34 & 33.40 & 26.74 \\ RigL-SNN [25] & 91.64 & 90.06 & 87.30 & 84.08 & 66.59 & 63.47 & 58.21 & 52.26 & 38.96 & 37.75 & 32.94 & 28.39 \\ NDSNN (Ours) & **91.84** & **91.31** & **89.62** & **88.13** & **68.07** & **66.73** & **63.51** & **58.07** & **39.12** & **37.77** & **36.23** & **33.84** \\ \hline **ResNet-19(Dense)** & \multicolumn{3}{c|}{91.10} & \multicolumn{3}{c|}{71.94} \\ \hline LTH-SNN [10] & 87.57 & 87.16 & 85.91 & 82.29 & 54.66 & 54.78 & 42.10 & 41.46 & 38.40 & 37.74 & 31.34 & 21.44 \\ \hline SET-SNN [23] & 90.79 & 90.07 & 87.24 & 83.17 & 68.12 & 64.65 & 57.49 & 49.11 & 49.46 & 42.13 & 37.25 & 27.79 \\ RigL-SNN [25] & 90.69 & 90.02 & 87.19 & 83.26 & 67.33 & 65.23 & 56.96 & 47.96 & **49.49** & 40.40 & 37.98 & 24.13 \\ NDSNN (Ours) & **91.13** & **90.47** & **88.61** & **86.30** & **70.08** & **68.95** & **65.48** & **59.61** & 49.25 & **47.45** & **45.09** & **41.96** \\ \hline \end{tabular} \end{table} TABLE I: Test accuracy of sparse VGG-16 and ResNet-19 on CIFAR-10, CIFAR-100, Tiny-ImageNet datasets. The highest test accuracy scores are marked in bold. The LTH-SNN results are our reproduced accuracy using method from [6]. #### Iv-C2 Effects of Smaller Timesteps We compare the accuracy performance of NDSNN and LTH on a smaller timestep (i.e., \(t=2\)) to further validate the effectiveness of proposed method on a more efficient training approach (i.e., the smaller training timesteps, the smaller training cost in time) as shown in Fig. 4. It's observed that NDSNN outperforms LTH on the four experiments (i.e., VGG-16/CIFAR-10, VGG-16/CIFAR-100, ResNet-19/CIAFR-10, ResNet-19/CIAFR-100). On CIFAR-100, NDSNN has 5.55% and 13.34% improvements in accuracy at a sparsity of 99% on VGG-16 and ResNet-19, respectively. ## V Conclusion In this paper, we propose a novel, computationally efficient, sparse training regime, Neurogenesis Dynamics-inspired Spiking Neural Network training acceleration framework, NDSNN. Our proposed method trains a model from scratch using dynamic sparsity. Within our method, we create a drop-and-grow strategy which is biologically motivated by neurogenesis to promote weight reduction. Our method gives higher accuracy and is computationally less demanding than competing approaches. For example, on CIFAR-100, we can achieve an average increase in accuracy of 13.71% over LTH for ResNet-19 across all sparsities. For all datasets, DNSNN has an average of 6.72% accuracy improvement and 59.9% training cost reduction on ResNet-19. Overall, NDSNN could shed light on energy efficient SNN training on edge devices. ## Acknowledgement This work is partially supported by the National Science Foundation (NSF) under Award CCF-2011236, and Award CCF-2006748.
2301.02801
A variety of globally stable periodic orbits in permutation binary neural networks
The permutation binary neural networks are characterized by global permutation connections and local binary connections. Although the parameter space is not large, the networks exhibit various binary periodic orbits. Since analysis of all the periodic orbits is not easy, we focus on globally stable binary periodic orbits such that almost all initial points fall into the orbits. For efficient analysis, we define the standard permutation connection that represents multiple equivalent permutation connections. Applying the brute force attack to 7-dimensional networks, we present the main result: a list of standard permutation connections for all the globally stable periodic orbits. These results will be developed into detailed analysis of the networks and its engineering applications.
Mikito Onuki, Kento Saka, Toshimichi Saito
2023-01-07T08:01:15Z
http://arxiv.org/abs/2301.02801v1
# A variety of globally stable periodic orbits in permutation binary neural networks ###### Abstract. The permutation binary neural networks are characterized by global permutation connections and local binary connections. Although the parameter space is not large, the networks exhibit various binary periodic orbits. Since analysis of all the periodic orbits is not easy, we focus on globally stable binary periodic orbits such that almost all initial points fall into the orbits. For efficient analysis, we define the standard permutation connection that represents multiple equivalent permutation connections. Applying the brute force attack to 7-dimensional networks, we present the main result: a list of standard permutation connections for all the globally stable periodic orbits. These results will be developed into detailed analysis of the networks and its engineering applications. Key words and phrases:Recurrent neural networks, binary neural networks, permutation, binary periodic orbits, stability \({}^{*}\) Corresponding author: Toshimichi Saito ## 1. Introduction Discrete-time recurrent neural networks (DT-RNNs) are analog dynamical systems characterized by real valued connection parameters and nonlinear activation functions (e.g., sigmoid function) [1][2][3][4]. The dynamics is described by autonomous difference equations of real state variables. Depending on the parameters, the DT-RNNs exhibit various periodic orbits, chaos [6], and related bifurcation phenomena. The real/potential applications include associative memories [1], combinatorial optimization problems solvers [5], and time-series approximation/prediction in reservoir computing [7][8][9]. The DT-RNNs are important systems in both basic study of nonlinear dynamics and engineering applications. However, analysis of the dynamics is hard because of huge parameter space and complexity of the nonlinear phenomena. Stability analysis of various periodic orbits is not easy. The three-layer dynamic binary neural networks (DBNNs [10][11]) are digital dynamical systems characterized by ternary valued connection parameters and the signum activation function. The dynamics is described by autonomous difference equations of binary state variables. Since the state space consists of a finite number of binary variables, the DBNNs cannot generate chaos [6]. However, depending on the parameters and initial conditions, the DBNNs can generate various periodic orbits of binary vectors (binary periodic orbits, ab. BPOs). As compared with the DT-RNNs, the DBNNs bring benefits to hardware implementation. An FPGA based hardware prototype and its application to hexapod walking robots can be found in [12]. We have presented a parameter setting method that guarantees storage and stability of desired BPOs [10]. However, as period of a BPO increases, the number of hidden neurons increases: parameter space becomes wider and analysis becomes harder. In the hardware, power consumption becomes larger. In order to realize efficient analysis and synthesis, reduction of the parameter space is inevitable. Simplifying connection parameters of the DBNNs, the permutation binary neural networks (PBNNs [13]) are constructed. The PBNNs are characterized by two kinds of connections. The first one is local binary connection between input and hidden layers. It is defined by a signum-type neuron from three binary inputs to one binary output. The second one is global one-to-one connection between hidden and output layers. It is defined by a permutation operator. 
The parameter space of the PBNNs is much smaller than that of DBNNs. Depending on the permutation connections, the PBNNs generate various BPOs. Co-existence of BPOs is possible and the PBNN exhibits one of the BPOs depending on initial condition. Since analysis of multiple BPOs are not easy, this paper focuses on globally stable binary periodic orbits (GBPOs) such that almost all initial points fall into the GBPOs. As a fundamental concept, we define the standard permutation connection that represents multiple equivalent permutation connections. Applying the brute force attack to all the 7-dimensional PBNNs, we present the main result: a list of the standard permutation connections for all the GBPOs. These results provide basic information to realize more detailed analysis of PBNNs and its applications. Real/potential engineering applications of the GBPOs include time-series approximation/prediction [8][14][15], control signals of switching power converters [16][17][18], control signals of walking robots [12][19], and error correcting codes [20]. The approximate/control signals can be globally stable and robust. As novelty of this paper, it should be noted that this is the first paper of the GBPOs and standard permutation connections. ## 2. Permutation binary neural networks and binary periodic orbits This section introduces the 3-layer dynamic binary neural networks (DBNNs, [10]) and the permutation binary neural networks (PBNNs, [13]). After overview of BPOs, we show the objective problem. ### Dynamics binary neural network The DBNNs are recurrent-type 3-layer networks characterized by ternary connection parameters and signum activation function. The dynamics is described by the following autonomous difference equation of \(N\)-dimensional binary state variables: \[\begin{split}& x_{i}^{t+1}=\operatorname{sgn}\left(\sum_{j=1}^{M}c _{ij}y_{j}^{t}+S_{i}\right),\ y_{j}^{t}=\operatorname{sgn}\left(\sum_{i=1}^{N }w_{ji}x_{i}^{t}-T_{j}\right)\\ &\operatorname{sgn}(x)=\left\{\begin{array}{ll}+1&\text{if }x \geq 0,\quad\ i\in\{1,\cdots,N\}\\ -1&\text{if }x<0,\quad\ j\in\{1,\cdots,M\}\end{array}\right.\end{split} \tag{1}\] where \(x_{i}^{t}\in\{-1,+1\}\equiv\boldsymbol{B}\) is the \(i\)-th binary state variable at discrete time \(t\) and \(y_{j}^{t}\in\boldsymbol{B}\) is the \(j\)-th binary hidden variable. As shown in Fig. 1, the binary variables \(x_{i}^{t}\), \(y_{j}^{t}\), and \(x_{i}^{t+1}\) are located in input, hidden, and output layers, respectively. The \(M\) hidden neurons transform \(x_{i}^{t}\) into \(y_{j}^{t}\) through hidden ternary connections (\(w_{ji}\in\{-1,0,+1\}\)). The \(N\) output neurons transform \(y_{j}^{t}\) into \(x_{i}^{t+1}\) through output ternary connections (\(c_{ij}\in\{-1,0,+1\}\)). The threshold parameters \(S_{i}\) and \(T_{j}\) are integers. The output \(x_{i}^{t+1}\) is fed back to the input layer and the DBNNs generate various BPOs. Ref. [10] gives a theoretical result of parameter condition that guarantees storage and stability of desired BPOs. However, as period of a BPO increases, the number of hidden neurons increases. For example, \(p\) hidden neurons are required for storage of a BPO with period \(p\). As \(p\) increases, the parameter space becomes larger and analysis/implementation becomes harder. 
### Permutation binary neural networks The PBNNs are described by the following autonomous difference equation of \(N\)-dimensional binary state variables: \[\begin{array}{l}x_{i}^{t+1}=y_{\sigma(i)}^{t},\ y_{i}^{t}=\text{sgn}\left(w_ {a}x_{i-1}^{t}+w_{b}x_{i}^{t}+w_{c}x_{i+1}^{t}\right)\\ \sigma=\left(\begin{array}{cccc}1&2&\cdots&N\\ \sigma(1)&\sigma(2)&\cdots&\sigma(N)\end{array}\right)\ i\in\{1,\cdots,N\},N \geq 3\end{array} \tag{2}\] where \(x_{0}^{t}\equiv x_{N}^{t}\) and \(x_{N+1}^{t}\equiv x_{1}^{t}\) for ring-type connection as shown in Fig. 2. As a binary state vector \(\mathbf{x}^{t}\equiv(x_{1}^{t},\cdots,x_{N}^{t})\in\mathbf{B}^{N}\) is input at time \(t\), the \(\mathbf{x}^{t}\) is transformed into the binary hidden state vector \(\mathbf{y}^{t}\equiv(y_{1}^{t},\cdots,y_{N}^{t})\in\mathbf{B}^{N}\) through hidden neurons with local binary connections. All the hidden neurons have the same characteristics: the signum activation function from three binary inputs to one binary output with local binary connection parameters \((w_{a},w_{b},w_{c})\in\mathbf{B}^{3}\). The \(\mathbf{y}^{t}\) is transformed into \(\mathbf{x}^{t+1}\) through one-to-one global permutation connection defined by the permutation \(\sigma\). The output vector \(\mathbf{x}^{t+1}\) is fed back to the input and the PBNNs generate sequences of binary vectors. In comparison with the DBNNs, the hidden connections \(w_{ij}\) are replaced with the local binary connections and the output connections \(c_{ij}\) are replaced with the global permutation connections. As shown in Fig. 3, the local binary connections are identified by connection numbers: \[\begin{array}{l}\text{CN0}:\mathbf{w}_{l}=(-1,-1,-1)\quad\text{CN1}:\mathbf{w}_{l}=( -1,-1,+1)\quad\text{CN2}:\mathbf{w}_{l}=(-1,+1,-1)\\ \text{CN3}:\mathbf{w}_{l}=(-1,+1,+1)\quad\text{CN4}:\mathbf{w}_{l}=(+1,-1,-1)\quad \text{CN5}:\mathbf{w}_{l}=(+1,-1,+1)\\ \text{CN6}:\mathbf{w}_{l}=(+1,+1,-1)\quad\text{CN7}:\mathbf{w}_{l}=(+1,+1,+1)\end{array}\] where \(\mathbf{w}_{l}\equiv(w_{a},w_{b},w_{c})\). Since CN1 (respectively, CN3) coincides with CN4 (respectively, CN6) by replacement \(x_{i}\to x_{N-i+1}\) for \(i\in\{1,\cdots,N\}\), we consider 6 connection numbers without CN4 and CN6 hereafter. The global permutation Figure 1. Dynamic binary neural network (DBNN) Red and blue branches denote positive and negative connections, respectively. connections are identified by \[\text{Permutation ID: }P(\sigma(1)\cdots\sigma(N)).\] Fig. 2 shows examples of 7-dimensional PBNNs for CN1. For identity permutation P(123456), the PBNN exhibits a BPO with period 14. Applying permutation P(2613754), the PBNN exhibits a BPO with longer period 20. In the DBNN, 20 hidden neurons are necessary for period 20. ### Objective problem In order to visualize the dynamics, we have introduced the digital return map (Dmap). The domain \(\mathbf{B}^{N}\) of the PBNNs is equivalent to a set of \(2^{N}\) points \(L_{N}\equiv\{C_{1},\cdots,C_{2^{N}}\}\), i.e., \(C_{1}\equiv(-1,\cdots,-1)\), \(C_{2}\equiv(+1.-1.\cdots,-1)\), \(\cdots\), \(C_{2^{N}}\equiv(+1,\cdots,+1)\). The dynamics of a PBNN can be integrated into \[\text{Dmap: }\mathbf{x}^{t+1}=f(\mathbf{x}^{t}),\ \mathbf{x}^{t}\in\mathbf{B}^{N}\equiv L_{D} \tag{3}\] where an \(N\)-dimensional binary vector \(\mathbf{x}^{t}\) is denoted by a point \(C_{i}\) in the Dmap. Figure 3. 8 local binary connections. Figure 2. Examples of PBNNs and BPOs for CN1, \(N=7\). Red and blue branches denote positive and negative local binary connections, respectively. 
black branches correspond to global permutation connections. White and black squares in spatiotemporal patterns denote \(x_{i}^{t}=+1\) and \(x_{i}^{t}=-1\), respectively. (a) \(P(1234567)\). (b) \(P(2613754)\). **Definition 2.1**.: A point \(\mathbf{z}_{p}\in L_{D}\) is said to be a binary periodic point (BPP) with period \(p\) if \(f^{p}(\mathbf{z}_{p})=\mathbf{z}_{p}\) and \(f(\mathbf{z}_{p})\) to \(f^{p}(\mathbf{z}_{p})\) are all different where \(f^{k}\) is the \(k\)-fold composition of \(f\). A sequence of the BPPs, \(\{f(\mathbf{z}_{p}),\cdots,f^{p}(\mathbf{z}_{p})\}\), is said to be a BPO with period \(p\). A point \(\mathbf{z}_{e}\) is said to be an eventually periodic point (EPP) if \(\mathbf{z}_{e}\) is not a BPP but falls into a BPO, i.e., there exists some positive integer \(l\) such that \(f^{l}(\mathbf{z}_{e})\) is a BPP. The BPO in the Dmap is equivalent to the BPO in spatiotemporal pattern from the PBNN. Fig. 4 shows BPOs in Dmaps corresponding to BPOs in spatiotemporal patterns in Fig. 2. As parameters (CN and Permutation ID) vary, the PBNN exhibits a variety of BPOs. The number of CNs (without CN4 and CN6) is 6 whereas the number of hidden connection parameters \(w_{ij}\) is \(3^{N^{2}}\). The number of Permutation IDs is \(N!\) whereas the number of output connection parameters \(c_{ij}\) is \(3^{N^{2}}\). In addition, the DBNNs have \(2N\) integer threshold parameters \(S_{i}\) and \(T_{j}\). It goes without saying that the PBNNs cannot generate more various BPOs than the DBNNs because the PBNNs are included in the DBNNs. However, the PBNN parameter space is much smaller than the DBNN parameter space. The objective problem is _relationship between parameters (Permutation ID and CN) and existence/stability of BPOs._ ## 3. Globally stable binary periodic orbits Depending on parameters, the PBNNs exhibit various BPOs and multiple BPOs can co-exist for initial state. Since analysis of multiple BPOs is hard, we try to analyze representative BPOs: the globally stable binary periodic orbits (GBPOs). This section defines the GBPOs and related concepts. First, we note two exceptional endpoints in \(\mathbf{B}^{N}\): \[\mathbf{x}_{-}\equiv(-1,\cdots,-1)\in\mathbf{B}^{N},\ \mathbf{x}_{+}\equiv(+1,\cdots,+1) \in\mathbf{B}^{N} \tag{4}\] The two endpoints are either fixed points or a BPO with period 2, becuase \[\begin{array}{l}f(\mathbf{x}_{+})=\mathbf{x}_{+},\ f(\mathbf{x}_{-})=\mathbf{x}_{-}\ \text{if}\ w_{a}+w_{b}+w_{c}\geq+1\\ f(\mathbf{x}_{+})=\mathbf{x}_{-},\ f(\mathbf{x}_{-})=\mathbf{x}_{+}\ \text{if}\ w_{a}+w_{b}+w_{c}\leq-1 \end{array} \tag{5}\] Figure 4. Dmap examples (black points) and BPOs (blue orbits) for CN1, \(N=7\). (a) \(P(1234567)\) (the PBNN is Fig. 2 (a)), BPO with period 14. (b) \(P(2613754)\) (the PBNN is Fig. 2 (b)), BPO with period 20. Hereafter we omit the two endpoints. The GBPO is defined by **Definition 3.1**.: A BPO is said to be a globally stable binary periodic orbit (GBPO) if the BPO is unique (except for \(\mathbf{x}_{-}\) and \(\mathbf{x}_{-}\)) and if all the EPPs fall into the BPO where we assume existence of the EPPs. The number of EPPs plus elements of the GBPO is \(2^{N}-2\). Fig. 4(b) shows a GBPO with period \(20\) in the Dmap. In this \(7\)-dimensional example, \((2^{7}-20-2)\) EPPs fall into the GBPO. As shown in Section 4, depending on the parameters (permutation ID and CN), the \(7\)-dimensional PBNNs exhibit a variety of GBPOs and the number of EPPs is more than \(2^{7}/2\). 
The EPPs represent global stability corresponding to error correction [20] of binary signals. As the number of EPPs increases, the global stability becomes stronger. In the limit case of the M-sequences (e.g., in the linear feedback shift register [21]), the period is \(2^{N}\), no EPP exists and is not stable. Such M-sequences are different category from the GBPOs in this paper. In fundamental viewpoints, uniqueness of the GBPO is convenient to consider existence and stability. Analysis of multiple BPOs is complex. In application viewpoints, the GBPOs are useful as globally stable signal to approximate/predict time-series [15] and to control switching circuits [16][17][18]. For simplicity, we focus on the case where \(N\) is a prime number \(N_{p}\). If an integer \(N\) can be factorized into prime factors, classification of the permutation connections becomes complex. Here, in order to analyze GBPOs, we define several basic concepts. **Definition 3.2**.: Let \(R\) be a shift operator such that \[\begin{split}& R:P_{0}(\sigma_{0}(1)\cdots\sigma_{0}(N_{p}))\to P_{1}( \sigma_{1}(1)\cdots\sigma_{1}(N_{p}))\\ & P_{1}=R(P_{0}),\sigma_{1}(i+1)=\sigma_{0}(i)+1\text{ mod }N_{p},i\in\{1,\cdots,N_{p}\}\end{split} \tag{6}\] where \(\sigma_{1}(N_{p}+1)\equiv\sigma_{1}(1)\). Since the neurons are ring-type connection, the permutation connections \(P_{1}\) and \(P_{0}\) (\(P\) and \(R(P)\)) are equivalent even if the permutation IDs are different. **Definition 3.3**.: Let \(S\) be a set of permutation IDs that give equivalent permutation connections. The set \(S\) is referred to as an equivalent permutation set (EPS). An EPS is represented by a standard permutation ID \(P_{s}(\sigma_{s}(0)\cdots\sigma_{s}(N_{p}))\) that corresponds to the minimum element in the EPS by means of base-\(N_{p}\) number: \[P_{s}(\sigma_{s}(1)\cdots\sigma_{s}(N_{p}))<P_{k}(\sigma_{k}(1)\cdots\sigma_{ k}(N_{p}))\in S,k\neq s\text{ ( base-$N_{p}$ number )}\] Fig. 5 shows an example of standard permutation connection and its equivalent permutation connections for \(N_{p}=7\). In this example, the EPS is \[\begin{split} S=\{P_{s}(1325476),P(7243651),P(2135476),P(7324651),\\ \qquad\qquad\qquad P(2143576),P(7325461),P(2143576)\}\end{split}\] **Definition 3.4**.: A permutation ID \(P_{b}\) is said to be a basic permutation ID if it is a fixed point of the shift operator: \(R(P_{b})=P_{b}\). Since \(R(P_{b})=P_{b}\) iff \(\sigma_{b}(i+1)=\sigma i+1\mod N_{p}\), the number of basic permutation IDs is \(N_{p}\). A basic permutation ID constructs an EPS with one element and is a standard permutation ID. Fig. 6 shows basic permutation connections for \(N_{p}=7\). Then we have **Theorem 3.5**.: _In \(N_{p}\)-dimensional PBNNs, the number of standard permutation IDs (i.e., the number of EPSs) is \((N_{p}-1)!+N_{p}-1\) where \(N_{p}\geq 3\) is a prime number._ (Proof) Except for \(N_{p}\) basic permutations, one standard permutation ID \(P_{s}\) represents \(N_{p}\) equivalent permutation IDs: \[R^{N_{p}}(P_{s})=P_{s},\ R^{k}(P_{s})\neq P_{s}\ \text{for}\ 1\leq k\leq N_{p}-1\] where \(R^{k}(P)=R(R^{k-1}(P_{s}))\) is the \(k\)-fold composition of the shift operator \(R\). If there exists an integer \(l\) (\(2\leq l<N_{p}\)) such that \(R^{l}(P_{s})=P_{s}\), the ring-type connection of \(P_{s}\) is decomposed into the same sub-connections (e.g., 3 sub-connections \(R^{3l}(P_{s})=R^{N_{p}}(P_{s})=P_{s}\) as shown in Fig. 7). However, it is impossible for a prime number \(N_{p}\). 
Therefore, except for the basic permutations, the number of standard permutation IDs is \((N_{p}!-N_{p})/N_{p}\). Adding the \(N_{p}\) basic permutation IDs, the number of standard permutation IDs is \((N!-N_{p})/N_{p}+N_{p}=(N_{p}-1)!+N_{p}-1\) Figure 5. Equivalent permutation connection examples for \(N_{p}=7\). \(P_{s}\): standard permutation connection. \(R\): shift operator. Figure 6. 7 basic permutation connections for \(N_{p}=7\). Figure 7. Permutation connection examples consisting of 3 sub-connections for \(N=6\). ## 4. **Brute force attack to explore GBPOs.** Table 1 shows the number of standard permutation IDs for prime numbers \(N_{p}\) together with the number of full binary connection parameters between hidden and output layers in the DBNNs for \(N=M=N_{p}\). The number of the permutation connections is much smaller than the number of the full binary connections. However, analysis of the GBPOs becomes harder as \(N_{p}\) increases. For convenience, we consider GBPOs in 7-dimensional PBNNs (\(N_{p}=7\)). In the case \(N_{P}=7\), the number of all the standard permutation connections is \((N_{p}-1)!+N_{p}-1=726\), the number of initial points is \(2^{7}\), and the brute force attack is possible. We can clarify the number and period of all the GBPOs precisely. Analysis of the 7-dimensional GBPOs are fundamental to consider higher-dimensional GBPOs and their engineering applications. We explore the 7-dimensional GBPOs as the following. First, as state earlier, objective connection numbers are CN0, CN2, CN2, CN3, CN5, and CN7 (CN1 \(\equiv\) CN4 and CN3 \(\equiv\) CN6). Second, applying the shift operator \(R\), we obtain the 726 standard permutation IDs. Third, applying the brute force attack to each standard permutation ID and CN, we obtain BPOs and their EPPs where we use the BPO calculation algorithm in [22]. If the number of a BPP plus its EPPs is \(2^{7}-2=126\) then the BPO is declared as the GBPO. The period of the GBPO is stored together with its standard permutation ID. In the exploration, it is confirmed that CN0 and CN7 cannot provide GBPO. The CN0 and CN7 are omitted hereafter. Fig. 8 shows typical examples of PBNNs for CN1, CN2, CN3, and CN5 that generate GBPO with period 42, period 14, period 26, and period 14, respectively. Fig. 9 shows the 4 GBPOs as spatiotemporal patterns and Fig. 10 shows the 4 GBPOs in Dmaps. As a criterion of the period, we give **Definition 4.1**.: For identity permutation (\(P_{b}(1234567)\) for \(N_{P}=7\)), the period of the BPO is said to be basic period. If the PBNN generates multiple BPOs, the maximum period is adopted. For CN1, the basic period is 14 as a BPO in Fig. 2 (a) that is a GBPO. We have confirmed that the identity permutation \(P_{b}(1234567)\) cannot provide a GBPO. In Figs. 8 to 10, we can see that, adjusting permutation IDs from the identity permutation \(P_{b}(1234567)\), the PBNNs can generate a variety of BPOs represented by the GBPOs with longer period. As the main result, tables 2 to 5 show a list of standard permutation IDs for all the GBPOs. As stated in Definition 3.3, each standard permutation ID represents 7 equivalent permutation IDs. 
We give an overview of the list for CN1, CN2, CN3, and CN5: \begin{table} \begin{tabular}{|c|c|c|} \hline \(N_{p}\) & \# standard permutation IDs & \# full binary connections \\ \hline 3 & 4 & \(2^{9}\) \\ 5 & 28 & \(2^{25}\) \\ 7 & 726 & \(2^{49}\) \\ 11 & 3628810 & \(2^{121}\) \\ 13 & 479001612 & \(2^{169}\) \\ 17 & 20922789888016 & \(2^{289}\) \\ \hline \end{tabular} \end{table} Table 1. The number of standard permutation connections in PBNN and full binary connections between hidden and output layers in DBNN. * CN1: The basic period is \(14\) for \(P_{b}(1234567)\). The PBNNs generate \(27\) GBPOs. The maximum period is \(42\) for \(P_{s}(1357246)\) as shown in Fig. 9 (a). The number of EPPs is \(126-42\). * CN2: The basic period is \(2\). The PBNNs generate \(56\) GBPOs. The maximum period is \(14\) where the number of EPPs is \(126-14\), e.g. \(P_{s}((1462753)\) as shown in Fig. 9 (b). * CN3: The basic period is \(14\). The PBNNs generate \(28\) GBPOs. The maximum period is \(26\) where the number of EPPs is \(126-26\), e.g. \(P_{s}(1256473)\) as shown in Fig. 9 (c). * CN5: The basic period is \(2\). The PBNNs generate \(62\) GBPOs. The maximum period is \(14\) where the number of EPPs is \(126-14\), e.g. \(P_{s}(1463725)\) as shown in Fig. 9 (d). These tables clarify relation between parameters (permutation ID and CN) and periods of the GBPOs. The number of EPPs is \(126\) minus the period. As the parameters vary, the \(7\)-dimensional PBNNs can generate a variety of GBPOs. These results provide fundamental information to analyze various PBNNs and to synthesize PBNNs with desired GBPOs. Figure 8. PBNN examples (exhibit GBPOs) for \(N_{p}=7\). (a) \(P_{s}(1357246)\), CN1. (b) \(P_{s}(1462753)\), CN2. (c) \(P_{s}(1256473)\), CN3. (d) \(P_{s}(1463725)\), CN5. Figure 9. GBPO examples as spatiotemporal patterns for \(N_{p}=7\). (a) \(P_{s}(1357246)\), CN1, GBPO with period 42. (b) \(P_{s}(1462753)\), CN2, GBPO with period 14. (c) \(P_{s}(1256473)\), CN3, GBPO with period 26. (d) \(P_{s}(1463725)\), CN5, GBPO with period 14. Figure 10. GBPO examples in Dmaps. (a) \(P_{s}(1357246)\), CN1, GBPO with period 42. (b) \(P_{s}(1462753)\), CN2, GBPO with period 14. (c) \(P_{s}(1256473)\), CN3, GBPO with period 26. (d) \(P_{s}(1463725)\), CN5, GBPO with period 14. \begin{table} \begin{tabular}{|c c|c c|c c|} \hline ID & period & ID & period & ID & period \\ \hline [MISSING_PAGE_POST] \\ \hline \end{tabular} \end{table} Table 3. Standard permutation ID and period of GBPO for CN2 \begin{table} \begin{tabular}{|c c|c c|c c|} \hline ID & period & ID & period & ID & period \\ \hline 1256374 & 26 & 1625473 & 6 & 2517436 & 18 \\ 1257436 & 18 & 1627435 & 16 & 2576314 & 12 \\ 1273654 & 14 & 1657234 & 12 & 2613754 & 20 \\ 1352476 & 34 & 1657243 & 4 & 2615374 & 12 \\ **1357246** & **42** & 1672453 & 18 & 2675314 & 8 \\ 1375426 & 26 & 1672543 & 2 & 2751436 & 8 \\ 1526374 & 42 & 1673425 & 18 & 2763154 & 20 \\ 1527643 & 14 & 2175346 & 10 & 3416725 & 8 \\ 1576324 & 24 & 2417356 & 14 & 4671325 & 24 \\ \hline \end{tabular} \end{table} Table 2. 
Standard permutation ID and period of GBPO for CN1 \begin{table} \begin{tabular}{|c c|c c|c c|} \hline ID & period & ID & period & ID & period \\ \hline 1235476 & 14 & 1567243 & 12 & 2761345 & 2 \\ 1246753 & 22 & 1576324 & 24 & 3157426 & 12 \\ **1256473** & **26** & 1652473 & 10 & 3167425 & 8 \\ 1267435 & 26 & 1657243 & 4 & 3176245 & 10 \\ 1362754 & 6 & 2156374 & 22 & 3561724 & 10 \\ 1375462 & 2 & 2417635 & 6 & 3567214 & 24 \\ 1425376 & 10 & 2463175 & 20 & 3612745 & 8 \\ 1463275 & 6 & 2516374 & 2 & 3761425 & 12 \\ 1465273 & 16 & 2516473 & 8 & & \\ 1476235 & 2 & 2641753 & 20 & & \\ \hline \end{tabular} \end{table} Table 4. Standard permutation ID and period of GBPO for CN3 \begin{table} \begin{tabular}{|c c|c c|c c|} \hline ID & period & ID & period & ID & period \\ \hline [MISSING_PAGE_POST] \\ \hline \end{tabular} \end{table} Table 5. Standard permutation ID and period of GBPO for CN5 ## 5 Conclusions Fundamental dynamics of the PBNNs has been studied in this paper. The PBNNs are characterized by global permutation connections and local binary connections. Although the parameter space is much smaller than existing recurrent-type neural networks, the PBNN can exhibit various BPOs. In order to realize precise analysis, we focus on the GBPOs and define standard permutation connections. Applying the brute force attack to 7-dimensional PBNNs, we have presented complete list that clarifies relationship between parameters and periods of GBPOs. Even in the 7-dimensional cases, the PBNNs exhibit a variety of GBPOs. It suggests that higher dimensional PBNNs exhibit a huge variety of BPOs/EPPs. Many problems remain in our future works: * Mechanism to generate the GBPOs. * Classification and stability analysis of various BPOs. Besides the GBPOs, the PBNNs exhibit various BPOs, depending on parameters and initial conditions. * Effective evolutionary algorithms [23][24] for analysis of higher dimensional BPOs where the brute force attack is impossible. * Effective evolutionary algorithms for synthesis of PBNNs with desired BPOs. * Efficient hardware implementation for engineering applications including robust control signals of switching circuits and time-series approximation/prediction. The PBNNs are well suited for FPGA based hardware implementation that transforms the BPOs into electric signals in the applications. ### Declaration of competing interest The authors declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2307.00426
Sparsity-aware generalization theory for deep neural networks
Deep artificial neural networks achieve surprising generalization abilities that remain poorly understood. In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations. By developing a framework that accounts for this reduced effective model size for each input sample, we are able to show fundamental trade-offs between sparsity and generalization. Importantly, our results make no strong assumptions about the degree of sparsity achieved by the model, and it improves over recent norm-based approaches. We illustrate our results numerically, demonstrating non-vacuous bounds when coupled with data-dependent priors in specific settings, even in over-parametrized models.
Ramchandran Muthukumar, Jeremias Sulam
2023-07-01T20:59:05Z
http://arxiv.org/abs/2307.00426v2
# Sparsity-aware generalization theory for deep neural networks ###### Abstract Deep artificial neural networks achieve surprising generalization abilities that remain poorly understood. In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations. By developing a framework that accounts for this reduced effective model size for each input sample, we are able to show fundamental trade-offs between sparsity and generalization. Importantly, our results make no strong assumptions about the degree of sparsity achieved by the model, and it improves over recent norm-based approaches. We illustrate our results numerically, demonstrating non-vacuous bounds when coupled with data-dependent priors in specific settings, even in over-parametrized models. ## 1 Introduction Statistical learning theory seeks to characterize the generalization ability of machine learning models, obtained from finite training data, to unseen test data. The field is by now relatively mature, and several tools exist to provide upper bounds on the generalization error, \(R(h)\). Often the upper bounds depend on the empirical risk, \(\hat{R}(h)\), and different characterizations of complexity of the hypothesis class as well as potentially specific data-dependent properties. The renewed interest in deep artificial neural network models has demonstrated important limitations of existing tools. For example, VC dimension often simply relates to the number of model parameters and is hence insufficient to explain generalization of overparameterized models (Bartlett et al., 2019). Traditional measures based on Rademacher complexity are also often vacuous, as these networks can indeed be trained to fit random noise (Zhang et al., 2017). Margin bounds have been adapted to deep non-linear networks (Bartlett et al., 2017; Golowich et al., 2018; Neyshabur et al., 2015, 2018), albeit still unable to provide practically informative results. An increasing number of studies advocate for non-uniform data-dependent measures to explain generalization in deep learning (Nagarajan and Kolter, 2019; Perez and Louis, 2020; Wei and Ma, 2019). Of particular interest are those that employ the sensitivity of a data-dependent predictor to parameter perturbations - sometimes also referred to as _flatness_(Shawe-Taylor and Williamson, 1997; Neyshabur et al., 2017; Dziugaite and Roy, 2017; Arora et al., 2018; Li et al., 2018; Nagarajan and Kolter, 2019; Wei and Ma, 2019; Sulam et al., 2020; Banerjee et al., 2020). This observation has received some empirical validation as well (Zhang et al., 2017; Keskar et al., 2017; Izmailov et al., 2018; Neyshabur et al., 2019; Jiang* et al., 2020; Foret et al., 2021). Among the theoretical results of this line of work, Arora et al. (2018) study the generalization properties of a _compressed_ network, and Dziugaite and Roy (2017); Neyshabur et al. (2017) study a stochastic perturbed version of the original network. The work in (Wei and Ma, 2019) provides improved bounds on the generalization error of neural networks as measured by a low Jacobian norm with respect to training data, while Wei and Ma (2020) capture the sensitivity of a neural network to perturbations in intermediate layers. 
PAC-Bayesian analysis provides an alternate way of studying generalization by incorporating prior knowledge on a distribution of well-performing predictors in a Bayesian setting (McAllester, 1998; Guedj, 2019; Alquier, 2021). Recent results (Dziugaite and Roy, 2017, 2018; Zhou et al., 2019) have further strengthened the standard PAC-Bayesian analysis by optimizing over the posterior distribution to generate non-vacuous bounds on the expected generalization error of stochastic neural networks. Derandomized versions of PAC-Bayes bounds have also been recently developed (Nagarajan and Kolter, 2019; Banerjee et al., 2020) relying on the sensitivity or _noise resilience_ of an obtained predictor. All of these works are insightful, alas important gaps remain in understanding generalization in non-linear, over-parameterized networks (Perez and Louis, 2020). **Our contributions.** In this work we employ tools of sensitivity analysis and PAC-Bayes bounds to provide generalization guarantees on deep ReLU feed-forward networks. Our key contribution is to make explicit use of the sparsity achieved by these networks across their different layers, reflecting the fact that only sub-networks, of reduced sizes and complexities, are active at every sample. Similar in spirit to the observations in Muthukumar and Sulam (2022), we provide conditions under which the set of active neurons (smaller than the number of total neurons) is stable over suitable distributions of networks, with high-probability. In turn, these results allow us to instantiate recent de-randomized PAC-Bayes bounds (Nagarajan and Kolter, 2019) and obtain new guarantees that do not depend on the global Lipschitz constant, nor are they exponential in depth. Importantly, our results provide data-dependent non-uniform guarantees that are able to leverage the structure (sparsity) obtained on a specific predictor. As we show experimentally, this degree of sparsity - the reduced number of active neurons - need not scale linearly with the width of the model or the number of parameters, thus obtaining bounds that are significantly tighter than known results. We also illustrate our generalization results on MNIST for models of different width and depth, providing non-vacuous bounds in certain settings. **Manuscript organization.** After introducing basic notation, definitions and problem settings, we provide a detailed characterization of stable inactive sets in single-layer feed-forward maps in Section 2. Section 3 presents our main results by generalizing our analysis to multiple layers, introducing appropriate distributions over the hypothesis class and tools from de-randomized PAC-Bayes theory. We demonstrate our bounds numerically in Section 4, and conclude in Section 5. ### Notation And Definitions Sets and spaces are denoted by capital (and often calligraphic) letters, with the exception of the set \([K]=\{1,\ldots,K\}\). For a Banach space \(\mathcal{W}\) embedded with norm \(\left\|\cdot\right\|_{\mathcal{W}}\), we denote by \(\mathcal{B}_{r}^{\mathcal{W}}(\mathbf{W})\), a bounded ball centered around \(\mathbf{W}\) with radius \(r\). Throughout this work, scalar quantities are denoted by lower or upper case (not bold) letters, and vectors with bold lower case letters. Matrices are denoted by bold upper case letters: \(\mathbf{W}\) is a matrix with _rows_\(\mathbf{w}[i]\). We denote by \(\mathcal{P}_{I}\), the index selection operator that restricts input to the coordinates specified in the set \(I\). 
For a vector \(\mathbf{x}\in\mathbb{R}^{d}\) and \(I\subset[d]\), \(\mathcal{P}_{I}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{|I|}\) is defined as \(\mathcal{P}_{I}(\mathbf{x}):=\mathbf{x}[I]\). For a matrix \(\mathbf{W}\in\mathbb{R}^{p\times d}\) and \(I\subset[p]\), \(\mathcal{P}_{I}(\mathbf{W})\in\mathbb{R}^{|I|\times d}\) restricts \(\mathbf{W}\) to the _rows_ specified by \(I\). For row and column index sets \(I\subset[p]\) and \(J\subset[d]\), \(\mathcal{P}_{I,J}(\mathbf{W})\in\mathbb{R}^{|I|\times|J|}\) restricts \(\mathbf{W}\) to the corresponding sub-matrix. Throughout this work, we refer to _sparsity_ as the _number of zeros_ of a vector, so that for \(\mathbf{x}\in\mathbb{R}^{d}\) with degree of sparsity \(s\), \(\left\|\mathbf{x}\right\|_{0}=d-s\). We denote the induced operator norm by \(\left\|\cdot\right\|_{2}\), and the Frobenius norm by \(\left\|\cdot\right\|_{F}\). In addition, we will often use operator norms of reduced matrices induced by sparsity patterns. To this end, the following definition will be used extensively. **Definition 1**: _(Sparse Induced Norms) Let \(\mathbf{W}\in\mathbb{R}^{d_{2}\times d_{1}}\) and \((s_{2},s_{1})\) be sparsity levels such that \(0\leq s_{1}\leq d_{1}-1\) and \(0\leq s_{2}\leq d_{2}-1\). We define the \((s_{2},s_{1})\) sparse induced norm \(\left\|\cdot\right\|_{(s_{2},s_{1})}\) as_ \[\left\|\mathbf{W}\right\|_{(s_{2},s_{1})}:=\max_{|J_{2}|=d_{2}-s_{2}}\ \ \max_{|J_{1}|=d_{1}-s_{1}}\ \ \left\|\mathcal{P}_{J_{2},J_{1}}(\mathbf{W})\right\|_{2}.\] The sparse induced norm \(\left\|\cdot\right\|_{(s_{2},s_{1})}\) measures the induced operator norm of a worst-case sub-matrix. For any two sparsity vectors \((s_{2},s_{1})\preceq(\hat{s}_{2},\hat{s}_{1})\), one can show that \(\left\|\mathbf{W}\right\|_{(\hat{s}_{2},\hat{s}_{1})}\leq\left\|\mathbf{W} \right\|_{(s_{2},s_{1})}\) for any matrix \(\mathbf{W}\) (see Lemma 4). In particular, \[\max_{i,J}\left|\mathbf{W}[i,j]\right|=\left\|\mathbf{W}\right\|_{(d_{2}-1,d _{1}-1)}\leq\left\|\mathbf{W}\right\|_{(s_{2},s_{1})}\leq\left\|\mathbf{W} \right\|_{(0,0)}=\left\|\mathbf{W}\right\|_{2}.\] Thus, the sparse norm interpolates between the maximum absolute entry norm and the operator norm. Frequently in our exposition we rely on the case when \(s_{2}=d_{2}-1\), thus obtaining \(\left\|\mathbf{W}\right\|_{(d_{2}-1,s_{1})}=\max_{i\in[d_{2}]}\max_{|J_{1}|=d_{1} -s_{1}}\left\|\mathcal{P}_{J_{1}}(\mathbf{w}[i])\right\|_{2}\), the maximum norm of any reduced row of matrix \(\mathbf{W}\). Outside of the special cases listed above, computing the sparse norm for a general \((s_{2},s_{1})\) has combinatorial complexity. Instead, a modified version of the babel function (see Tropp et al. (2003)) provides computationally efficient upper bounds1. Footnote 1: The particular definition used in this paper is weaker but more computationally efficient than that introduced in Muthukumar and Sulam (2022). **Definition 2**: _(Reduced Babel Function (Muthukumar and Sulam, 2022)) Let \(\mathbf{W}\in\mathbb{R}^{d_{2}\times d_{1}}\), the reduced babel function at row sparsity level \(s_{2}\in\{0,\ldots,d_{2}-1\}\) and column sparsity level \(s_{1}\in\{0,\ldots,d_{1}-1\}\) is defined as2,_ Footnote 2: When \(s_{2}=d_{2}-1,|J_{2}|=1\), we simply define \(\mu_{(s_{2},s_{1})}(\mathbf{W}):=0\). 
\[\mu_{s_{2},s_{1}}(\mathbf{W}):=\frac{1}{\left\|\mathbf{W}\right\|_{(d_{2}-1,s_{1})}^{2}}\max_{\begin{subarray}{c}J_{2}\subset[d_{2}],\\ |J_{2}|=d_{2}-s_{2}\end{subarray}}\max_{j\in J_{2}}\left[\sum_{\begin{subarray}{c}i\in J_{2},\\ i\neq j\end{subarray}}\max_{\begin{subarray}{c}J_{1}\subseteq[d_{1}],\\ |J_{1}|=d_{1}-s_{1}\end{subarray}}\left|\mathcal{P}_{J_{1}}(\mathbf{w}[i])\mathcal{P}_{J_{1}}(\mathbf{w}[j])^{T}\right|\right].\] For the special case when \(s_{2}=0\), the reduced babel function is equivalent to the babel function from Tropp et al. (2003) on the transposed matrix \(\mathbf{W}^{T}\). We show in Lemma 5 that the sparse norm can be bounded using the reduced babel function and the maximum reduced row norm \(\left\|\cdot\right\|_{(d_{2}-1,s_{1})}\), \[\left\|\mathbf{W}\right\|_{(s_{2},s_{1})}\leq\left\|\mathbf{W}\right\|_{(d_{2}-1,s_{1})}\sqrt{1+\mu_{s_{2},s_{1}}(\mathbf{W})}. \tag{1}\] See Appendix D for a computationally efficient implementation of the reduced babel function. ### Learning Theoretic Framework We consider the task of multi-class classification with a bounded input space \(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{d_{0}}\mid\left\|\mathbf{x}\right\|_{2}\leq\mathsf{M}_{\mathcal{X}}\}\) and labels \(\mathcal{Y}=\{1,\ldots,C\}\) from an unknown distribution \(\mathcal{D}_{\mathcal{Z}}\) over \(\mathcal{Z}:=(\mathcal{X}\times\mathcal{Y})\). We search for a hypothesis in \(\mathcal{H}\subset\{h:\mathcal{X}\rightarrow\mathcal{Y}'\}\) that is an accurate predictor of label \(y\) given input \(\mathbf{x}\). Note that \(\mathcal{Y}\) and \(\mathcal{Y}'\) need not be the same. In this work, we consider \(\mathcal{Y}'=\mathbb{R}^{C}\), and consider the predicted label of the hypothesis \(h\) as \(\hat{y}(\mathbf{x}):=\operatorname*{argmax}_{j}[h(\mathbf{x})]_{j}\)3. The quality of prediction of \(h\) at \(\mathbf{z}=(\mathbf{x},y)\) is informed by the margin defined as \(\rho(h,\mathbf{z}):=\big{(}[h(\mathbf{x})]_{y}-\max_{j\neq y}[h(\mathbf{x})]_{j}\big{)}\). If the margin is positive, then the predicted label is correct. For a threshold hyper-parameter \(\gamma\geq 0\), we define a \(\gamma\)-threshold 0/1 loss \(\ell_{\gamma}\) based on the margin as \(\ell_{\gamma}(h,\mathbf{z}):=1\left\{\rho(h,\mathbf{z})<\gamma\right\}\). Note that \(\ell_{\gamma}\) is a stricter version of the traditional zero-one loss \(\ell_{0}\), since \(\ell_{0}(h,\mathbf{z})\leq\ell_{\gamma}(h,\mathbf{z})\) for all \(\gamma\geq 0\). With these elements, the _population risk_ (also referred to as _generalization error_) of a hypothesis \(R_{\gamma}\) is the expected loss it incurs on a randomly sampled data point, \(R_{\gamma}(h):=\mathbb{E}_{\mathbf{z}\sim\mathcal{D}_{\mathcal{Z}}}\left[\ell_{\gamma}\big{(}h,\mathbf{z}\big{)}\right]\). The goal of supervised learning is to obtain a hypothesis with low population risk \(R_{0}(h)\), the probability of misclassification. While the true distribution \(\mathcal{D}_{\mathcal{Z}}\) is unknown, we assume access to an i.i.d training set \(\mathbf{S}_{T}=\{\mathbf{z}^{(1)},\ldots,\mathbf{z}^{(m)}\}\sim(\mathcal{D}_{\mathcal{Z}})^{m}\) and we seek to minimize the _empirical risk_ \(\hat{R}_{\gamma}\), the average loss incurred on the training sample \(\mathbf{S}_{T}\), i.e. \(\hat{R}_{\gamma}(h):=\frac{1}{m}\sum_{i=1}^{m}\ell_{\gamma}\left(h,\mathbf{z}^{(i)}\right)\). 
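For concreteness, the following minimal NumPy sketch (our own illustration, not code from the paper) computes the margin \(\rho(h,\mathbf{z})\) and the empirical \(\gamma\)-threshold risk \(\hat{R}_{\gamma}\) for a batch of network outputs; the ordering \(\ell_{0}\leq\ell_{\gamma}\) noted above shows up as the reported risk being non-decreasing in \(\gamma\).

```python
import numpy as np

def margin(scores, y):
    """rho(h, z) = [h(x)]_y - max_{j != y} [h(x)]_j for each row of scores.

    scores: (m, C) array of network outputs; y: (m,) integer labels."""
    m = scores.shape[0]
    true_class = scores[np.arange(m), y]
    others = scores.copy()
    others[np.arange(m), y] = -np.inf          # exclude the true class
    return true_class - others.max(axis=1)

def empirical_threshold_risk(scores, y, gamma=0.0):
    """Empirical gamma-threshold 0/1 risk: mean of 1{rho(h, z) < gamma}."""
    return float(np.mean(margin(scores, y) < gamma))

# Example with m = 3 samples and C = 3 classes.
scores = np.array([[2.0, 0.5, 0.1], [0.2, 0.9, 0.8], [1.0, 1.0, 0.0]])
y = np.array([0, 1, 0])
print(empirical_threshold_risk(scores, y, gamma=0.0))   # plain 0/1 risk
print(empirical_threshold_risk(scores, y, gamma=0.5))   # stricter threshold
```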
We shall later see that for any predictor, \(R_{0}(h)\) can be upper bounded using the stricter empirical risk \(\hat{R}_{\gamma}(h)\) for an appropriately chosen \(\gamma>0\). Footnote 3: The argmax here is assumed to break ties deterministically. In this work, we study the hypothesis class \(\mathcal{H}\) containing feed-forward neural networks with \(K\) hidden layers. Each hypothesis \(h\in\mathcal{H}\) is identified with its weights \(\{\mathbf{W}_{k}\}_{k=1}^{K+1}\), and is a sequence of \(K\) linear maps \(\mathbf{W}_{k}\in\mathbb{R}^{d_{k}\times d_{k-1}}\) composed with a nonlinear activation function \(\sigma(\cdot)\) and a final linear map \(\mathbf{W}_{K+1}\in\mathbb{R}^{C\times d_{K}}\), \[h(\mathbf{x}_{0}):=\mathbf{W}_{K+1}\sigma\left(\mathbf{W}_{K}\sigma\left(\mathbf{W}_{K-1}\cdots\sigma\left(\mathbf{W}_{1}\mathbf{x}_{0}\right)\cdots\right)\right).\] We exclude bias from our definitions of feed-forward layers for simplicity4. We denote by \(\mathbf{x}_{k}\) the \(k^{th}\) hidden layer representation of network \(h\) at input \(\mathbf{x}_{0}\), so that \(\mathbf{x}_{k}:=\sigma\left(\mathbf{W}_{k}\mathbf{x}_{k-1}\right)\ \forall 1\leq k\leq K\), and \(h(\mathbf{x}):=\mathbf{W}_{K+1}\mathbf{x}_{K}\). Throughout this work, the activation function is assumed to be the Rectifying Linear Unit, or ReLU, defined by \(\sigma(x)=\max\{x,0\}\), acting entrywise on an input vector. ## 2 Warm Up: Sparsity In Feed-Forward Maps As a precursor to our sensitivity analysis for multi-layer feed-forward networks, we first consider a generic feed-forward map \(\Phi(\mathbf{x}):=\sigma(\mathbf{W}\mathbf{x})\). A naive bound on the norm of the function output is \(\left\|\Phi(\mathbf{x})\right\|_{2}\leq\left\|\mathbf{W}\right\|_{2}\left\|\mathbf{x}\right\|_{2}\), but this ignores the sparsity of the output of the feed-forward map (due to the ReLU). Suppose there exists a set \(I\) of inactive indices such that \(\mathcal{P}_{I}(\Phi(\mathbf{x}))=\mathbf{0}\), i.e. for all \(i\in I\), \(\mathbf{w}[i]\cdot\mathbf{x}\leq 0\). In the presence of such an index set, clearly \(\left\|\Phi(\mathbf{x})\right\|_{2}\leq\left\|\mathcal{P}_{I^{c}}(\mathbf{W})\right\|_{2}\left\|\mathbf{x}\right\|_{2}\)5. Thus, estimates of the effective size of the feed-forward output, and other notions such as sensitivity to parameter perturbations, can be refined by accounting for the sparsity of activation patterns. Note that the inactive index set \(I\) varies with each input, \(\mathbf{x}\), and with the parameters of the predictor, \(\mathbf{W}\). Footnote 5: \(I^{c}\) is the complement of the index set \(I\), also referred to as \(J\) when clear from context. For some \(\zeta_{0},\xi_{1},\eta_{1}>0\) and sparsity levels \(s_{1},s_{0}\), let \(\mathcal{X}_{0}=\left\{\mathbf{x}\in\mathbb{R}^{d_{0}}\mid\|\mathbf{x}\|_{2}\leq\zeta_{0},\;\|\mathbf{x}\|_{0}\leq d_{0}-s_{0}\right\}\) denote a bounded sparse input domain and let \(\mathcal{W}_{1}:=\left\{\mathbf{W}\in\mathbb{R}^{d_{1}\times d_{0}}\mid\|\mathbf{W}\|_{(d_{1}-1,s_{0})}\leq\xi_{1},\;\mu_{s_{1},s_{0}}(\mathbf{W})\leq\eta_{1}\right\}\) denote a parameter space. We now define a radius function that measures the amount of relative perturbation within which a certain inactive index set is stable. 
**Definition 3**: _(Sparse local radius6) For any weight \(\mathbf{W}\in\mathbb{R}^{d_{1}\times d_{0}}\), input \(\mathbf{x}\in\mathbb{R}^{d_{0}}\) and sparsity level \(1\leq s_{1}\leq d_{1}\), we define a sparse local radius and a sparse local index set as_ Footnote 6: The definition here is inspired by Muthukumar and Sulam (2022) but stronger. \[r_{\text{sparse}}(\mathbf{W},\mathbf{x},s_{1}):=\sigma\left(\text{\sc sort}\left(-\frac{\mathbf{W}\cdot\mathbf{x}}{\xi_{1}\zeta_{0}},\;s_{1}\right)\right),\quad I(\mathbf{W},\mathbf{x},s_{1}):=\text{\sc Top-k}\left(-\frac{\mathbf{W}\cdot\mathbf{x}}{\xi_{1}\zeta_{0}},s_{1}\right). \tag{2}\] _Here, \(\text{\sc Top-k}(\mathbf{u},j)\) is the index set of the top \(j\) entries in \(\mathbf{u}\), and \(\text{\sc sort}(\mathbf{u},j)\) is its \(j^{th}\) largest entry._ We note that when evaluated on a weight \(\mathbf{W}\in\mathcal{W}_{1}\) and input \(\mathbf{x}\in\mathcal{X}_{0}\), for all sparsity levels the sparse local radius \(r_{\text{sparse}}(\mathbf{W},\mathbf{x},s_{1})\in[0,1]\). We denote the sparse local index set as \(I\) when clear from the context. We now analyze the stability of the sparse local index set and the resulting reduced sensitivity of model output. For brevity, we must defer all proofs to the appendix. **Lemma 1**: _Let \(\epsilon_{0}\in[0,1]\) be a relative input corruption level and let \(\epsilon_{1}\in[0,1]\) be the relative weight corruption. For the feed-forward map \(\Phi\) with weight \(\mathbf{W}\in\mathcal{W}_{1}\) and input \(\mathbf{x}\in\mathcal{X}_{0}\), the following statements hold for any output sparsity level \(1\leq s_{1}\leq d_{1}\),_ 1. _Existence of an inactive index set and bounded outputs:_ _If_ \(r_{\text{sparse}}(\mathbf{W},\mathbf{x},s_{1})>0\)_, then the index set_ \(I(\mathbf{W},\mathbf{x},s_{1})\) _is inactive for_ \(\Phi(\mathbf{x})\)_. Moreover,_ \(\left\|\Phi(\mathbf{x})\right\|_{2}\leq\xi_{1}\sqrt{1+\eta_{1}}\cdot\zeta_{0}\)_._ 2. _Stability of an inactive index set to input and parameter perturbations:_ _Suppose_ \(\hat{\mathbf{x}}\) _and_ \(\hat{\mathbf{W}}\) _are perturbed inputs and weights respectively such that,_ \(\left\|\hat{\mathbf{x}}-\mathbf{x}\right\|_{0}\leq d_{0}-s_{0}\) _and,_ \[\frac{\left\|\hat{\mathbf{x}}-\mathbf{x}\right\|_{2}}{\zeta_{0}}\leq\epsilon_{0}\;\text{ and }\;\max\left\{\frac{\left\|\hat{\mathbf{W}}-\mathbf{W}\right\|_{(d_{1}-1,s_{0})}}{\xi_{1}},\frac{\left\|\hat{\mathbf{W}}-\mathbf{W}\right\|_{(s_{1},s_{0})}}{\xi_{1}\sqrt{1+\eta_{1}}}\right\}\leq\epsilon_{1},\] _and denote_ \(\hat{\Phi}(\mathbf{x})=\sigma(\hat{\mathbf{W}}\mathbf{x})\)_. If_ \(r_{\text{sparse}}(\mathbf{W},\mathbf{x},s_{1})\geq-1+(1+\epsilon_{0})(1+\epsilon_{1})\)_, then the index set_ \(I(\mathbf{W},\mathbf{x},s_{1})\) _is inactive and stable to perturbations, i.e._7 \(\mathcal{P}_{I}(\Phi(\mathbf{x}))=\mathcal{P}_{I}(\hat{\Phi}(\hat{\mathbf{x}}))=\mathbf{0}\)_. Moreover,_ \(\left\|\hat{\Phi}(\hat{\mathbf{x}})-\Phi(\mathbf{x})\right\|_{2}\leq(-1+(1+\epsilon_{0})(1+\epsilon_{1}))\cdot\xi_{1}\sqrt{1+\eta_{1}}\cdot\zeta_{0}\)_._ Footnote 7: For notational ease we suppress arguments and let \(I=I(\mathbf{W},\mathbf{x},s_{1})\). 3. 
_Stability of sparse local radius_: _For a perturbed input_ \(\hat{\mathbf{x}}\) _such that_ \(\left\|\hat{\mathbf{x}}-\mathbf{x}\right\|_{0}\leq d_{0}-s_{0}\)_, and perturbed weight_ \(\hat{\mathbf{W}}\)_, the difference between sparse local radius is bounded_ \[\left|r_{\text{sparse}}(\hat{\mathbf{W}},\hat{\mathbf{x}},s_{1})-r_{\text{ sparse}}(\mathbf{W},\mathbf{x},s_{1})\right|\leq-1+\left(1+\frac{\left\|\hat{ \mathbf{x}}-\mathbf{x}\right\|_{2}}{\zeta_{0}}\right)\left(1+\frac{\left\| \hat{\mathbf{W}}-\mathbf{W}\right\|_{(d_{1}-1,s_{0})}}{\xi_{1}}\right).\] A key takeaway of this Lemma (see Appendix A.1.1 for its proof) is that one can obtain tighter bounds, on both the size of the network output as well as its sensitivity to corruptions, if the corresponding sparse local radius is sufficiently large. The results above quantify these notions for a given sample. In the next section, we will leverage this characterization within the framework of PAC-Bayes analysis to provide a generalization bound for feed-forward networks. ## 3 A Sparsity-Aware Generalization Theory We shall construct non-uniform data-dependent generalization bounds for feed-forward networks based on a local sensitivity analysis of deep ReLU networks, employing the intuition from the previous section. To do so, we will first study the size of the layer outputs using Definition 2, then measure the sensitivity in layer outputs to parameter perturbations using Lemma 1 across multiple layers, and finally leverage a derandomized PAC-Bayes result from Nagarajan and Kolter (2019b) (see Appendix C.2). Before embarking on the analysis, we note the following convenient property of the margin for any two predictors \(h,\hat{h}\) from (Bartlett et al., 2017, Lemma A.3), \[\left|\left(h(\mathbf{x})_{y}-\max_{j\neq y}h(\mathbf{x})_{j}\right)-\left( \hat{h}(\mathbf{x})_{y}-\max_{j\neq y}\hat{h}(\mathbf{x})_{j}\right)\right| \leq 2\left\|\hat{h}(\mathbf{x})-h(\mathbf{x})\right\|_{\infty}.\] Hence, quantifying the sensitivity of the predictor outputs will inform the sensitivity of the loss. Similar to other works (Nagarajan and Kolter, 2019b; Banerjee et al., 2020), our generalization bound will be derived by studying the sensitivity of neural networks upon perturbations to the layer weights. For the entirety of this section, we fix a set of _base hyper-parameters_ that determine a specific class of neural networks, the variance of a posterior distribution over networks, and the resolution (via a sparsity vector) at which the generalization is measured - see Table 1 for reference. We denote by \(\mathbf{s}=\{s_{1},\ldots,s_{K}\}\) a vector of layer-wise sparsity levels, which reflects the inductive bias of the learner on the potential degree of sparsity of a trained network on the training data. Next we define two hyper-parameters, \(\boldsymbol{\xi}:=\{\xi_{1},\ldots,\xi_{K+1}\}\) where \(\xi_{k}>0\) bounds the sparse norm \(\left\|\cdot\right\|_{(d_{k}-1,s_{k-1})}\) of the layer weights and \(\boldsymbol{\eta}:=\{\eta_{1},\ldots,\eta_{K}\}\) where \(\eta_{k}>0\) bounds the reduced babel function \(\mu_{s_{k},s_{k-1}}(\cdot)\) of the layer weights. Finally, we let \(\boldsymbol{\epsilon}:=\{\epsilon_{1},\ldots,\epsilon_{K+1}\}\) with \(\epsilon_{k}>0\) bound the amount of relative perturbation in the weights. This section treats the quartet \((\mathbf{s},\boldsymbol{\xi},\boldsymbol{\eta},\boldsymbol{\epsilon})\) as constants8, while in the next section we shall discuss appropriate values for these hyper-parameters. 
Footnote 8: Unless otherwise specified we let \(s_{0}=s_{K+1}=0\) and \(\epsilon_{0}=0\). **Definition 4**: _(Norm bounded feed-forward networks) We define below the parameter domain \(\mathcal{W}_{k}\) and a class of feed-forward networks \(\mathcal{H}\) with \(K\) hidden layers,_ \[\mathcal{W}_{k}:=\left\{\mathbf{W}\in\mathbb{R}^{d_{k}\times d_{k-1}}\ |\ \left\|\mathbf{W}\right\|_{(d_{k}-1,s_{k-1})}\leq\xi_{k},\quad\mu_{s_{k},s_{k-1}}(\mathbf{W})\leq\eta_{k}\right\},\;\forall\;k\in[K],\] \[\mathcal{H}:=\left\{h(\cdot):=\mathbf{W}_{K+1}\sigma\left(\mathbf{W}_{K}\cdots\sigma\left(\mathbf{W}_{1}\cdot\right)\right)\ |\ \left\|\mathbf{W}_{K+1}\right\|_{(C-1,s_{K})}\leq\xi_{K+1},\ \mathbf{W}_{k}\in\mathcal{W}_{k},\;\forall\;k\in[K]\right\}.\] \begin{table} \begin{tabular}{|c|c|} \hline \(\mathbf{s}=\{s_{1},\ldots,s_{K}\}\), \(\ 0\leq s_{k}\leq d_{k}-1\) & Layer-wise sparsity vector \\ \hline \(\boldsymbol{\xi}=\{\xi_{1},\ldots,\xi_{K+1}\}\), \(\ 0\leq\xi_{k}\) & Layer-wise bound on \(\left\|\cdot\right\|_{(d_{k}-1,s_{k-1})}\) \\ \hline \(\boldsymbol{\eta}=\{\eta_{1},\ldots,\eta_{K}\}\), \(\ 0\leq\eta_{k}\) & Layer-wise bound on \(\mu_{s_{k},s_{k-1}}(\cdot)\) \\ \hline \(\boldsymbol{\epsilon}=\{\epsilon_{1},\ldots,\epsilon_{K+1}\}\), \(\ 0\leq\epsilon_{k}\) & Layer-wise bound on relative perturbation \\ \hline \end{tabular} \end{table} Table 1: Independent base hyper-parameters To measure the local sensitivity of the network outputs, it will be useful to formalize a notion of local neighborhood for networks. **Definition 5**: _(Local Neighbourhood) Given \(h\in\mathcal{H}\), define \(\mathcal{B}(h,\mathbf{\epsilon})\) to be the local neighbourhood around \(h\) containing perturbed networks \(\hat{h}\) with weights \(\{\hat{\mathbf{W}}_{j}\}_{k=1}^{K+1}\) such that at each layer \(k\)9,_ Footnote 9: For the last layer we only require \(\left\|\hat{\mathbf{W}}_{K+1}-\mathbf{W}_{K+1}\right\|_{(C-1,s_{K})}\leq\epsilon_{K+1}\cdot\xi_{K+1}\). \[\max\left\{\frac{\left\|\hat{\mathbf{W}}_{k}-\mathbf{W}_{k}\right\|_{(s_{k},s_{k-1})}}{\xi_{k}\sqrt{1+\eta_{k}}},\frac{\left\|\hat{\mathbf{W}}_{k}-\mathbf{W}_{k}\right\|_{(d_{k}-1,s_{k-1})}}{\xi_{k}}\right\}\leq\epsilon_{k}.\] It will be useful to understand the probability that \(\hat{h}\in\mathcal{B}(h,\mathbf{\epsilon})\) when the perturbations to each layer weight are random, in particular from Gaussian distributions over feed-forward networks: **Definition 6**: _(Entrywise Gaussian) Let \(h\in\mathcal{H}\) be any network with \(K+1\) layers, and let \(\mathbf{\sigma}^{2}:=\{\sigma_{1}^{2},\ldots,\sigma_{K+1}^{2}\}\) be a layer-wise variance. We denote by \(\mathcal{N}(h,\mathbf{\sigma}^{2})\) a distribution with mean network \(h\) such that for any \(\hat{h}\sim\mathcal{N}(h,\mathbf{\sigma}^{2})\) with layer weights \(\hat{\mathbf{W}}_{k}\), each entry \(\hat{\mathbf{W}}_{k}[i,j]\sim\mathcal{N}(\mathbf{W}_{k}[i,j],\sigma_{k}^{2})\)._ ### Sensitivity Of Network Output Given a predictor \(h\in\mathcal{H}\), note that the size of a network output for any given input is bounded by \(\left\|h(\mathbf{x}_{0})\right\|_{2}\leq\prod_{k=1}^{K+1}\left\|\mathbf{W}_{k}\right\|_{2}\mathsf{M}_{\mathcal{X}}\), which ignores the sparsity of the intermediate layers. We will now generalize the result in Lemma 1 by making use of the inactive index sets at every layer \(I_{k}\), such that \(\mathcal{P}_{I_{k}}(\mathbf{x}_{k})=\mathbf{0}\), obtaining a tighter (input dependent) characterization of sensitivity to perturbations of the network. 
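To make this layer-wise bookkeeping concrete, here is a minimal PyTorch sketch (our own illustration; the bound lists `xi` and `zeta` and the sparsity levels `s` are assumed to be given, with zero-based indexing so that `zeta[k]` bounds the input to layer `k+1`). It records, per hidden layer, a candidate inactive index set and the normalized radius that is formalized in Definition 7 below.

```python
import torch

def forward_with_sparsity(weights, x0, xi, zeta, s):
    """ReLU forward pass that records, per hidden layer, the s[k] most
    strongly inactive units and the clipped normalized pre-activation
    value playing the role of the layer-wise sparse local radius."""
    x, radii, inactive = x0, [], []
    for k, W in enumerate(weights[:-1]):          # hidden layers only
        pre = W @ x                               # pre-activations W_k x_{k-1}
        scaled = -pre / (xi[k] * zeta[k])         # negated, normalized
        top = torch.topk(scaled, s[k])            # s[k] largest entries
        inactive.append(top.indices)              # candidate inactive set I_k
        radii.append(torch.clamp(top.values[-1], min=0.0))  # s[k]-th largest
        x = torch.relu(pre)
    return weights[-1] @ x, radii, inactive

# Toy usage: two hidden layers (widths 8 and 6), C = 3 outputs.
torch.manual_seed(0)
ws = [torch.randn(8, 5), torch.randn(6, 8), torch.randn(3, 6)]
out, radii, inactive = forward_with_sparsity(
    ws, torch.randn(5), xi=[3.0, 3.0], zeta=[1.0, 5.0], s=[3, 2])
```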
For notational convenience, we define two additional derived quantities: we let \(\zeta_{0}:=\mathsf{M}_{\mathcal{X}}\) and let \(\zeta_{k}:=\xi_{k}\sqrt{1+\eta_{k}}\cdot\zeta_{k-1}=\mathsf{M}_{\mathcal{X}}\prod_{n=1}^{k}\xi_{n}\sqrt{1+\eta_{n}}\) denote a bound on the layer-wise size of the outputs. At the final layer, we let \(\zeta_{K+1}:=\xi_{K+1}\zeta_{K}\) as a bound on the network output. Additionally, we define \(\gamma_{k}:=-1+\prod_{n=1}^{k}(1+\epsilon_{n})\) as a threshold on the sparse local radius evaluated at each layer - see Table 2 for a summary. In the last layer, we let this value \(\gamma_{K+1}\) represent the desired margin. For networks \(\hat{h}\) with perturbed weights \(\hat{\mathbf{W}}\), we denote by \(\hat{\mathbf{x}}_{k}:=\sigma\left(\hat{\mathbf{W}}_{k}\hat{\mathbf{x}}_{k-1}\right)\) the perturbed layer representation corresponding to input \(\mathbf{x}_{0}\). **Definition 7**: _(Layer-wise sparse local radius) Let \(h\) be any feed-forward network with weights \(\mathbf{W}_{k}\in\mathbb{R}^{d_{k}\times d_{k-1}}\), and let \(\mathbf{x}_{0}\in\mathbb{R}^{d_{0}}\). We define a layer-wise sparse local radius and a layer-wise inactive index set as below,_ \[I_{k}(h,\mathbf{x}_{0}):=\text{Top-k}\left(-\frac{\mathbf{W}_{k}\cdot\mathbf{x}_{k-1}}{\xi_{k}\zeta_{k-1}},s_{k}\right),\quad r_{k}(h,\mathbf{x}_{0}):=\sigma\left(\text{sort}\left(-\frac{\mathbf{W}_{k}\cdot\mathbf{x}_{k-1}}{\xi_{k}\zeta_{k-1}},\ s_{k}\right)\right).\] Definition 7 now allows us, by employing Lemma 1, to generalize our previous observations to entire network models, as we now show. **Theorem 1**: _Let \(h\in\mathcal{H}\) and suppose that at each layer \(k\) the layer-wise sparse local radius is nontrivial, i.e. \(\forall\ k\in[K],\ \ r_{k}(h,\mathbf{x}_{0})>0\). Then the index sets \(I_{k}(h,\mathbf{x}_{0})\) are inactive at layer \(k\) and the size of the hidden layer representations and the network output are bounded as follows,_ \[\forall\ k\in[K],\quad\left\|\mathbf{x}_{k}\right\|_{2}\leq\zeta_{k},\quad\text{and}\quad\left\|h(\mathbf{x}_{0})\right\|_{\infty}\leq\zeta_{K+1}. \tag{3}\] \begin{table} \begin{tabular}{|c|c|} \hline \(\zeta_{k}:=\xi_{k}\sqrt{1+\eta_{k}}\cdot\zeta_{k-1},\ \ \forall\ k\in[K]\) & Bound on norm of layer outputs \\ \hline \(\zeta_{K+1}:=\xi_{K+1}\zeta_{K}\) & Bound on norm of network output \\ \hline \(\gamma_{k}:=-1+\prod_{n=1}^{k}(1+\epsilon_{n}),\ \ \forall\ k\in[K+1]\) & Layer-wise threshold for local radius \\ \hline \(r_{k}(h,\mathbf{z}):=\sigma\left(\text{sort}\left(-\left[\frac{\mathbf{w}_{k}[i]\cdot\mathbf{x}_{k-1}}{\xi_{k}\zeta_{k-1}}\right]_{i=1}^{d_{k}},\ s_{k}\right)\right)\) & Layer-wise sparse local radius \\ \hline \end{tabular} \end{table} Table 2: Layer-wise bounds and thresholds. In a similar vein, we can characterize the sensitivity of the network to parameter perturbations. **Theorem 2**: _Let \(h\in\mathcal{H}\) and let \(\hat{h}\in\mathcal{B}(h,\boldsymbol{\epsilon})\) be a nearby perturbed predictor with weights \(\{\hat{\mathbf{W}}_{k}\}\). If each layer-wise sparse local radius is sufficiently large, i.e. 
\(\forall\ k\in[K],\ r_{k}(h,\mathbf{x}_{0})\geq\gamma_{k}\), then the index sets \(I_{k}(h,\mathbf{x}_{0})\) are inactive for the perturbed layer representations \(\hat{\mathbf{x}}_{k}\) and the distances between the layer representations and between the network outputs are bounded as follows,_ \[\forall\ k\in[K],\quad\left\|\hat{\mathbf{x}}_{k}-\mathbf{x}_{k}\right\|_{2}\leq\zeta_{k}\cdot\gamma_{k},\quad\text{and}\quad\left\|\hat{h}(\mathbf{x}_{0})-h(\mathbf{x}_{0})\right\|_{\infty}\leq\zeta_{K+1}\cdot\gamma_{K+1}. \tag{4}\] The proofs of the above results can be found in Appendices A.1.2 and A.1.3, respectively. ### Sparsity-Aware Generalization We are now ready to state our main theorem on generalization of feed-forward networks that leverages improved sensitivity of network outputs due to stable inactive index sets. **Theorem 3**: _Let \(\mathcal{P}\) be any prior distribution over depth-\((K+1)\) feed-forward networks chosen independently of the training sample. Let \(h\in\mathcal{H}\) be any feed-forward network (possibly trained on sample data), with \(\mathcal{H}\) determined by fixed base hyper-parameters \((\mathbf{s},\boldsymbol{\xi},\boldsymbol{\eta},\boldsymbol{\epsilon})\), and denote the sparse loss by \(\ell_{\mathrm{sparse}}(h,\mathbf{x})=\mathbb{I}\{\exists\,k,\ r_{k}(h,\mathbf{x})<3\gamma_{k}\}\). With probability at least \((1-\delta)\) over the choice of i.i.d training sample \(\mathbf{S}_{T}\) of size \(m\), the generalization error of \(h\) is bounded as follows,_ \[R_{0}(h)\leq\hat{R}_{4\zeta_{K+1}\gamma_{K+1}}(h)+\frac{2K}{m}\sum_{\mathbf{x}^{(i)}\in\mathbf{S}_{T}}\ell_{\mathrm{sparse}}(h,\mathbf{x}^{(i)})+\tilde{\mathcal{O}}\left(\sqrt{\frac{\mathrm{KL}\left(\mathcal{N}\left(h,\boldsymbol{\sigma}_{\mathrm{sparse}}^{2}\right)\ ||\ \mathcal{P}\right)}{m}}\right)\] _where \(\boldsymbol{\sigma}_{\mathrm{sparse}}=\{\sigma_{1},\ldots,\sigma_{K+1}\}\) is defined by \(\sigma_{k}:=\epsilon_{k}\cdot\frac{\xi_{k}}{4\sqrt{2d_{\mathrm{eff}}+\log\left(2(K+1)\sqrt{m}\right)}}\), and where \(d_{\mathrm{eff}}:=\max_{k\in[K]}\frac{(d_{k}-s_{k})\log(d_{k})+(d_{k-1}-s_{k-1})\log(d_{k-1})}{2}\) is an effective layer width10._ Footnote 10: We note the effective width is at worst \(\max_{k}d_{k}\log(d_{k})\) and could be larger than the actual width depending on the sparsity vector \(\mathbf{s}\). In contrast, for large \(\mathbf{s}\), \(d_{\mathrm{eff}}\ll\max_{k}d_{k}\). The notation \(\tilde{\mathcal{O}}\) above hides logarithmic factors (see Appendix A.3 for a complete version of the bound). This result bounds the generalization error of a trained predictor as a function of three terms. Besides the empirical risk with margin threshold \(4\zeta_{K+1}\gamma_{K+1}\), the risk is upper bounded by an empirical sparse loss that measures the proportion of samples (in the training data) that do not achieve a sufficiently large sparse radius at any layer. Lastly, as is characteristic in PAC-Bayes bounds, we see a term that depends on the distance between the prior and posterior distributions, the latter centered at the obtained (data-dependent) predictor. The posterior variance \(\boldsymbol{\sigma}_{\mathrm{sparse}}^{2}\) is determined entirely by the base hyper-parameters. Finally, note that the result above holds for any prior distribution \(\mathcal{P}\). Before moving on, we comment on the specific factors influencing this bound. **Sparsity.** The result above depends on sparsity through the choice of the parameter \(\mathbf{s}\). 
One can always instantiate the above result for \(\mathbf{s}=\mathbf{0}\), corresponding to a global sensitivity analysis. At this trivial choice, the sparsity loss vanishes (because the sparse radius is infinite) and the bound is equivalent to an improved (derandomized) version of the results by Neyshabur et al. (2018). The formulation in Theorem 3 enables a continuum of choices (via hyper-parameters) suited to the trained predictor and sample data. A larger degree of sparsity at every layer results in a tighter bound since the upper bounds to the sensitivity of the predictor are reduced (as only reduced matrices are involved in its computation). In turn, this reduced sensitivity leads to a lower empirical margin risk by way of a lower threshold \(4\zeta_{K+1}\gamma_{K+1}\). Furthermore, the effective width - determining the scale of the posterior - is at worst \(\max_{k}d_{k}\log(d_{k})\) (for \(\mathbf{s}=0\)), but for large \(\mathbf{s}\), \(d_{\mathrm{eff}}\ll\max_{k}d_{k}\). **Sensitivity.** Standard sensitivity-based generalization bounds generally depend directly on the global Lipschitz constant that scales as \(\mathcal{O}(\prod_{k=1}^{K}\|\mathbf{W}_{k}\|_{2})\). For even moderate-size models, such dependence can render the bounds vacuous. Further recent studies suggest that the layer norms can even increase with the size of the training sets, showing that, even for under-parameterized models, generalization bounds may be vacuous (Nagarajan and Kolter, 2019). Our generalization bound does _not_ scale with the reduced Lipschitz constant \(\zeta_{K+1}\): while larger (reduced) Lipschitz constants can render the empirical sparse loss closer to its maximum value of \(1\), the bound remains controlled due to our choice of modelling _relative_ perturbations of model parameters. **Dependence On Depth.** Unlike recent results (Bartlett et al., 2017; Neyshabur et al., 2015, 2018, 2019), our bound is not exponential in depth. However, the sensitivity bounds \(\zeta_{k}\) and radius thresholds \(\gamma_{k}\) are themselves exponential in depth. While the empirical risk and sparse loss terms in the generalization bound depend on \(\zeta_{k},\gamma_{k}\), they are bounded in \([0,1]\). In turn, by choosing the prior to be a Gaussian \(P=\mathcal{N}(h_{\mathrm{prior}},\mathbf{\sigma}_{\mathrm{sparse}}^{2})\), the KL-divergence term can be decomposed into layer-wise contributions, \(\mathrm{KL}\left(\mathcal{N}\left(h,\mathbf{\sigma}_{\mathrm{sparse}}^{2}\right)\;||\;\mathcal{N}(h_{\mathrm{prior}},\mathbf{\sigma}_{\mathrm{sparse}}^{2})\right)=\sum_{k=1}^{K+1}\frac{\|\mathbf{W}_{k}-\mathbf{W}_{\mathrm{prior},k}\|_{F}^{2}}{2\sigma_{k}^{2}}\). Hence, the KL divergence term does not scale with the product of the relative perturbations (like \(\gamma_{k}\)) or the product of layer norms (like \(\zeta_{k}\)). **Comparison To Related Work.** Besides the relation to some of the works that have been mentioned previously, our contribution is most closely related to those approaches that employ different notions of reduced effective models in developing generalization bounds. Arora et al. (2018) do this via a _compression_ argument, though the resulting bound holds for the compressed network and not the original one. Neyshabur et al. (2017) develop PAC-Bayes bounds that clearly reflect the importance of _flatness_, which in our terms refers to the low effective sensitivity of the obtained predictor. 
Similar in spirit to our results, Nagarajan and Kolter (2019) capture a notion of reduced active size of the model in presenting their derandomized PAC-Bayes bound (which we centrally employ here). While avoiding exponential dependence on depth, their result depends inversely on the minimum absolute pre-activation level at each layer, which can be arbitrarily small (and thus, the bound becomes arbitrarily large). Our analysis, as represented by Lemma 1, circumvents this limitation. Our constructions on normalized sparse radius have close connections with the _normalized margins_ from Wei and Ma (2020), and our use of augmented loss functions (such as our _sparse loss_) resembles the ones proposed in Wei and Ma (2019). Most recently, Galanti et al. (2023) analyze the complexity of compositionally sparse networks; however, the sparsity stems from the convolutional nature of the filters rather than as a data-dependent (and sample dependent) property. ### Hyper-Parameter Search For any fixed predictor \(h\), there can be multiple choices of \(\mathbf{s},\mathbf{\xi},\mathbf{\eta}\) such that \(h\) is in the corresponding hypothesis class. In the following, we discuss strategies to search for suitable hyper-parameters that can provide tighter generalization bounds. To do so, one can instantiate a grid of candidate values for each hyper-parameter that is independent of data. Let the grid sizes be \((T_{\mathbf{s}},T_{\mathbf{\xi}},T_{\mathbf{\eta}},T_{\mathbf{\epsilon}})\), respectively. We then instantiate the generalization bound in Theorem 3 for each choice of hyper-parameters in the Cartesian product of grids with a reduced failure probability \(\delta_{\mathrm{red}}=\frac{\delta}{T_{\mathbf{s}}T_{\mathbf{\xi}}T_{\mathbf{\eta}}T_{\mathbf{\epsilon}}}\). By a simple union-bound argument, all these bounds hold simultaneously with probability \((1-\delta)\). In this way, for a fixed \(\delta\), the statistical cost above is \(\sqrt{\log(T_{\mathbf{s}}T_{\mathbf{\xi}}T_{\mathbf{\eta}}T_{\mathbf{\epsilon}})}\) as the failure probability dependence in Theorem 3 is \(\sqrt{\log\left(\frac{1}{\delta_{\mathrm{red}}}\right)}\). The computational cost of a naive search is \(\mathcal{O}(T_{\mathbf{s}}T_{\mathbf{\xi}}T_{\mathbf{\eta}}T_{\mathbf{\epsilon}})\). In particular, for multilayer networks, exhaustively searching for a sparsity vector requires a grid of size \(T_{\mathbf{s}}:=\prod_{k=1}^{K}d_{k}\), rendering the search infeasible. Nonetheless, we shall soon show that by employing a greedy algorithm one can still obtain tighter generalization bounds with significantly less computational cost. Moreover, these hyper-parameters are not independent, and so we briefly describe here how this optimization can be performed with manageable complexity. **Norm Hyper-Parameters (\(\mathbf{\xi},\mathbf{\eta}\)):** One can choose \((\mathbf{\xi},\mathbf{\eta})\) from a grid (fixed in advance) of candidate values, to closely match the true properties of the predictor. For networks with zero bias, w.l.o.g. one can normalize each layer weight \(\mathbf{W}_{k}\rightarrow\tilde{\mathbf{W}}_{k}:=\frac{1}{\|\mathbf{W}_{k}\|_{(d_{k}-1,s_{k-1})}}\mathbf{W}_{k}\) to ensure that \(\left\|\tilde{\mathbf{W}}_{k}\right\|_{(d_{k}-1,s_{k-1})}=1\) without changing the prediction11. The predicted labels, label function, sparse local radius, margin and the generalization bound in Theorem 3 are all invariant to such a scaling. For the normalized network we can simply let \(\xi_{k}:=1\) for all \(k\). 
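This normalization is cheap to carry out, since the maximum reduced row norm \(\left\|\cdot\right\|_{(d_{k}-1,s_{k-1})}\) only requires the largest-magnitude entries of each row; a minimal PyTorch sketch (our own illustration):

```python
import torch

def max_reduced_row_norm(W, s_in):
    """||W||_(d_out - 1, s_in): largest l2 norm of any row of W restricted
    to its (d_in - s_in) largest-magnitude entries."""
    d_in = W.shape[1]
    kept = torch.topk(W.abs(), d_in - s_in, dim=1).values  # per-row top entries
    return kept.norm(dim=1).max()

def normalize_layer(W, s_in):
    """Rescale W so that its (d_out - 1, s_in) sparse norm equals one."""
    return W / max_reduced_row_norm(W, s_in)

W = torch.randn(16, 10)
W_tilde = normalize_layer(W, s_in=4)
assert torch.isclose(max_reduced_row_norm(W_tilde, 4), torch.tensor(1.0))
```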
Fixing \(\mathbf{\xi}\) this way results in no statistical or computational cost (beyond normalization). For discretizing \(\mathbf{\eta}\), we can leverage the fact that for all \((s_{k},s_{k-1})\), the reduced babel function is always less than \(d_{k}-s_{k}-1\) - since the inner products are scaled by the square of the sparse norms. Thus, we can construct a grid in \([0,1]\) with \(T_{\eta}\) elements, which can be searched efficiently (see Appendix B for further details). Footnote 11: This is not true for networks with non-zero bias. In networks with bias, one can still employ a grid search like in Bartlett et al. (2017). **Sparsity Parameter \(\mathbf{s}\):** The sparsity vector \(\mathbf{s}\) determines the degree of structure at which we evaluate the generalization of a fixed predictor. For a fixed predictor and relative sensitivity vector \(\mathbf{\epsilon}\), a good choice of \(\mathbf{s}\) is one that has sufficiently large sparse local radii on the training sample, resulting in a small average sparse loss, \(\frac{1}{m}\sum_{\mathbf{x}^{(i)}\in\mathbf{S}_{T}}\ell_{\text{sparse}}(h,\mathbf{x}^{(i)})\). At the trivial choice of sparsity \(\mathbf{s}=\mathbf{0}\), for any choice of \(\mathbf{\epsilon}\), the above loss is exactly zero. In general, at a fixed \(\mathbf{\epsilon}\), this loss increases with larger (entrywise) \(\mathbf{s}\). At the same time, the empirical margin loss term \(\hat{R}_{4\zeta_{K+1}\gamma_{K+1}}(h)\) decreases with increasing \(\mathbf{s}\) (since \(\zeta_{K+1}\) grows). This reflects an inherent tradeoff in the choice of \((\mathbf{s},\mathbf{\epsilon})\) to balance the margin loss and the sparse loss (in addition to the KL-divergence). For any \(\mathbf{\epsilon}\) and a data point \(\mathbf{z}=(\mathbf{x},y)\), we employ a greedy algorithm to find a sparsity vector \(s^{*}(\mathbf{x},\mathbf{\epsilon})\) in a layer-wise fashion such that the loss incurred is zero, i.e. so that \(r_{k}(h,\mathbf{x})\geq 3\gamma_{k}\) for all \(k\). At each layer, we simply take the maximum sparsity level with sufficiently large radius. The computational cost of such an approach is \(\log_{2}\left(\prod_{k=1}^{K}d_{k}\right)\). One can thus collect the sparsity vectors \(s^{*}(\mathbf{x},\mathbf{\epsilon})\) across the training set and take their entrywise minimum, so that the average sparse loss vanishes. Of course, one does not necessarily need the sparse loss to vanish; one can instead choose \(\mathbf{s}\) simply to _control_ the sparse loss to a level of \(\frac{\alpha}{\sqrt{m}}\). We expand in Appendix B how this can be done. **Sensitivity Vector \(\mathbf{\epsilon}\):** Lastly, the relative sensitivity vector \(\mathbf{\epsilon}\) represents the size of the posterior and the desired level of sensitivity in layer outputs upon parameter perturbations. Since \(\epsilon_{k}\) denotes _relative perturbation_, we can simply let it be the same across all layers, i.e. \(\mathbf{\epsilon}=\epsilon\cdot[1,\ldots,1]\). In summary, as we expand in Appendix B, we can compute a best in-grid generalization bound in \(\mathcal{O}\left(T_{\mathbf{\epsilon}}\cdot\log_{2}\left(\prod_{k=1}^{K}d_{k}\right)\cdot\log_{2}(T_{\mathbf{\eta}})\cdot(\sum_{k=1}^{K}d_{k}d_{k-1})\right).\) ## 4 Numerical Experiments In this section we demonstrate the derived bounds on a series of feed-forward networks, of varying width and depth, on MNIST. 
As we now show, the resulting bounds are controlled and sometimes non-vacuous upon the optimization over a discrete grid for hyper-parameters, as explained above. **Experimental Setup:** We train feed-forward networks \(h\) with weights \(\{\mathbf{W}_{k}\}_{k=1}^{K+1}\) where \(\mathbf{W}_{k}\in\mathbb{R}^{d_{k}\times d_{k-1}}\) using the cross-entropy loss with stochastic gradient descent (SGD) for 5,000 steps with a batch size of 100 and a learning rate of 0.01. The MNIST training set is randomly split into train and validation data (55,000 : 5,000). The models are optimized on the training data and the resulting measures are computed on validation data. To evaluate scaling with the number of samples, \(m\), we train networks on randomly sampled subsets of the training data of increasing sizes from 20% to 100% of the training set. Because of the chosen architectures, all of these models are over-parametrized (i.e. having more parameters than training samples). Recall that the bound on generalization error in Theorem 3 depends on the KL divergence between a posterior centered at the trained predictor \(h\), \(\mathcal{N}(h,\mathbf{\sigma}_{\text{sparse}}^{2})\), and the prior \(P=\mathcal{N}(h_{\text{prior}},\mathbf{\sigma}_{\text{sparse}}^{2})\). Thus, each model is encouraged to be close to its initialization via a regularization term. In this way, we minimize the following regularized empirical risk based on the cross-entropy loss as well as a regularization term with penalty \(\lambda\) (set as \(\lambda=1.0\) for all experiments for simplicity), \[\min_{\left\{\mathbf{W}_{k}\right\}_{k=1}^{K+1}} \frac{1}{m}\sum_{i=1}^{m}\ell_{\text{cross}-\text{ent}}\Big{(}h,\left(\mathbf{x}_{i},y_{i}\right)\Big{)}+\frac{\lambda}{K+1}\sum_{k=1}^{K+1}\left\|\mathbf{W}_{k}-\mathbf{W}_{\text{prior},k}\right\|_{F}^{2}.\] **Choice Of Prior:** As with any PAC-Bayes bound, choosing a prior distribution with an appropriate inductive bias is important. For example, optimizing the choice of prior by instantiating multiple priors simultaneously was shown to be an effective procedure to obtain good generalization bounds (Langford and Caruana, 2001; Dziugaite and Roy, 2017). In this work, we evaluate our bounds for two choices of the prior: _a)_ a data-independent prior, \(P_{0}:=\mathcal{N}(h_{\mathbf{0}},\mathbf{\sigma}_{\text{sparse}}^{2})\) centered at a model with zero weights, \(h_{\mathbf{0}}\); and _b)_ a data-dependent prior \(P_{\text{data}}:=\mathcal{N}(h_{\text{init}},\mathbf{\sigma}_{\text{sparse}}^{2})\) centered at a model \(h_{\text{init}}\) obtained by training on a small fraction of the training data (\(5\%\) of all training data). Note that this choice is valid, as the base hyper-parameters \((\mathbf{s},\mathbf{\xi},\mathbf{\eta},\mathbf{\epsilon})\) are chosen independent of data, and the empirical risk terms in the bound are not evaluated on the small subset of data \(h_{\text{init}}\) is trained on. **Generalization Bounds Across Width:** We first train a 2-layer (1 hidden layer) fully connected neural network with increasing widths, from 100 to 1,000 neurons. Note that in all cases these models are over-parametrized. In Figures 1(a) to 1(c) we plot the true risk (orange curve) and the generalization bounds (blue curve) from Theorem 3 across different sizes of training data and for the two choices of priors mentioned above. We observe that our analysis, when coupled with the data-dependent prior \(P_{\text{data}}\), generates non-vacuous bounds for a network with width of 100. 
Even for the naive choice of the prior \(P_{0}\), the bound is controlled and close to 1. Furthermore, note that our bounds remain controlled for larger widths. In Appendix E, we include complementary results depicting our generalization bounds for 3-layer networks. Figure 1: Generalization error of a 2-layer model of different widths trained on MNIST. **Effective Activity Ratio:** Lastly, we intend to illustrate the degree of sparsity achieved in the obtained models that allows for the bounds presented in Figure 1. For each data point \(\mathbf{x}\) and relative perturbation level \(\epsilon\), we define the effective activity ratio \(\kappa(\mathbf{x},\epsilon):=\frac{\sum_{k}(d_{k}-s_{k})(d_{k-1}-s_{k-1})}{\sum_{k}d_{k}d_{k-1}}\) where \(\mathbf{s}=s^{*}(\mathbf{x},\epsilon)\), the greedy sparsity vector chosen such that the sparse loss in Theorem 3 is zero. In this way, \(\kappa(\mathbf{x},\epsilon)\) measures the reduced local dimensionality of the model at input \(\mathbf{x}\) under perturbations of relative size \(\epsilon\). When \(\kappa(\mathbf{x},\epsilon)=1\), there are no sparse activation patterns that are stable under perturbations, and the full model is considered at that point. On the other hand, when \(0<\kappa(\mathbf{x},\epsilon)\ll 1\), the size of stable sparse activation patterns \(s^{*}(\mathbf{x},\epsilon)_{k}\) at each layer is close to the layer dimension \(d_{k}\). Theorem 3 enables a theory of generalization that accounts for this local reduced dimensionality. We present the effective activity ratios for a trained 3-layer model in Figure 2, and include the corresponding results for the 2-layer model in Appendix E for completeness. The central observation from these results is that trained networks with larger width have _smaller_ effective activity ratios across the training data. In Figure 2(a) (as well as in the corresponding figure for the 2-layer model in Appendix E), the distribution of the effective activity ratio across the training data at \(\epsilon=10^{-4}\) shows that smaller width networks have less stable sparsity. In turn, Figure 2(b) (and its 2-layer counterpart) demonstrates that this effect is stronger for smaller relative perturbation levels. This observation is likely the central reason why our generalization bounds do not increase drastically with model size. ## 5 Conclusion This work makes explicit use of the degree of sparsity that is achieved by ReLU feed-forward networks, reflecting the level of structure present in data-driven models, but without making any strong distributional assumptions on the data. Sparse activations imply that only a subset of the network is active at a given point. By studying the stability of these local sub-networks, and employing tools of derandomized PAC-Bayes analysis, we are able to provide bounds that exploit this effective reduced dimensionality of the predictors, while avoiding exponential dependence on depth and on the global sensitivity of the function. Our empirical validation on MNIST illustrates our results, which are always controlled and sometimes result in non-vacuous bounds on the test error. Note that our strategy to instantiate our bound for practical models relied on a discretization of the space of hyper-parameters and a greedy selection of these values. This is likely suboptimal, and the grid of hyper-parameters could be further tuned for each model. Moreover, in light of the works in (Dziugaite and Roy, 2017, 2018; Zhou et al., 2019), we envision optimizing our bounds directly, leading to even tighter solutions. 
Figure 2: Effective activity ratio \(\kappa(\mathbf{x},\epsilon)\) based on greedy sparsity vector \(s^{*}(\mathbf{x},\epsilon)\) for 3-layer networks (smaller implies sparser stable activations). ## Acknowledgments We kindly thank Vaishnavh Nagarajan for helpful conversations that motivated the use of de-randomized PAC-Bayesian analysis. This work was supported by NSF grant CCF 2007649.
2305.07524
Joint MR sequence optimization beats pure neural network approaches for spin-echo MRI super-resolution
Current MRI super-resolution (SR) methods only use existing contrasts acquired from typical clinical sequences as input for the neural network (NN). In turbo spin echo sequences (TSE) the sequence parameters can have a strong influence on the actual resolution of the acquired image and consequently have a considerable impact on the performance of the NN. We propose a known-operator learning approach to perform an end-to-end optimization of MR sequence and neural network parameters for SR-TSE. This MR-physics-informed training procedure jointly optimizes the radiofrequency pulse train of a proton density- (PD-) and T2-weighted TSE and a subsequently applied convolutional neural network to predict the corresponding PDw and T2w super-resolution TSE images. The found radiofrequency pulse train designs generate an optimal signal for the NN to perform the SR task. Our method generalizes from the simulation-based optimization to in vivo measurements and the acquired physics-informed SR images show higher correlation with a time-consuming segmented high-resolution TSE sequence compared to a pure network training approach.
Hoai Nam Dang, Vladimir Golkov, Thomas Wimmer, Daniel Cremers, Andreas Maier, Moritz Zaiss
2023-05-12T14:40:25Z
http://arxiv.org/abs/2305.07524v1
Joint MR sequence optimization beats pure neural network approaches for spin-echo MRI super-resolution ###### Abstract Current MRI super-resolution (SR) methods only use existing contrasts acquired from typical clinical sequences as input for the neural network (NN). In turbo spin echo sequences (TSE) the sequence parameters can have a strong influence on the actual resolution of the acquired image and consequently have a considerable impact on the performance of the NN. We propose a known-operator learning approach to perform an end-to-end optimization of MR sequence and neural network parameters for SR-TSE. This MR-physics-informed training procedure jointly optimizes the radiofrequency pulse train of a proton density- (PD-) and T2-weighted TSE and a subsequently applied convolutional neural network to predict the corresponding PDw and T2w super-resolution TSE images. The found radiofrequency pulse train designs generate an optimal signal for the NN to perform the SR task. Our method generalizes from the simulation-based optimization to in vivo measurements and the acquired physics-informed SR images show higher correlation with a time-consuming segmented high-resolution TSE sequence compared to a pure network training approach. Keywords: super-resolution, turbo spin echo, joint optimization ## 1 Introduction Magnetic resonance imaging plays an essential role in clinical diagnosis by acquiring the structural information of biological tissue. Spatial resolution is a crucial aspect in MRI for the precise evaluation of the acquired images. However, there is an inherent trade-off between the spatial resolution of the images and the time required for acquiring them [1]. In order to obtain high-resolution (HR) MR images, patients are required to remain still in the MR scanner for a long time, which leads to patients' discomfort and inevitably introduces motion artifacts that again compromise image quality and actual resolution [2]. Since super-resolution (SR) can improve the image quality without changing the MRI hardware, this post-processing tool has been widely used to overcome the challenge of obtaining HR MRI scans [3]. Using model-based methods like interpolation algorithms [4] and iterative deblurring algorithms [5] or learning-based methods such as dictionary learning [6], SR has achieved the restoration of fine structures and contours. In recent years, deep learning has become a mainstream approach for super-resolution imaging, and a number of neural network-based SR models have been proposed [7]. Among the proposed model- or learning-based methods, convolutional neural networks (CNN) produce superior SR results with better clarity and fewer artifacts [8]. Super-resolution has only recently been applied to MRI data [9-12]. In [10] a CNN is proposed for cardiac MRI to estimate an end-to-end non-linear mapping between the upscaled low-resolution (LR) images and corresponding HR images to rebuild a HR 3D volume. In other work, motion compensation for the fetal brain was achieved by a CNN architecture [11] to solve the 3D reconstruction problem. SR MRI has also been applied to low-field MR brain imaging [12]. However, these existing methods only used single-contrast MRI images and did not make full use of multi-contrast information. In the clinical routine, T1, T2 and PD weighted images are often acquired together for diagnosis with complementary information. 
Although each weighted image highlights only certain types of tissues, they reflect the same anatomy, and can provide synergy when used in combination [8]. Fast imaging techniques like Turbo-Spin-Echo (TSE) [13] sequences can also be utilized to sample more data in a given timeframe, thus allowing a higher resolution. However, due to the long echo-train duration the T2-decay is significant during the signal acquisition. This process acts as a voxel-T2-dependent k-space filter that lowers the actual resolution w.r.t. the nominal resolution due to a broadening of the point-spread-function (PSF) [14]. However, by adjusting the refocusing radiofrequency (RF) pulses, the signal decay can be reduced during the TSE echo-train [15]. The RF pulse train strongly influences the signal dynamics in a highly complex fashion, as each RF pulse affects all future signal contributions. Current MRI super-resolution methods use contrasts acquired from typical clinical protocols as input for the neural network and disregard the influence of the MR sequence parameters for optimization. Using so-called known operator learning [16], we propose an approach that utilizes an MR physics model during the optimization to not only train a neural network for super-resolution, but also adapt the refocusing RF pulses to directly influence the PSF. This approach also allows the use of the uncorrupted theoretical contrast as ground truth, which is only available during the simulation. By using two different encoding schemes in our sequences, we gain additional information from the two different contrasts PD and T2 that are used as input for the CNN; both will have different PSFs and thus provide valuable information for the SR task. Both sequences are optimized jointly to allow generation of optimal contrasts for the SR task of the neural network. The main contribution and the novelty of our work is the end-to-end optimization of MR sequence and neural network parameters for super-resolution TSE. For this purpose, we use a fully differentiable Bloch simulation embedded in the forward propagation to jointly optimize the RF pulse train of proton density (PD) and T2 weighted TSE sequences and a subsequently applied convolutional neural network to predict the corresponding PDw and T2w super-resolution TSE images. The ground truth targets are directly generated by the simulation and represent the uncorrupted MR contrast. Our jointly optimized approach is compared to a network trained on a TSE with a 180\({}^{\circ}\) RF pulse train. The optimized sequences and networks are verified at the real scanner system by performing in vivo measurements of a healthy subject and compared to a highly segmented, high-resolution vendor-provided sequence. ## 2 Theoretical Background ### Image Degradation in TSE-Sequences. 
When there is no relaxation decay during the echo-train, the k-space signal obtained from a TSE pulse sequence \(S(k_{x},k_{y})\) yields the true spatial distribution of the theoretical transverse magnetization \(M_{\perp}(x,y)\) via the Fourier transform (FT): \[M_{\perp}(x,y)=\int\int_{k_{x},k_{y}}S\big{(}k_{x},k_{y}\big{)}\cdot e^{i\big{(}k_{x}x+k_{y}y\big{)}}dk_{x}dk_{y} \tag{1}\] When considering the signal relaxation behavior during acquisition, an additional filtering function in k-space, the Modulation Transfer Function (MTF) for each tissue type (tt), has to be applied: \[\widetilde{M}_{\perp}(x,y) =\sum\nolimits_{tt}\widetilde{M}_{\perp,tt}(x,y)\] \[=\sum\nolimits_{tt}\int_{k_{x},k_{y}}S_{tt}\big{(}k_{x},k_{y}\big{)}\cdot MTF_{tt}\big{(}k_{x},k_{y}\big{)}\cdot e^{i\big{(}k_{x}x+k_{y}y\big{)}}dk_{x}dk_{y}\] \[=\sum\nolimits_{tt}\int_{k_{x},k_{y}}S_{tt}\big{(}k_{x},k_{y}\big{)}\cdot e^{i\big{(}k_{x}x+k_{y}y\big{)}}dk_{x}dk_{y}*\int_{k_{x},k_{y}}MTF_{tt}\big{(}k_{x},k_{y}\big{)}\cdot e^{i\big{(}k_{x}x+k_{y}y\big{)}}dk_{x}dk_{y}\] \[=\sum\nolimits_{tt}M_{\perp,tt}(x,y)*B_{tt}(x,y),\] where \(*\) denotes a convolution and \(B(x,y)\) is a blur kernel equal to the Fourier-transformed MTF. For a single-shot 180\({}^{\circ}\) TSE sequence with constant RF pulses the filtering function in k-space can be described as: \[MTF_{tt}\big{(}k_{x},k_{y}\big{)}=e^{-\frac{t\left(k_{x},k_{y}\right)}{T2_{tt}}}\,, \tag{2}\] where \(t(k_{x},k_{y})\) is the time at which the corresponding k-space point is acquired; the MTF is unique for each tissue type with a different T2 value. Using variable RF pulses, the MTF can become more homogeneous across the k-space and therefore reduce the width of the PSF. ## 3 Methods ### Sequences A single-shot 2D TSE sequence is used as the default sequence for our optimization. The acquisition time for the single-shot 2D TSE is 0.76 s at 1.56 mm in-plane resolution. Single slice acquisition was used for all sequences. The refocusing RF pulses of the PDw TSE with TE=12 ms and T2w TSE with TE=96 ms were optimized jointly. The PDw TSE sequence uses a centric phase-encoding reordering. For T2w imaging the centric phase-encoding reordering is shifted to have the central k-space line encoded at the given echo time TE, which for TE=96 ms is at the 8th echo. Other parameters were as follows: acquisition matrix of 128\(\times\)128, undersampling factor in phase: 2x, reconstructed with GRAPPA [17], FOV=200 mm\(\times\)200 mm, slice thickness of 8 mm and bandwidth of 133 Hz/pixel. For all sequences, the 90\({}^{\circ}\) excitation pulse was kept fixed. ### Simulation & Optimization All simulations and optimizations were performed in a fully differentiable Bloch simulation framework [18]. The framework generates MR sequences and the corresponding reconstruction automatically based on the target contrast of interest. The optimization is carried out in an MR scanner simulation environment mirroring the acquisition of a real MR scanner. The forward simulation consists of a chain of tensor-tensor multiplication operations, representing the Bloch equations, that are differentiable in all parameters and support an analytic derivative-driven nonlinear optimization. The entire process - MRI sequence, reconstruction, and evaluation - is modelled as one computational chain and is part of the forward and backward propagation during the optimization, as depicted in Figure 1. 
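To make the structure of this computational chain explicit, here is a schematic, self-contained PyTorch sketch (our own illustration: `scan` and `reco` are toy stand-ins for the differentiable EPG simulation and the fixed GRAPPA reconstruction, and the two learning rates mirror the values reported below):

```python
import torch

def scan(tissue_maps, flip_angles):      # toy stand-in for the Bloch/EPG model
    return tissue_maps * torch.sigmoid(flip_angles).mean()

def reco(signal):                        # fixed, differentiable reconstruction stub
    return signal

nn_model = torch.nn.Conv2d(1, 1, 3, padding=1)                # stand-in for the SR CNN
flip_angles = (50.0 + 0.5 * torch.randn(7)).requires_grad_()  # refocusing RF train
opt = torch.optim.Adam([{"params": nn_model.parameters(), "lr": 1e-3},
                        {"params": [flip_angles], "lr": 1e-2}])

tissue_maps = torch.rand(1, 1, 16, 16)   # toy tissue-parameter slice
target = torch.rand(1, 1, 16, 16)        # uncorrupted target contrast (simulated)
for _ in range(3):                       # end-to-end gradient steps
    loss = (nn_model(reco(scan(tissue_maps, flip_angles))) - target).abs().mean()
    opt.zero_grad()
    loss.backward()                      # gradients reach both the RF train and the CNN
    opt.step()
```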
The optimization problem is described by: \[\Psi^{*},\Theta^{*}=\operatorname*{argmin}_{\Psi,\Theta}\left(\sum_{i}\left\|M_{\perp,i}-\operatorname{NN}_{\Theta}\left(RECO\left(\operatorname{SCAN}_{\Psi}(P_{i})\right)\right)\right\|_{p}\right), \tag{3}\] where \(\Psi\) are the optimized sequence parameters and \(\Theta\) the neural network parameters. For given tissue maps \(P_{i}\) for each voxel \(i\), the Bloch simulation \(SCAN\) outputs the MR signal, which is reconstructed by the algorithm \(RECO\). Signal simulations were performed by a fully differentiable extension of the Extended Phase Graph (EPG) [19] formalism. The simulation is done with the PyTorch [20] complex-valued datatype and outputs a complex-valued signal. The forward simulation outputs the TSE signal, which is conventionally reconstructed to magnitude images, and in addition the corresponding contrast as ground truth target as given in Eq. (1). For the SR network a CNN (DenseNet [21]) was adapted, which receives the magnitude TSE images of the PDw and T2w TSE as input. To prevent scaling discrepancy, the TSE images are normalized to have a maximum value of 1 before applying the CNN, both in simulation and in vivo. The DenseNet consists of 4 Dense blocks (Convolution -> BatchNorm -> PReLU -> Concat) followed by an UpsampleBlock (bicubic upsampling -> Convolution -> BatchNorm -> PReLU) and a final CNN layer. Each convolution had a 3\(\times\)3 kernel size, except for the first layer with a kernel size of 7\(\times\)7. In total, the model had 174,706 trainable parameters. Of the sequence parameters \(\Psi\), the amplitudes of the refocusing RF pulses of the TSE sequences were optimized, jointly with the NN parameters \(\Theta\), in an end-to-end training procedure using the Adam optimizer [22] in PyTorch. We follow a known-operator approach [16], where the conventional reconstruction, including parallel imaging by means of GRAPPA, is fixed but fully differentiable. The simulation is fully differentiable and all parameters except the refocusing RF pulses are fixed. The gradient update propagates back through the whole chain of differentiable operators. The complete RF pulse train and the CNN are updated at each iteration step. The RF pulses are initialized with random values around 50\({}^{\circ}\) with a standard deviation of 0.5\({}^{\circ}\). The training data consisted of synthetic brain samples based on the BrainWeb [23] database. The fuzzy model segments were filled with in vivo-like tissue parameters: proton density (PD) values were taken from [24], T1 and T2 from [25], T2' was calculated from T2 and T2* values [26] and the diffusion coefficient D was taken from [27]. B0 and B1 were assumed to be homogeneous. In total, 19 subject volumes each consisting of 70 slices were used as training data and one separate subject volume as test dataset. The simulation uses coil sensitivity maps acquired at the MR system and calculated using ESPIRiT [28]. The optimizations were performed on an Intel Xeon E5-2650L with 256GB RAM. A full optimization on CPU took 4 days with a memory consumption of 230GB RAM. The learning rates for the model parameters and sequence parameters were lr_model=0.001 and lr_rf=0.01, respectively. Other hyperparameters of the optimization were: batch size = 1, n_epoch=10, damping factors of Adam (0.9, 0.999). ### Data acquisition at a real MR system After the optimization process, all sequences were exported using the Pulseq standard [29] and the pypulseq tool [30]. 
Pulseq files could then be interpreted on a real MRI scanner, including all necessary safety checks, and were executed on a PRISMA 3T scanner (Siemens Healthineers, Erlangen, Germany) using a 20-channel head coil. Raw data were automatically sent back to the terminal and the same reconstruction pipeline used for the simulated data was applied to the measured images. Figure 1: Overview of the proposed processing pipeline: The MR signal of PDw and T2w TSE is simulated for a given RF pulse train; GRAPPA reconstruction and SR CNN are applied subsequently. The output is compared to the actual theoretical uncorrupted HR contrasts at TE\({}_{\text{eff}}\) and a gradient descent step is performed to update the refocusing FA and NN parameters simultaneously. As high-resolution reference, a vendor-provided TSE sequence was acquired with the following parameters: 32-shot segmented, GRAPPA2, TE=12/96 ms, TR=12 s, FOV=200 mm\(\times\)200 mm, matrix of 256\(\times\)256, FA=180\({}^{\circ}\). All MRI scans were under approval of the local ethics board and were performed after written informed consent was obtained. Measurements were performed on a healthy volunteer. ### Reconstruction and Evaluation Signals of the TSE sequences were reordered and reconstructed with GRAPPA. The optimization was based solely on magnitude images. The structural similarity index measure (SSIM) [31] and the peak signal-to-noise ratio (PSNR) were calculated for the evaluation of simulation and in vivo measurements w.r.t. the simulated ground truth and the HR segmented in vivo measurement, respectively. The evaluation was performed in Matlab [32] with the built-in functions for SSIM and PSNR. ## 4 Results ### Qualitative Visual Results The original LR TSE image with the zero-filled image and the reconstructed SR image are compared to our optimized RF pulse train design and a conventional 180\({}^{\circ}\) RF pulse train TSE sequence for each contrast in Figure 2. The optimization process can be seen in Supporting Figure S1 and Supporting Animation S3. Starting from the initialized values, the RF pulses converge to the optimal RF pulse train, while the NN parameters are optimized simultaneously. The converged RF pulse state was found to be independent of the initialization. The final optimized RF pulse design for the PDw and T2w TSE sequences is shown in Figure 2a. It can be observed that in all cases the SR image leads to an improvement over the LR TSE image by showing more clearly resolved borders between white and gray matter. The optimized RF pulse train further improves the nominal resolution, which can be observed by a clear increase of sharpness of the sulcus between Gyrus cinguli and Gyrus frontalis superior, as indicated by the red arrows. The optimized sequence and CNN translate well to in vivo measurements, where similar improvements as seen in the simulated images can be observed (Figure 2d,e). ### Quantitative Metrics Results Table 1 and Table 2 report the quantitative metric scores of PSNR and SSIM for the images shown in Figure 2. The quantitative metrics agree with our visual observations and show that our end-to-end optimization approach performs better than SR based on existing conventional 180\({}^{\circ}\) TSE sequence data only. Compared to the acquisition time of the segmented reference sequence of 192.85 s, our optimized single-shot sequence only requires an acquisition time of 0.76 s. Thus, the SR performance can potentially be further increased by a multi-shot scan sacrificing a little more time. 
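As a reference for the reported metrics, PSNR can be computed directly from the mean squared error; a minimal Python sketch (the paper used Matlab's built-in functions, so this is purely illustrative):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB for images normalized to [0, peak]."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.rand(128, 128)                  # toy "HR reference" image
recon = ref + 0.05 * np.random.randn(128, 128)  # toy "reconstruction"
print(f"PSNR: {psnr(ref, recon):.1f} dB")
```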
## 4 Results

### Qualitative Visual Results

The original LR TSE image with the zero-filled image and the reconstructed SR image are compared to our optimized RF pulse train design and a conventional 180\({}^{\circ}\) RF pulse train TSE sequence for each contrast in Figure 2. The optimization process can be seen in Supporting Figure S1 and Supporting Animation S3. Starting from the initialized values, the RF pulses converge to the optimal RF pulse train, while the NN parameters are optimized simultaneously. The converged RF pulse state has been found to be independent of the initialization. The final optimized RF pulse design for the PDw and T2w TSE sequences is shown in Figure 2a. It can be observed that in all cases the SR image improves on the LR TSE image, showing more clearly resolved borders between white and gray matter. The optimized RF pulse train further improves the nominal resolution, which can be observed by a clear increase in sharpness of the sulcus between Gyrus cinguli and Gyrus frontalis superior, as indicated by the red arrows. The optimized sequence and CNN translate well to in vivo measurements, where improvements similar to those seen in the simulated images can be observed (Figure 2d,e).

### Quantitative Metrics Results

Table 1 and Table 2 report the quantitative metric scores of PSNR and SSIM for the images shown in Figure 2. The quantitative metrics agree with our visual observations and show that our end-to-end optimization approach performs better than SR based on conventional 180\({}^{\circ}\) TSE sequence data only. Compared to the acquisition time of the segmented reference sequence of 192.85 s, our optimized single-shot sequence only requires an acquisition time of 0.76 s. Thus, the SR performance can potentially be further increased by a multi-shot scan at the cost of slightly more scan time. To find the best procedure, multiple ablation studies were performed (Supporting Information).

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & PSNR - PDw & SSIM - PDw & PSNR - T2w & SSIM – T2w \\ \hline LR – 180\({}^{\circ}\) & 10.4 & 0.39 & 11.6 & 0.40 \\ \hline ZF – 180\({}^{\circ}\) & 14.7 & 0.73 & 22.2 & 0.79 \\ \hline HR NN – 180\({}^{\circ}\) & 28.7 & 0.93 & 29.4 & 0.93 \\ \hline LR – opt. FA & 10.8 & 0.38 & 11.2 & 0.39 \\ \hline ZF – opt. FA & 17.5 & 0.75 & 23.6 & 0.80 \\ \hline \end{tabular} \end{table}

Table 1: PSNR and SSIM for simulation in Figure 2 using the high-resolution GT as reference.

Figure 2: (a) Optimized RF pulse train and phase encoding for both contrasts (due to centric reordering, the central k-space line k_y=0 is acquired at repetition 0 and 7 for the PDw and T2w TSE, respectively). (b) Simulation of a static 180\({}^{\circ}\) RF pulse train and (c) optimized RF pulse train compared to the uncorrupted ground truth. (d) In vivo measurements of a static 180\({}^{\circ}\) RF pulse train and (e) optimized RF pulse train compared to the vendor’s TSE sequence, which is shown as a high-resolution reference. In both cases, the improvement of the optimized RF pulses over the constant 180\({}^{\circ}\) RF pulse train can be observed by a better resolved border between white and gray matter, as indicated by the red arrows. SSIM and PSNR values are shown in Tables 1 and 2.

## 5 Discussion

We demonstrated a new end-to-end learning process for TSE super-resolution by jointly optimizing refocusing RF pulse trains and neural network parameters. This approach utilizes a differentiable MR physics simulation embedded in the forward and backward propagation. The joint optimization outperforms pure neural network training. Although our approach is based solely on simulated data, the optimized sequence and trained CNN translate well to in vivo data. By using simulation-based training data, we are able to use the theoretical uncorrupted contrast as ground truth target. Apart from the expensive acquisition of HR in vivo data, real measured target data also have inherent drawbacks compared to their LR counterparts. Due to the longer scan time, motion artifacts become more significant, and to acquire the same contrast the bandwidth has to be increased, leading to a decrease in SNR [33]. However, we also acknowledge the limitation of a simulation-based optimization, as the performance is bound to the accuracy of the model behind the simulation. We observe in our results that our approach is not able to resolve small vessel structures, as these do not exist in our synthetic brain database. Fortunately, the NN does not hallucinate details when encountering these structures. Using real measured data to fine-tune the trained network could be a possible solution to this problem. Another way could be to include uncertainty quantification layers [34] in the CNN to handle unknown structures. Our approach is compatible with any network architecture, e.g. [35-37], to further improve the SR task. Furthermore, the training objective can also be extended with requirements on the MR sequence by including constraints in the loss function, e.g., reduced RF pulse amplitudes to decrease the energy deposition (SAR) or to increase SNR. To conclude, we propose an end-to-end optimization of MR sequence and neural network parameters for TSE super-resolution.
This flexible and general end-to-end approach benefits from an MR physics-informed training procedure, allowing a simple target-based problem formulation, and outperforms pure neural network training.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & PSNR - PDw & SSIM - PDw & PSNR - T2w & SSIM – T2w \\ \hline LR – 180\({}^{\circ}\) & 12.6 & 0.35 & 16.2 & 0.38 \\ \hline ZF – 180\({}^{\circ}\) & 14.1 & 0.59 & 22.5 & 0.70 \\ \hline HR NN – 180\({}^{\circ}\) & 23.2 & 0.84 & 26.2 & 0.89 \\ \hline LR – opt. FA & 13.2 & 0.37 & 15.9 & 0.38 \\ \hline ZF – opt. FA & 15.5 & 0.65 & 22.8 & 0.70 \\ \hline HR NN – opt. FA & **26.0** & **0.87** & **27.0** & **0.90** \\ \hline \end{tabular} \end{table}

Table 2: PSNR and SSIM for in vivo measurements in Figure 2 using the segmented high-resolution TSE as reference.
2306.07074
Using a neural network approach to accelerate disequilibrium chemistry calculations in exoplanet atmospheres
In this era of exoplanet characterisation with JWST, the need for a fast implementation of classical forward models to understand the chemical and physical processes in exoplanet atmospheres is more important than ever. Notably, the time-dependent ordinary differential equations to be solved by chemical kinetics codes are very time-consuming to compute. In this study, we focus on the implementation of neural networks to replace mathematical frameworks in one-dimensional chemical kinetics codes. Using the gravity profile, temperature-pressure profiles, initial mixing ratios, and stellar flux of a sample of hot-Jupiter atmospheres as free parameters, the neural network is built to predict the mixing ratio outputs in steady state. The architecture of the network is composed of individual autoencoders for each input variable to reduce the input dimensionality, which is then used as the input training data for an LSTM-like neural network. Results show that the autoencoders for the mixing ratios, stellar spectra, and pressure profiles are exceedingly successful in encoding and decoding the data. Our results show that in 90% of the cases, the fully trained model is able to predict the evolved mixing ratios of the species in the hot-Jupiter atmosphere simulations. The fully trained model is ~1000 times faster than the simulations done with the forward chemical kinetics model while making accurate predictions.
Julius L. A. M. Hendrix, Amy J. Louca, Yamila Miguel
2023-06-12T12:39:21Z
http://arxiv.org/abs/2306.07074v1
Using a neural network approach to accelerate disequilibrium chemistry calculations in exoplanet atmospheres

###### Abstract

In this era of exoplanet characterisation with JWST, the need for a fast implementation of classical forward models to understand the chemical and physical processes in exoplanet atmospheres is more important than ever. Notably, the time-dependent ordinary differential equations to be solved by chemical kinetics codes are very time-consuming to compute. In this study, we focus on the implementation of neural networks to replace mathematical frameworks in one-dimensional chemical kinetics codes. Using the gravity profile, temperature-pressure profiles, initial mixing ratios, and stellar flux of a sample of hot-Jupiter atmospheres as free parameters, the neural network is built to predict the mixing ratio outputs in steady state. The architecture of the network is composed of individual autoencoders for each input variable to reduce the input dimensionality, which is then used as the input training data for an LSTM-like neural network. Results show that the autoencoders for the mixing ratios, stellar spectra, and pressure profiles are exceedingly successful in encoding and decoding the data. Our results show that in 90% of the cases, the fully trained model is able to predict the evolved mixing ratios of the species in the hot-Jupiter atmosphere simulations. The fully trained model is \(\sim 10^{3}\) times faster than the simulations done with the forward chemical kinetics model while making accurate predictions.

keywords: planets and satellites: gaseous planets - planets and satellites: atmospheres - exoplanets

## 1 Introduction

There are two methods commonly used for calculating the abundance of different species in an atmosphere: thermochemical equilibrium and chemical kinetics (Bahn & Zukoski, 1960; Zeleznik & Gordon, 1968). Thermochemical equilibrium calculations treat each species independently and do not require an extensive list of reactions between different species. Consequently, this method is fast for estimating the abundance of different species in an exoplanet atmosphere and has been widely used in the community (e.g., Stock et al., 2018; Woitke et al., 2018). However, the atmospheres of exoplanets are dynamic environments. Both physical and chemical processes can alter the compositions and thermal structures of the atmosphere. In particular, atmospheric processes like photochemistry, mixing, and condensation of different species can affect atmospheric abundances, causing the observed concentrations to deviate from those found by chemical equilibrium calculations (Cooper & Showman, 2006; Swain et al., 2008; Moses et al., 2011; Kawashima & Min, 2021; Roudier et al., 2021; Baxter et al., 2021). For example, the recent detection of SO\({}_{2}\) (Feinstein et al., 2022; Ahrer et al., 2022; Alderson et al., 2022; Rustamkulov et al., 2022) and the determination of this species as direct evidence of photochemical processes shaping the atmosphere of WASP-39b (Tsai et al., 2022) suggest that certain exoplanet atmospheres are in disequilibrium, and we need chemical disequilibrium models using chemical kinetics to correctly interpret the observations. Chemical kinetics codes consider the effects that lead to a non-equilibrium state in the atmosphere.
These codes incorporate a wide range of atmospheric processes, such as the radiation from the host star that can drive the dissociation of molecules (photochemistry), the mixing of species at different pressures due to the planet's winds, or the diffusion of species, and calculate the one-dimensional abundances of species in exoplanetary atmospheres (e.g. Moses et al., 2011; Venot et al., 2012; Miguel & Kaltenegger, 2014; Tsai et al., 2017; Hobbs et al., 2019). However, to calculate the abundance of different species using chemical kinetics, a system of coupled differential equations involving all the species must be solved, and prior knowledge of reaction rates and a reaction list is necessary to estimate the production and loss of each species. Therefore, as more species and reactions are incorporated into the chemical networks, the complexity of these simulations increases, and so does their computational cost. The result is that chemical kinetics codes have long computational times and cannot be used within more detailed calculations (e.g. circulation models) or as a fast way of interpreting observations (by retrieval codes), which are therefore usually subject to simplifications. For the past few decades, the use of machine learning techniques, specifically neural networks (NNs), has become more prevalent in research fields outside of computer science. Within astronomy, neural networks have been used for applications like image processing (Dattilo et al., 2019), adaptive optics (Landman et al., 2021), exoplanet detection (Shallue and Vanderburg, 2018), exoplanetary atmospheric retrieval (Cobb et al., 2019) and chemical modelling (Holdship et al., 2021), and more traditional machine learning techniques have been used for applications like exoplanetary atmospheric retrieval (Nixon and Madhusudhan, 2020) and chemistry modelling of protoplanetary disks (Smirnov-Pinchukov et al., 2022). Trained neural networks are fast to use, so a neural network trained to accurately reproduce the outcomes of chemical kinetics codes could greatly reduce computational time. Such a neural network could simulate a large number of atmospheric conditions in a short period of time, which is, for example, useful for atmospheric retrievals from observational constraints. It could also be incorporated into a multi-dimensional atmospheric simulation that connects a multitude of individual one-dimensional simulations by the implementation of atmospheric mixing and other global processes. In this study, we investigate the feasibility of machine learning techniques for speeding up a one-dimensional chemical kinetics code. To this end, we perform calculations on a fiducial giant planet as an example to show how this technique can be used to bring together the best of these two worlds: the detailed information of chemical kinetics calculations and the speed of neural network techniques. In the next section, we explain in more detail how we obtain the dataset and the specifics of the architectures used. The results of our networks are presented in the following section (section 3) and discussed afterwards in section 4. Finally, we summarise and conclude our findings in section 5.
## 2 Methods

### Chemical Kinetics

Chemical kinetics is the most realistic way of calculating abundances and is necessary particularly at low temperatures (T < 2000 K) and pressures (P < 10 - 100 bars), where the timescales of processes such as atmospheric mixing are shorter than the chemical equilibrium timescale and dominate the chemistry and abundances in the atmosphere. We make use of the one-dimensional chemical kinetics code VULCAN (Tsai et al., 2017, 2021) to create a large dataset on the atmospheres of gaseous exoplanets. The code is validated for hot-Jupiter atmospheres from 500 K to 2500 K. VULCAN calculates a set of mass differential equations:

\[\frac{\partial n_{i}}{\partial t}=\mathcal{P}_{i}-\mathcal{L}_{i}-\frac{\partial\Phi_{i}}{\partial z}, \tag{1}\]

where \(n_{i}\) is the number density of the species \(i\), \(t\) is the time, \(\mathcal{P}_{i}\) and \(\mathcal{L}_{i}\) are the production and loss rates of the \(i\)-th species, and \(\Phi_{i}\) is its transport flux, which includes the effects of dynamics caused by convection and turbulence in the atmosphere. For a more complete derivation of this equation from the general diffusion equation, we refer the reader to Hu et al. (2012). VULCAN starts from initial atmospheric abundances calculated using the chemical equilibrium chemistry code _FastChem_ (Stock et al., 2018), although we note that the final disequilibrium abundances are not affected by the choice of initial values adopted (Tsai et al., 2017), and further evolves these abundances by solving a set of Eulerian continuity equations that includes various physical processes (e.g. vertical mixing and photochemistry). To solve these partial differential equations, VULCAN numerically transforms them into a set of stiff ordinary differential equations (ODEs). These ODEs are solved using the _Rosenbrock_ method, which is described in detail in the appendix of Tsai et al. (2017). In this study, we make use of machine learning techniques to solve these equations and hence speed up the process.

### Building the dataset

#### Parameter Space

To construct the dataset, we vary the following parameters:

1. **Planet mass, M** [M\({}_{\rm J}\)]: within the range [0.5, 20] M\({}_{\rm J}\).
2. **Orbit radius, r** [AU]: within the range [0.01, 0.5] AU.
3. **Stellar radius, R\({}_{\star}\)** [R\({}_{\odot}\)]: within the range [1, 1.5] R\({}_{\odot}\).

Other parameters such as surface gravity, irradiation temperature, and stellar effective temperature are derived from these free parameters:

1. **Planet radius** [R\({}_{\rm Jup}\)]: This is derived from the planet mass using the relation from Chen and Kipping (2017), shown in Equation 2, where \(R\) is the planet radius and \(M\) is the planet mass: \[\frac{R}{R_{\oplus}}=17.78\left(\frac{M}{M_{\oplus}}\right)^{-0.044}. \tag{2}\] We note that our aim is to present the results for a simple general case, and the mass-radius relation we use is suitable for this purpose. However, we must emphasize that the relation between mass and radius for giant exoplanets is not unique and depends on various factors, such as the mass of metals, core mass, irradiation received by the planet, and their effect on the inflation of the radius. All of these factors can impact the evolution path of giant planets and their final radius, leading to a dispersion in the mass-radius relation.
2. **Temperature-pressure profile**: As our aim is to demonstrate the use of neural networks for calculating non-equilibrium chemical abundances in a general case, we have utilized an analytical, non-inverted temperature-pressure profile from Heng et al. (2014). While these analytical profiles are simplistic, they are widely used in the literature to explore general cases and are suitable for our purposes. However, for calculating the chemistry of a real planet, more detailed calculations that take into account the opacities of different species and their abundances in the atmosphere should be included. The assumptions for this calculation are \(T_{int}=120\) K, \(\kappa_{L}=0.1\), \(\kappa_{S}=0.02\), \(\beta_{S}=1\) and \(\beta_{L}=1\), based on the default values included in the VULCAN code (Tsai et al., 2017). The pressure profile is constructed within the range [\(10^{-2}\), \(10^{9}\)] dyne cm\({}^{-2}\). This calculation is an important step, as it determines whether the set of parameters is valid for the dataset. If any part of the temperature profile falls outside of the range [500, 2500] K, the temperature range for which VULCAN is validated, the example is rejected from the dataset.
3. **Stellar flux**: The stellar spectra used for the dataset have two sources: the Measurements of the Ultraviolet Spectral Characteristics of Low-mass Exoplanetary Systems (MUSCLES) collaboration (France et al., 2016; Youngblood et al., 2016; Loyd et al., 2016) and the PHOENIX Stellar and Planetary Atmosphere Code (Baron et al., 2010). The MUSCLES database contains observations of M- and K-dwarf exoplanet host stars in the optical, UV, and X-ray regimes, and is used for stars with an effective temperature lower than 6000 K. For effective temperatures of 6000 K and above, stellar spectra are generated by the PHOENIX model. Flux values below \(10^{-14}\) erg nm\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\) are cut off.

The remaining parameters of the VULCAN configuration files are kept constant throughout the dataset. Eddy and molecular diffusion are both taken into account as fixed parameters. For the eddy diffusion constant, we make use of a constant of \(K_{zz}=10^{10}\) cm\({}^{2}\)/s. The molecular diffusion constant is taken for a hydrogen-dominated gas as described in Banks and Kockarts (1973). As the standard chemical network, we make use of VULCAN's reduced default N-C-H-O network that includes photochemistry. We assume experimental abundances for the hot-Jupiters, and we make use of 150 pressure levels for the height layers. The output of VULCAN is saved every 10 steps. In total, 13291 valid configurations are generated within the parameter space.
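The sampling and rejection logic above can be sketched in a few lines. Here `heng_tp_profile` is a hypothetical stand-in for the analytical Heng et al. (2014) temperature-pressure calculation, and the Jupiter-to-Earth mass conversion is a standard constant; only Equation (2), the parameter ranges, and the [500, 2500] K rejection criterion are taken from the text.

```python
import numpy as np

M_EARTH_PER_M_JUP = 317.8  # Jupiter mass expressed in Earth masses

def sample_configuration(rng, heng_tp_profile):
    """Draw one candidate configuration and apply the rejection criterion."""
    mass = rng.uniform(0.5, 20.0)       # planet mass [M_Jup]
    orbit = rng.uniform(0.01, 0.5)      # orbital radius [AU]
    r_star = rng.uniform(1.0, 1.5)      # stellar radius [R_Sun]
    # Equation (2), with mass and radius in Earth units:
    radius = 17.78 * (mass * M_EARTH_PER_M_JUP) ** (-0.044)  # [R_Earth]
    pressure = np.logspace(-2, 9, 150)  # [dyne cm^-2], 150 height layers
    temperature = heng_tp_profile(mass, orbit, r_star, pressure)
    # Reject examples outside the validated temperature range of VULCAN.
    if temperature.min() < 500.0 or temperature.max() > 2500.0:
        return None
    return {"mass": mass, "orbit": orbit, "r_star": r_star,
            "radius": radius, "pressure": pressure,
            "temperature": temperature}
```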
#### 2.2.2 Formatting

In order to limit the computation times when training the network, the input-output pairs do not contain all of the information supplied by VULCAN. For the inputs, a selection of six properties is made. These properties are extracted from VULCAN before time integration starts, so they can be interpreted as the initial conditions of the simulation. The six properties are:

1. **Initial mixing ratios**: the initial mixing ratios of the species in the simulation. These mixing ratios are calculated by VULCAN using FastChem (Stock et al., 2018). The shape of the array containing the mixing ratios is (69, 150), as the mixing ratios are defined for 69 species over 150 height layers each.
2. **Temperature profile**: the temperature profile as calculated by the analytical expression from Heng et al. (2014). The temperature is defined for every height layer, so it has a shape of (150,).
3. **Pressure profile**: the pressure profile that is calculated as part of the temperature profile calculation. It is of the same shape, (150,).
4. **Gravitational profile**: the gravitational acceleration per height layer. It has the shape (150,).
5. **Stellar flux component**: the component of the stellar spectrum containing the flux values. This is generated from either the MUSCLES database or the PHOENIX model and interpolated to a shape of (2500,).
6. **Stellar wavelength component**: the second component of the stellar spectrum, containing the wavelengths corresponding to the flux values. It has the same shape, (2500,).

For the outputs, we make use of the time-dependent mixing ratios. Because not every simulation takes the same amount of time to converge to a solution, the number of saved abundances differs per VULCAN simulation. To include the information contained in the evolution of the abundances through time, 10 sets of abundances, including the steady-state abundances, are saved in each output. This set of abundances is evenly spaced through time, so the simulation time between abundances will vary for different VULCAN simulation runs. Before the abundances are saved, they are converted to mixing ratios. The shape of the outputs is (10, 69, 150).

#### 2.2.3 Data Standardisation

The inputs and outputs of the various components differ by several orders of magnitude. To ensure that the neural network trained on the dataset is not biased towards higher-valued parameters, the data has to be standardised. First, the distributions of the properties are standardised according to Equation 3:

\[p_{s}=\frac{\log_{10}(p)-\mu}{\sigma}, \tag{3}\]

with

\[\mu=\frac{1}{n}\sum_{i=0}^{n}\log_{10}(p_{i}), \tag{4}\]

and

\[\sigma=\sqrt{\frac{1}{n}\sum_{i=0}^{n}\left(\log_{10}(p_{i})-\mu\right)^{2}}, \tag{5}\]

where \(p\) is the property to be scaled, \(n\) is the size of the dataset and \(p_{s}\) is the standardised property. After standardisation, the properties are normalised to the range [0, 1]:

\[p_{s,n}=\frac{p_{s}-\min(p_{s})}{\max(p_{s})-\min(p_{s})}, \tag{6}\]

where \(p_{s,n}\) is the final normalised property. Once the input properties are normalised, the output mixing ratios are normalised with the same scaling parameters as were used for the input mixing ratios. When the trained neural network is presented with an input for which to predict the mixing ratios, it only has information about the scaling parameters of the inputs. To be able to unnormalise the outputs, they need to be scaled with the same scaling parameters.
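Equations (3)-(6) translate directly into a short routine. Keeping the scaling parameters alongside the normalised data is what allows the network outputs to be unnormalised later, as described above; a minimal sketch:

```python
import numpy as np

def standardise(p):
    """Eqs. (3)-(6): log-standardise a property, then normalise to [0, 1].

    The scaling parameters are returned so that the same transform can be
    applied to the output mixing ratios and inverted on the predictions."""
    log_p = np.log10(p)
    mu, sigma = log_p.mean(), log_p.std()
    p_s = (log_p - mu) / sigma                # Eq. (3)
    lo, hi = p_s.min(), p_s.max()
    p_sn = (p_s - lo) / (hi - lo)             # Eq. (6)
    return p_sn, (mu, sigma, lo, hi)

def unnormalise(p_sn, scaling):
    """Invert the transform using the stored input scaling parameters."""
    mu, sigma, lo, hi = scaling
    return 10.0 ** ((p_sn * (hi - lo) + lo) * sigma + mu)
```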
### Model Architecture

#### 2.3.1 Autoencoder Structure

The input of each configuration within the dataset consists of roughly 15800 values. To speed up the training process and reduce the complexity of the neural network, we make use of an _autoencoder_ (AE) to reduce the dimensionality of the examples in the dataset. In previous studies, this approach has been shown to be an effective way to reduce dimensionality within chemical kinetics (e.g. Grassi et al., 2022). An autoencoder consists of two collaborating neural networks: an _encoder_ and a _decoder_. The task of the encoder is to reduce the dimensionality of the input data by extracting characterising features from the example and encoding them in a lower-dimensionality representation called the _latent representation_. The task of the decoder is to take the latent representation and use it to reconstruct the original input data with as little loss of information as possible. The encoder and decoder are trained simultaneously, and no restraints are placed on the way the autoencoder uses its _latent space_, apart from the size of the latent representations. As is discussed in Section 2.2.2, the inputs of the model consist of six properties. Because these properties do not share the same shape, we cannot encode and decode them using a single autoencoder. Instead, we construct six unique autoencoders, one for each property of the model inputs. Figure 1 shows an overview of the process of encoding the initial conditions. The decoding process is not shown but is symmetrical to the encoding process. Each encoder compresses a specific property into a corresponding latent representation. To get the latent representation of the entire input example, \(l_{i}\), the property latent representations \(\{l_{MR},l_{F},l_{W},l_{T},l_{P},l_{G}\}\) are concatenated. When decoding the latent representation of the input, the latent vector \(l_{i}\) is split back into the different property latent representations, each given to that property's decoder. Every encoder-decoder pair is trained separately. The hyperparameters of each autoencoder are optimized by trial and error. A summary of each set of hyperparameters is shown in table 1.

### Mixing Ratios Autoencoder

The mixing ratios are the largest contributor to the size of the model inputs. Compressing each of the species' mixing ratios efficiently reduces the size of the input latent representation, \(l_{i}\), by a substantial amount. Because of the limited size of the training dataset, a compromise has to be made to successfully train this autoencoder. Rather than concurrently encoding all 69 species for each example, each species is encoded individually. As a result, the training dataset expands by a factor of 69, while disregarding any potential correlations in species abundances during the encoding procedure. Figure 2 shows the application of such an autoencoder: for a given input, each of the 69 species' mixing ratios is encoded into corresponding latent vectors \(\{l_{1},l_{2},l_{3},...,l_{69}\}\). The concatenation of these 69 latent vectors then makes up the latent representation of the mixing ratios \(l_{MR}\). All encoders and decoders are multilayer perceptron (MLP) neural networks. For the mixing ratio autoencoder (MRAE), the encoder and decoder both consist of 7 fully connected layers, each followed by a hyperbolic tangent activation function. The encoder input layer has a size of 150 and the output layer has a size of 30. The hidden layers have a size of 256. Conversely, the decoder has an input layer size of 30 and an output layer size of 150. The compression factor of the MRAE is therefore \(150/30=5\). To train the MRAE, the dataset is split into a train dataset (70%), a validation dataset (20%), and a test dataset (10%). To increase the size of the dataset, the MRAE is trained on a shuffled set\({}^{1}\) of the mixing ratios of both the inputs and the outputs of the chemical kinetics simulations. The performance of the autoencoder is measured using the loss function in equation 7:

\[\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{p_{i}-a_{i}}{a_{i}}\right)^{2}, \tag{7}\]

where \(\mathcal{L}\) is the loss, \(N\) is the number of elements in the actual/predicted vector, and \(p_{i}\) and \(a_{i}\) are the \(i\)-th elements of the predicted and actual vectors, respectively.

Footnote 1: Using a random sampler function within the PyTorch package.

Figure 1: The different properties and their corresponding encoders that together encode the input data. The decoding process is symmetrical to this encoding process, where every property has a corresponding property decoder.

Figure 2: A more detailed sketch of the architecture of the mixing ratio autoencoder. MR \(i\) denotes the mixing ratio of a certain species, \(i\), for all height layers, and \(l_{i}\) denotes the encoded mixing ratios for species \(i\).
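A PyTorch sketch of the MRAE just described. The input/output size (150), latent size (30), hidden width (256), tanh activations, and the relative-error loss of equation 7 follow the text; the exact arrangement of the seven layers is our assumption.

```python
import torch
import torch.nn as nn

def tanh_mlp(sizes):
    """Fully connected stack with a tanh after every linear layer."""
    layers = []
    for fan_in, fan_out in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(fan_in, fan_out), nn.Tanh()]
    return nn.Sequential(*layers)

class MixingRatioAE(nn.Module):
    """150 -> 30 -> 150 autoencoder, 7 fully connected layers per side."""
    def __init__(self, n_layers=150, latent=30, hidden=256):
        super().__init__()
        self.encoder = tanh_mlp([n_layers] + [hidden] * 6 + [latent])
        self.decoder = tanh_mlp([latent] + [hidden] * 6 + [n_layers])

    def forward(self, x):
        return self.decoder(self.encoder(x))

def relative_mse(pred, actual):
    """Equation (7): mean squared relative reconstruction error."""
    return ((pred - actual) / actual).pow(2).mean()
```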
The MRAE is optimised using the Adam optimiser (Kingma and Ba, 2014), with a learning rate of \(10^{-5}\). A batch size of 32 is used, and the model is trained for 200 epochs. These hyperparameters can also be found in table 1.

### Atmospheric profile Autoencoders

The temperature, pressure, and gravity profiles all have the same shape of (150,), so their autoencoders can use the same architecture. Moreover, the atmospheric profiles share their shape with the mixing ratios of individual species (i.e. the height layers in the atmosphere). Therefore, a neural network structure very similar to that of the MRAE is used for the atmospheric profile autoencoders. The encoder input layer shape, the decoder output layer shape, and the hidden layer shapes are taken directly from the MRAE for all atmospheric profile autoencoders. An important parameter to tune for each atmospheric profile autoencoder separately is the size of the latent representations. The pressure profile is set logarithmically for all examples in the dataset. By taking the logarithm of the pressures, the spacing \(\log(P_{i+1})-\log(P_{i})\) between adjacent layers becomes constant. Theoretically, we then only need two values to fully describe the pressure profile: the pressure at the first and last height layers. To encode these values, no autoencoder is needed. One could take this one step further and provide input parameters like mass and radius, on which the pressure and gravity profiles depend, directly to the core model as inputs. While this more specialised approach is suitable for these input parameters, it is not generalisable to other input parameters. To keep the model architecture more general and adaptable to different input parameters, an autoencoder is used nonetheless. The size of the pressure profile autoencoder (PAE) latent representations is set to 2. This corresponds to a compression factor of \(150/2=75\). The temperature and gravity profiles are not linear. For both the temperature autoencoder (TAE) and gravity autoencoder (GAE), a latent representation size of 30 is used. This corresponds to a compression factor of \(150/30=5\), the same as for the MRAE. All profile autoencoders are evaluated using the loss function previously defined in equation 7 and are optimised using the Adam optimiser. The TAE and GAE use a learning rate of \(10^{-5}\), and the PAE uses a learning rate of \(10^{-6}\). All profile autoencoders are trained with a batch size of 4, for 100 epochs (see also table 1).

### Stellar Spectrum Autoencoders

After the mixing ratios, the stellar spectrum components contribute predominantly to the input data size. The stellar spectrum comprises a flux and a wavelength component. These components share the same shape, so one NN structure can be used for both autoencoders. The structure of the encoder and decoder is similar to that of the MRAE: a 7-layer, fully connected MLP, with hyperbolic tangent activation functions after each layer.
The encoder input layer and decoder output layer have a size of 2500, and the hidden layers have a size of 1024. Similarly to the PAE, the wavelength bins are spaced logarithmically. Again, only two values are needed to fully describe the wavelength range. The latent representation size for the wavelength autoencoder (WAE) is, therefore, also 2. The compression factor for this network is \(2500/2=1250\). The flux autoencoder (FAE) has a latent representation size of 256, which gives it a compression factor of \(2500/256\approx 10\). Both autoencoders are evaluated using the loss function from equation 7. They are optimised using the Adam optimiser, the WAE with a learning rate of \(10^{-7}\), and the FAE with a learning rate of \(10^{-5}\). They are both trained for 200 epochs with batches of 4 examples (see also table 1).

#### 2.3.2 Core Network

As mentioned before, the outputs are also large in dimensionality. Because the outputs contain mixing ratios for all species, for 10 time steps (Section 2.2), they can be encoded using the MRAE. Figure 3 shows how the autoencoder would encode both the inputs and the last time step of the outputs to their latent representations \(l_{i}\) and \(l_{o}\), respectively. Note that even though the autoencoder is shown twice in this figure, the same autoencoder is used to encode both the inputs and the outputs. In the middle of the figure, connecting the two latent spaces, a second neural network called the _core network_ is located. The function of the core network is to learn a mapping between the latent representations of the inputs and the evolved outputs. The design of the core network takes advantage of some of the characteristics of VULCAN. From Section 2.1 we know that VULCAN solves ODEs for specific atmospheric configurations over a simulated period of time. To impart this sense of time to the core neural network, a _Long Short-Term Memory_ (LSTM) network is used as the base of the design. The LSTM was chosen for its proven performance in numerous applications, from stellar variability (e.g. Jamal and Bloom, 2020) to core-collapse supernovae searches (e.g. Iess et al., 2023) and solar radio spectrum classification (e.g. Xu et al., 2019), as well as for its ease of implementation. The LSTM has known shortcomings, like the vanishing gradient problem and long training times when dealing with long sequences. However, with the short sequence length used in our model (i.e. 10 timesteps), these shortcomings are not considered problematic for this proof of concept. The input of the core network is not sequential in nature. With some changes, we can use the LSTM in a 'one-to-many' configuration. In this configuration, the initial output of the LSTM, \(h_{0}\), is given to an MLP. This MLP produces a vector with the same shape as the initial input \(x_{0}\), which can be interpreted as the 'evolved' input \(x_{1}\). This evolved input is fed back into the LSTM to produce \(h_{1}\), from which the MLP produces \(x_{2}\), and so forth. This can be repeated for an arbitrary number of steps.

Figure 3: An overview of the model architecture. The core neural network maps between the latent representations of the VULCAN inputs \(l_{i}\) and outputs \(l_{o}\).
The design of the core network is visualised in Figure 4. We interpret the latent representation of the inputs \(l_{i}\) as the initial value \(x_{0}\). The LSTM and MLP configuration produces 9 intermediate 'evolved' latent representations \(\{x_{1},...,x_{9}\}\) before arriving at the final evolved latent representation \(x_{10}\). We interpret this latent representation as the prediction of the latent representation of the evolved output \(l_{o}\).

#### Training

When the core model predicts a sequence of 10 latent representations, it is essentially traversing the latent space. We can guide the network to learn to traverse the latent space similarly to how VULCAN simulations evolve by using the sequence of outputs saved in the dataset (Section 2.2). We do this in two ways: first, we construct a loss function that depends not only on the accuracy of the prediction of the latent representation of the final output \(l_{o}\), but also on the accuracy of the intermediate latent representation predictions:

\[\mathcal{L}=\sum_{t=1}^{10}\left(\frac{1}{N}\sum_{i=1}^{N}\left(p_{t,i}-a_{t,i}\right)^{2}\right), \tag{8}\]

where \(\mathcal{L}\) is the loss, \(N\) is the number of elements in the actual/predicted vector, \(p_{t,i}\) is the \(i\)-th element of the latent representation prediction vector at time step \(t\), and \(a_{t,i}\) is the \(i\)-th element of the latent representation vector of the output at time step \(t\). With this notation, \(a_{10}=l_{o}\). By training a network with this loss function, we force the core network to evolve the latent mixing ratios similarly to how VULCAN evolves the mixing ratios. It should be noted that the latent representation of the inputs \(l_{i}\) is larger than the latent representation of the outputs \(l_{o}\), as it contains more properties than just the mixing ratios. The predicted latent representations \(x_{t}\) are therefore also larger than \(l_{o}\). To account for this, we only look at the elements corresponding to the encoded mixing ratios in \(l_{i}\) when comparing the predicted latent representations \(x_{t}\) and the output mixing ratios \(a_{t}\)/\(l_{o}\). To further incentivise the core network to adhere to VULCAN's evolution patterns, we can intercept the predicted latent representations \(x_{t}\) before they are fed back into the LSTM and replace them with the latent representation of the actual output of the corresponding time step. This way, the core network is always learning from latent representations that follow VULCAN's evolution, even if the network is predicting poorly. This is only done during the training of the network, when the true VULCAN outputs are known. During validation and testing, the predicted latent representations \(x_{t}\) are not altered. The core model LSTM has a hidden and cell state size of 4096. The MLP has only two layers: an input layer of size 4096 and an output layer of the same size as the latent representation of the inputs \(l_{i}\), followed by a hyperbolic tangent function. It is optimised with the Adam optimiser, with a learning rate of \(10^{-4}\) and a batch size of 8. It is trained for 100 epochs.
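A sketch of the one-to-many rollout and the loss of equation 8. The hidden size (4096), the linear-plus-tanh head, ten steps, and teacher forcing follow the text; the assumption that the mixing-ratio entries occupy the leading slice of \(l_{i}\) is ours, made only to keep the slicing simple.

```python
import torch
import torch.nn as nn

class CoreNetwork(nn.Module):
    """One-to-many LSTM whose hidden state is mapped back to the latent
    space by an MLP head and fed back in, for a fixed number of steps."""
    def __init__(self, latent_dim, n_mr, hidden=4096):
        super().__init__()
        self.n_mr = n_mr  # number of entries of l_i encoding mixing ratios
        self.cell = nn.LSTMCell(latent_dim, hidden)
        self.head = nn.Sequential(nn.Linear(hidden, latent_dim), nn.Tanh())

    def forward(self, l_i, targets=None, steps=10):
        x = l_i
        h = l_i.new_zeros(l_i.size(0), self.cell.hidden_size)
        c = torch.zeros_like(h)
        preds = []
        for t in range(steps):
            h, c = self.cell(x, (h, c))
            x = self.head(h)
            preds.append(x)
            if targets is not None:
                # Teacher forcing: overwrite the mixing-ratio slice with the
                # encoded VULCAN output of this time step (training only).
                x = torch.cat([targets[t], x[:, self.n_mr:]], dim=1)
        return torch.stack(preds)  # shape: (steps, batch, latent_dim)

def core_loss(preds, targets):
    """Equation (8), restricted to the mixing-ratio part of the latents."""
    n_mr = targets.size(-1)
    return ((preds[..., :n_mr] - targets) ** 2).mean(dim=-1).sum(dim=0).mean()
```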
#### 2.3.3 Deployment

When the trained model is deployed on the validation and test datasets, the first step is encoding the inputs into their latent representation \(l_{i}\) using the encoder part of the autoencoder (top left section in figure 3). The core network then predicts the latent representation of the evolved VULCAN output \(l_{o}\) by traversing the latent space in 10 steps (centre section in figure 3). The prediction of the latent representation of the VULCAN output is then decoded by the decoder part of the autoencoder to obtain the predicted mixing ratios (bottom right section in figure 3).

\begin{table} \begin{tabular}{l c c c c c c} **model** & **hidden size** & **latent size** & **optimiser** & **learning rate** & **batch size** & **epochs** \\ \hline \hline MRAE & 256 & 30 & Adam & \(10^{-5}\) & 32 & 200 \\ PAE & 256 & 2 & Adam & \(10^{-6}\) & 4 & 100 \\ TAE & 256 & 30 & Adam & \(10^{-5}\) & 4 & 100 \\ GAE & 256 & 30 & Adam & \(10^{-5}\) & 4 & 100 \\ FAE & 1024 & 256 & Adam & \(10^{-5}\) & 4 & 200 \\ WAE & 1024 & 2 & Adam & \(10^{-7}\) & 4 & 200 \\ \end{tabular} \end{table}

Table 1: Hyperparameters for the property autoencoders. Each autoencoder has an encoder and a decoder neural network. These are MLPs, consisting of 7 fully connected layers with hyperbolic tangent activation functions.

Figure 4: The design of the core network. It consists of a one-to-many LSTM + MLP configuration that is run for 10 steps.

## 3 Results

### Autoencoders

#### 3.1.1 Mixing Ratio Autoencoder

The top row of figure 5 shows the reconstructed mixing ratio values against their actual values (left plot), for all examples from the test dataset. For the entire range of mixing ratios, the majority of the reconstructions lie within an order of magnitude of the diagonal line that marks perfect reconstructions, with an R-squared value of \(R^{2}=0.9997\). The right plot of the top row of figure 5 shows the reconstruction error of the mixing ratios in logarithmic space. This scale is chosen because the autoencoders are trained in log space (see section 2.2). The solid line shows the median and the dashed lines show the 5th and 95th percentiles. From the right figure, we can see that 90% of the reconstructions have an error between -0.39 and 0.40 orders of magnitude.

#### 3.1.2 Flux Autoencoder

The middle row of figure 5 (left) shows the reconstructed flux values against their actual values. All reconstructed flux values are within 0.5 orders of magnitude of the graph diagonal. At fluxes with values around \(10^{6}\) erg nm\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\), the FAE slightly underpredicts the actual flux values. From the reconstruction error plot (right) we can see that 90% of the reconstructions have an error between -0.024 and 0.031 orders of magnitude. In this figure, we see a distinct underprediction of a small number of examples, which correspond to the high flux values being underpredicted.

#### 3.1.3 Wavelength Autoencoder

The bottom row of figure 5 (left) shows the reconstructed wavelength values against their actual values. All reconstructed wavelength values are close to the actual values, deviating less than \(\sim 10\) nm from the graph diagonal. We can see that for wavelengths with values around 650 nm, the WAE has a tendency to underpredict. From the reconstruction error plot of the wavelength values (right), we can see that 90% of the reconstructions have an error between -0.001 and 0.001 orders of magnitude. The slight underprediction of higher wavelength values is also visible in this figure.
#### 3.1.4 Pressure Profile Autoencoder

The top row of figure 6 (left) shows the reconstructed pressure values against their actual values. All reconstructed pressure values are well within 0.1 orders of magnitude of the graph diagonal. From the reconstruction error plot of the pressure values (right) we can see that 90% of the reconstructions have an error between -0.0003 and 0.0003 orders of magnitude. This figure also shows a very minor underprediction of some pressure values, which correspond to pressure values of \(\sim 10^{3}\) bar.

#### 3.1.5 Temperature Profile Autoencoder

The middle row of figure 6 (left) shows the reconstructed temperature values against their actual values, for samples from the test dataset. It is immediately obvious that this autoencoder cannot accurately reconstruct the temperature profiles. A fraction of the temperatures in the range \(\sim 750\) K \(<\) T \(\lesssim 1400\) K are reconstructed close to the graph diagonal, but the TAE largely overpredicts temperatures below \(\sim 750\) K and underpredicts temperatures above \(\sim 750\) K. The histogram of reconstruction errors of the temperature values (right) shows both under- and over-prediction. 90% of the reconstructions have an error between 15.8 K and 754.97 K. Most predictions outside this range are underpredictions.

#### 3.1.6 Gravity Profile Autoencoder

The GAE shows behaviour similar to the TAE. The bottom row of figure 6 (left) shows the reconstructed gravity values against their actual values, for samples from the test dataset. Gravity values below \(\sim 4000\) cm s\({}^{-2}\) are overpredicted by the autoencoder, while values above \(\sim 4000\) cm s\({}^{-2}\) are underpredicted. Only values around \(\sim 4000\) cm s\({}^{-2}\) are predicted accurately by the GAE. In the reconstruction error plot of the GAE in figure 6 (right) we can see that gravity values are consistently being over- and underpredicted. 90% of the reconstructions have an error between 197.46 and 4939.6 cm s\({}^{-2}\). Most predictions outside this range are over-predictions.

### Core Network

Because the TAE and GAE do not accurately reconstruct the temperature and gravity profiles, these profiles were not encoded for the final model. Instead, they were put directly into the latent representations of the inputs. This way, no information contained in these profiles is lost. The left plot in figure 7 shows the mixing ratios predicted by the trained neural network model against the actual mixing ratios for the test dataset. The histogram shows that most of the model predictions lie within \(\sim 1\) order of magnitude of the diagonal of the graph. A notable exception is the predictions for the few mixing ratios with values lower than \(\sim 10^{-44}\), for which the model overpredicts. These species can be neglected, since they are not abundant enough to play a big role in the chemistry or to show features in the observable spectra. Figure 7 (right) shows the mixing ratio prediction error of the neural network model, in log space. The solid line shows the median and the dashed lines show the 5th and 95th percentiles. From the figure, we can see that 90% of the model predictions have an error between -0.66 and 0.65 orders of magnitude. Outside of this range, the model does not show a clear tendency to either over- or underpredict. Figure 8 shows selected examples (best, typical, and worst cases) of predictions by the neural network compared with the output of the VULCAN model, for a selection of seven species.
The best case (top panel) shows a prediction that is almost indistinguishable from the actual mixing ratios. The examples in the typical case (middle panel) and worst case (lower panel) show larger prediction errors. In the typical case, CO\({}_{2}\), CO\({}_{3}\), and HCN have the largest prediction errors in the lower atmosphere, though these are still negligible. The worst case shows the largest prediction errors, with H having prediction errors of up to almost 1 order of magnitude in the upper atmosphere. Notably, this case has very strong photochemistry in the upper atmosphere, as the planet is positioned close to its host star. To compare the computational efficiency of VULCAN and the neural network model, the computational time to calculate or predict every example in the full dataset was recorded. The results are presented in table 2. It should be noted that the VULCAN simulations were run on similar, but older, hardware than the neural network model. The median computational times show a \(\sim 7.5\cdot 10^{3}\times\) decrease in computational time for the neural network model. The longest computational time required by the neural network model still shows a \(\sim 10^{3}\times\) decrease in computational time compared to the fastest VULCAN simulation.

Figure 5: The reconstructed against the actual input values (left column), and the reconstruction error in log space (right column) for the mixing ratios (top row), stellar flux (middle row), and the wavelengths (bottom row). The diagonal dashed line in the reconstructed vs. actual mixing ratios plot shows the performance of a perfectly reconstructing model. Here the colour represents the number of examples within each bin. In the reconstruction error figure, the solid line shows the median value, and the dashed lines show the 5th and 95th percentiles. The R\({}^{2}\) values of each reconstruction plot are shown in the left column.

Figure 6: The reconstructed against the actual input values (left column), and the reconstruction error in log space (right column) for the pressure profile (top row), temperature profile (middle row), and the gravity profile (bottom row). Note that the reconstruction error plots of the temperature and gravity values are calculated in linear space.

## 4 Discussion

In this study, we successfully used autoencoders to extract most of the characterising input features and encode them into latent representations for the mixing ratios, stellar flux, wavelengths, and pressure profiles. Within these four groups, the largest prediction errors stem from the MRAE due to the high variability in input values, as opposed to the other input sources. We included initial and evolved mixing ratios of 69 species over 150 height layers. Additionally, the mixing ratio profiles among species differed significantly from one another (e.g. CH\({}_{4}\) and CO in figure 8). This made the complexity of extracting and encoding the fundamental input features highest for this particular autoencoder. In contrast, the variety in the flux from the stellar spectra was much smaller. We obtained the stellar spectra from either the MUSCLES database or generated them using the PHOENIX model. The spectra from these different sources were quite distinct from each other in the EUV (0.5 - 200 nm). The PHOENIX models assume the spectra to follow blackbodies, while, in reality, M and K stars have been shown to be highly active in the EUV (Reiners and Basri, 2008), as was observed by the MUSCLES collaboration.
Nonetheless, the spectra within each method seemed largely similar, which made it more straightforward for the FAE to learn how to accurately reconstruct them. For both the PAE and the WAE, the profiles were linearly spaced in logarithmic space, which made it easy for the autoencoders to learn how to encode these parameters. It is remarkable, however, that the WAE was not able to perfectly reproduce the wavelengths. A solution would be to make use of a handcrafted algorithm that encodes merely the first and last elements in the array. It is recommended to make use of such an algorithm in future work. Finally, the temperature and gravity profile autoencoders were not successful at encoding and reconstructing their inputs. Both autoencoders produced the same solutions for each input example. The limited dataset size and large variations in the temperature and gravity example cases could explain why these autoencoders are prone to errors. Future studies could focus on improving these specific autoencoders by performing a root cause analysis. However, a more specialised approach to encoding the pressure and gravity profiles would be to provide hyperparameters, such as the planet mass and radius, directly to the core model. Such an approach negates the need to train autoencoders for these input parameters. The prediction of the core network (LSTM) is within one order of magnitude for the majority (>90%) of the predictions. These errors are comparable with the discrepancies between different chemical kinetics codes (Venot et al., 2012). However, the accuracy of predictions of different examples varies. This inconsistency can arise due to some bias within the dataset. Example cases similar to the best-case scenario (see figure 8) were more prevalent in the dataset, causing the core network to produce better predictions for this type of hot-Jupiter. Additionally, by plotting the loss of each validation case against the input parameters (see figure 9), it becomes apparent that some specific system parameters perform better than others. From figure 9, we see that planets with smaller orbit radii seem to have worse predictions. One explanation could be that these planets endure more irradiation from their host star, making photochemistry the dominant process in the upper atmosphere.

\begin{table} \begin{tabular}{c c c c} **code** & **median** & **minimum** & **maximum** \\ \hline \hline VULCAN & 5994.3 s & 1236.7 s & 102223.0 s \\ NN model & 0.77 s & 0.73 s & 0.93 s \\ \end{tabular} \end{table}

Table 2: Median, minimum and maximum running times of VULCAN and the neural network model for all configurations in the dataset. VULCAN runs were performed on a single CPU core using an Intel(R) Xeon(R) CPU E5-4620 @ 2.26GHz. The neural network model was run on a single core using an Intel(R) Xeon(R) W-1250 CPU @ 3.36GHz.

Figure 7: The LSTM predicted mixing ratios plotted against the actual mixing ratios (left) and the LSTM mixing ratio prediction error in log space (right). The dashed diagonal line in the left plot shows the performance of a perfectly predicting model and the colour of each bin represents the number of predictions. The solid line in the right plot shows the median value, and the dashed lines show the 5th and 95th percentiles.

Figure 8: The mixing ratios per height layer for the best (top), typical (middle), and worst (bottom) case of the validation set. The planet parameters for each case are given at the top of the plot. The solid lines show the actual mixing ratios as calculated by VULCAN, and the dashed lines show the neural network model predictions.
The abrupt and severe changes in abundances for some species due to photodissociation in the upper atmosphere could be difficult for the core network to learn with a limited dataset such as the one provided in this study. Also noteworthy is the correlation between the planetary mass and the performance of the core network. Higher-mass planets tend to have lower losses compared to lower-mass planets. Future work could focus on improving the prediction losses of the chemistry profiles for lower-mass planets and planets that orbit close to their host star. We also showed that the trained model consistently over-predicts mixing ratios that have a value lower than \(10^{-44}\). This can again be explained by the lack of examples with such low values. Species with mixing ratios this low are nevertheless small contributors to the atmospheric composition and are not expected to affect forward models. Finally, we want to note that the hyperparameters used in this study have been found by trial and error and have not been proven to be the most optimal values. Future studies could focus on a hyperparameter search for each individual autoencoder and the core network to find the most optimal parameters. Due to all the mentioned caveats, there is room for improvement in future work. Here we detail some of the aspects that are out of the scope of this paper, but that we will be looking into in future publications. Evidently, a larger training dataset is expected to improve the results significantly. In order to train a neural network to be more generalised and less biased, a more diverse and extensive dataset should be created. Free parameters that could be taken into account, which were not explored in this study, are variables such as the eddy diffusion coefficient\({}^{2}\), condensation, and the composition of the atmosphere (e.g. varying metallicity and the C/O ratio). Footnote 2: Note that vertical mixing is taken into account in every simulation, but is kept constant throughout the dataset. Another approach to possibly improve the results is to change the model itself. The traditional autoencoders can be replaced with _variational autoencoders_, VAEs (Kingma and Welling, 2013). These types of autoencoders are based on Bayesian statistics. It is possible to regulate the latent space such that similar input examples have similar latent representations that lie close to each other within the latent space. The core network might then be able to learn how to traverse a regulated latent space and predict more accurately. The core network itself can be improved by, for example, including more time steps within the LSTM. Adding more time steps will ensure that the network predicts the solutions in a way more similar to how VULCAN integrates toward the solution. A disadvantage of this is that the training time for the network will increase. Lastly, the recurrent neural network architecture could be changed to a _transformer_ design. Recently, the transformer neural network architecture, proposed by Vaswani et al. (2017), revolutionised the field of sequence transduction within machine learning. By using a so-called _attention_ mechanism, the transformer neural network outperforms recurrent neural networks in accuracy and efficiency.
Because of the similarity between transformer and recurrent neural network applications, the core model may perform better when changed to this new type of architecture. However, because transformers require tokenized inputs, the autoencoders would also have to be changed to produce the expected outputs. The implementation of a transformer would therefore increase the complexity of the entire model and should be done carefully. A different direction would be, for example, to use interpolation methods on the already existing dataset. The limitation of such methods is the difficulty of distribution: the size of the dataset used in this study is 600GB, as opposed to 3GB for the weights of the neural network used in this study.

## 5 Summary & Conclusions

In this study, we investigated the ability of a neural network to replace the time-dependent ordinary differential equations in the chemical kinetics code VULCAN (Tsai et al., 2017, 2021). The aim of this research was to explore the LSTM architecture for solving ordinary differential equations that include vertical mixing and photochemistry. We first created a dataset that contains the in- and outputs of VULCAN simulations of hot-Jupiter atmospheres. We made use of the planetary mass (0.5 - 20 \(M_{J}\)), the semi-major axis (0.01 - 0.5 AU), and the stellar radius (1 - 1.5 \(R_{\odot}\)) as free parameters. Other parameters for the VULCAN configurations were derived either from analytical relations or kept constant throughout the dataset. The input of the dataset comprises the initial mixing ratios, the stellar spectrum, the temperature and pressure profiles, and the gravity profiles. Note that the neural network trained in this study is limited to the chosen free parameters and cannot be used for atmospheric models that include e.g. condensation. The outputs of the dataset contain the mixing ratios of the species in the atmosphere, taken from 10 time steps (including the steady state) during the VULCAN simulation. This data was used to train a neural network that consists of two parts: the _autoencoder_ network and the _core_ network. The autoencoder was used to reduce the dimensionality of the input and output data from the dataset by encoding them into lower-dimensionality _latent representations_. The autoencoder network consisted of six smaller autoencoders, designed and trained to encode and decode the mixing ratios, flux, wavelengths, and temperature, pressure, and gravity profiles to and from their respective latent representations. The total input latent representation was the concatenation of these 6 smaller ones. The core network was designed to have an LSTM-based architecture, and it mapped the latent representation of the inputs to the encoded evolved output by traversing the _latent space_ in ten steps. During training, the latent representations at these ten steps were compared to the ten sets of mixing ratios saved in the outputs of the dataset to ensure that the core network evolves the latent representation in a similar fashion as the VULCAN simulation evolves the mixing ratios.

Figure 9: The loss as a function of the semi-major axis of each validation case. The colour represents the planet mass in \(\mathbf{M_{J}}\) and the size of each scatter point represents the size of the host star, which ranges between 1 \(R_{\odot}\) and 1.5 \(R_{\odot}\). The loss is calculated by making use of eq. 7.
To summarise, we found that: * the mixing ratios, flux, wavelengths, and pressure profile autoencoders were able to efficiently encode and accurately reconstruct their respective input properties * the autoencoders were not able to encode and decode the temperature- and gravity profiles successfully. These autoencoders were, therefore, not used and instead, these profiles were put directly into the latent representation of the inputs * the fully trained model (i.e. including the core network) was able to predict the mixing ratios of the species with errors in the range [-0.66, 0.65] orders of magnitude for 90% of the cases. Due to imbalances in the dataset, the model is biased to more accurately solve for some examples as compared to others * the fully trained model is \(\sim 10^{3}\) times faster than the VULCAN simulations Overall, this study has shown that machine learning is a suitable approach to accelerate chemical kinetics codes for modelling exoplanet atmospheres. ## Data Availability All simulated data created in this study will be shared upon reasonable request to the corresponding author. The code and results are publicly available on github.com/JuliusHendrix/MRP.
2303.03382
Globally Optimal Training of Neural Networks with Threshold Activation Functions
Threshold activation functions are highly preferable in neural networks due to their efficiency in hardware implementations. Moreover, their mode of operation is more interpretable and resembles that of biological neurons. However, traditional gradient based algorithms such as Gradient Descent cannot be used to train the parameters of neural networks with threshold activations since the activation function has zero gradient except at a single non-differentiable point. To this end, we study weight decay regularized training problems of deep neural networks with threshold activations. We first show that regularized deep threshold network training problems can be equivalently formulated as a standard convex optimization problem, which parallels the LASSO method, provided that the last hidden layer width exceeds a certain threshold. We also derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network. We corroborate our theoretical results with various numerical experiments.
Tolga Ergen, Halil Ibrahim Gulluk, Jonathan Lacotte, Mert Pilanci
2023-03-06T18:59:13Z
http://arxiv.org/abs/2303.03382v1
# Globally Optimal Training of Neural Networks with Threshold Activation Functions ###### Abstract Threshold activation functions are highly preferable in neural networks due to their efficiency in hardware implementations. Moreover, their mode of operation is more interpretable and resembles that of biological neurons. However, traditional gradient based algorithms such as Gradient Descent cannot be used to train the parameters of neural networks with threshold activations since the activation function has zero gradient except at a single non-differentiable point. To this end, we study weight decay regularized training problems of deep neural networks with threshold activations. We first show that regularized deep threshold network training problems can be equivalently formulated as a standard convex optimization problem, which parallels the LASSO method, provided that the last hidden layer width exceeds a certain threshold. We also derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network. We corroborate our theoretical results with various numerical experiments. ## 1 Introduction In the past decade, deep neural networks have proven remarkably useful in solving challenging problems and have become popular in many applications. The choice of activation plays a crucial role in their performance and practical implementation. In particular, even though neural networks with popular activation functions such as ReLU are successfully employed, they require advanced computational resources in training and evaluation, e.g., Graphical Processing Units (GPUs) (Coates et al., 2013). Consequently, training such deep networks is challenging especially without sophisticated hardware. On the other hand, the threshold activation offers a multitude of advantages: (1) computational efficiency, (2) compression/quantization to a binary latent dimension, (3) interpretability. Unfortunately, gradient based optimization methods fail in optimizing threshold activation networks due to the fact that the gradient is zero almost everywhere. To close this gap, we analyze the training problem of deep neural networks with the threshold activation function defined as \[\sigma_{s}(x)\!:=s\mathbbm{1}\{x\geq 0\}=\begin{cases}s&\text{if }x\geq 0\\ 0&\text{otherwise}\end{cases}, \tag{1}\] where \(s\in\mathbb{R}\) is a trainable amplitude parameter for the neuron. Our main result is that globally optimal deep threshold networks can be trained by solving a convex optimization problem. ### Why should we care about threshold networks? Neural networks with threshold activations are highly desirable due to the following reasons: * Since the threshold activation (1) is restricted to take values in \(\{0,s\}\), threshold neural network models are far more suitable for hardware implementations (Bartlett and Downs, 1992; Corwin et al., 1994). Specifically, these networks have a significantly lower memory footprint, less computational complexity, and consume less energy (Helwegen et al., 2019). * Modern neural networks have an extremely large number of full precision trainable parameters, so several computational barriers emerge during hardware implementations. One approach to mitigate these issues is reducing the network size by grouping the parameters via a hash function (Hubara et al., 2017; Chen et al., 2015). However, this still requires full precision training before the application of the hash function and thus fails to remedy the computational issues. 
On the other hand, neural networks with threshold activations need a minimal amount of bits. * Another approach to reduce the complexity is to quantize the weights and activations of the network (Hubara et al., 2017), and the threshold activation is inherently in a two-level quantized form. * The threshold activation is a valid model to simulate the behaviour of a biological neuron as detailed in Jain et al. (1996). Therefore, progress in this research field could shed light into the connection between biological and artificial neural networks. ### Related Work Although threshold networks are essential for several practical applications as detailed in the previous section, training their parameters is a difficult non-differentiable optimization problem due to the discrete nature of (1). For training of deep neural networks with popular activations, the common practice is to use first order gradient based algorithms such as Gradient Descent (GD), since the well known backpropagation algorithm efficiently calculates the gradient with respect to the parameters. However, the threshold activation in (1) has zero gradient except at a single non-differentiable point zero, and therefore, one cannot directly use gradient based algorithms to train the parameters of the network. In order to remedy this issue, numerous heuristic algorithms have been proposed in the literature as detailed below, but they still fail to globally optimize the training objective (see Figure 1). The Straight-Through Estimator (STE) is a widely used heuristic to train threshold networks (Bengio et al., 2013; Hinton, 2012). Since the gradient is zero almost everywhere, Bengio et al. (2013); Hinton (2012) proposed replacing the threshold activation with the identity function during only the backward pass. Later on, this approach was extended to employ various forms of the ReLU activation function, e.g., clipped ReLU, vanilla ReLU, Leaky ReLU (Yin et al., 2019; Cai et al., 2017; Xiao et al.), during the backward pass. Additionally, clipped versions of the identity function were also used as an alternative to STE (Hubara et al., 2017; Courbariaux et al., 2016; Rastegari et al., 2016). ### Contributions * We introduce polynomial-time trainable convex formulations of regularized deep threshold network training problems provided that a layer width exceeds a threshold detailed in Table 1. * In Theorem 2.2, we prove that the original non-convex training problem for two-layer networks is equivalent to a standard convex optimization problem. * We show that deep threshold network training problems are equivalent to standard convex optimization problems in Theorem 3.2. In stark contrast to two-layer networks, deep threshold networks can have a richer set of hyperplane arrangements due to multiple nonlinear layers (see Lemma 3.5). * In Section 3.1, we characterize the evolution of the set of hyperplane arrangements and consequently the hidden layer representation space as a recursive process (see Figure 3) as the network gets deeper. * We prove that when a certain layer width exceeds \(\mathcal{O}(\sqrt{n}/L)\), the regularized \(L\)-layer threshold network training further simplifies to a problem that can be solved in \(\mathcal{O}(n)\) time. Figure 1: Training comparison of our convex program in (7) with the non-convex training heuristic STE. We also indicate the time taken to solve the convex programs with markers. For non-convex STE, we repeat the training with \(5\) different initializations. In each case, our convex training algorithms achieve lower objective than all the non-convex heuristics (see Appendix B.5 for details). 
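To make the STE heuristic concrete, the following is a minimal PyTorch sketch of the threshold activation (1) with an identity backward pass in the style of Bengio et al. (2013); the class name and the toy usage are our own illustration, not code from the paper.

```python
# Threshold activation sigma_s(x) = s * 1{x >= 0} with a straight-through
# estimator: the backward pass pretends the indicator is the identity.
import torch

class ThresholdSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, s):
        ctx.save_for_backward(x, s)
        return s * (x >= 0).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        x, s = ctx.saved_tensors
        gate = (x >= 0).to(x.dtype)
        # STE (Bengio et al., 2013): treat d/dx 1{x>=0} as 1; the ReLU /
        # clipped-ReLU variants would additionally gate this gradient.
        grad_x = grad_out * s
        grad_s = (grad_out * gate).sum()
        return grad_x, grad_s

x = torch.randn(5, requires_grad=True)
s = torch.tensor(1.0, requires_grad=True)
ThresholdSTE.apply(x, s).sum().backward()
print(x.grad, s.grad)   # gradients flow despite the flat forward pass
```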
## 2 Two-layer threshold networks We first consider the following two-layer threshold network \[f_{\theta,2}(\mathbf{X}){=}\sigma_{\mathbf{s}}(\mathbf{X}\mathbf{W}^{(1)})\mathbf{w}^{(2)}{=}\sum_{j=1}^{m}s_{j}\mathbb{1}\left\{\mathbf{X}\mathbf{w}_{j}^{(1)}{\geq}0\right\}w_{j}^{(2)}, \tag{2}\] where the set of trainable parameters is \(\mathbf{W}^{(1)}\in\mathbb{R}^{d\times m},\mathbf{s}\in\mathbb{R}^{m},\mathbf{w}^{(2)}\in\mathbb{R}^{m}\) and \(\theta\) is a compact representation for the parameters, i.e., \(\theta:=\{\mathbf{W}^{(1)},\mathbf{s},\mathbf{w}^{(2)}\}\). Note that we include bias terms by concatenating a vector of ones to \(\mathbf{X}\). Next, consider the weight decay regularized training objective \[\mathcal{P}_{2}^{\mathrm{noncvx}}\,{:=}\,\min_{\mathbf{W}^{(1)},\mathbf{s},\mathbf{w}^{(2)}}\frac{1}{2}\left\|f_{\theta,2}(\mathbf{X})-\mathbf{y}\right\|_{2}^{2}+\frac{\beta}{2}\sum_{j=1}^{m}\left(\|\mathbf{w}_{j}^{(1)}\|_{2}^{2}+|s_{j}|^{2}+|w_{j}^{(2)}|^{2}\right)\,. \tag{3}\] Now, we apply a scaling between the variables \(s_{j}\) and \(w_{j}^{(2)}\) to reach an equivalent optimization problem. **Lemma 2.1** (Optimal scaling).: _The training problem in (3) can be equivalently stated as_ \[\mathcal{P}_{2}^{\mathrm{noncvx}}=\min_{\theta\in\Theta_{s}}\frac{1}{2}\left\|f_{\theta,2}(\mathbf{X})-\mathbf{y}\right\|_{2}^{2}+\beta\|\mathbf{w}^{(2)}\|_{1}\,, \tag{4}\] _where_ \(\Theta_{s}{:=}\left\{\theta:|s_{j}|=1,\forall j\in[m]\right\}\)_._ We next define the set of hyperplane arrangement patterns of the data matrix \(\mathbf{X}\) as \[\mathcal{H}(\mathbf{X}){:=}\left\{\mathbb{1}\{\mathbf{X}\mathbf{w}\geq 0\}:\mathbf{w}\in\mathbb{R}^{d}\right\}\subset\{0,1\}^{n}. \tag{5}\] We denote the distinct elements of the set \(\mathcal{H}(\mathbf{X})\) by \(\mathbf{d}_{1},\ldots,\mathbf{d}_{P}\in\{0,1\}^{n}\), where \(P{:=}\left|\mathcal{H}(\mathbf{X})\right|\) is the number of hyperplane arrangements. Using this fixed set of hyperplane arrangements \(\{\mathbf{d}_{i}\}_{i=1}^{P}\), we next prove that (4) is equivalent to the standard Lasso method (Tibshirani, 1996). **Theorem 2.2**.: _Let \(m\geq m^{*}\), then the non-convex regularized training problem (4) is equivalent to_ \[\mathcal{P}_{2}^{\mathrm{cvx}}=\min_{\mathbf{w}\in\mathbb{R}^{P}}\frac{1}{2}\left\|\mathbf{D}\mathbf{w}-\mathbf{y}\right\|_{2}^{2}+\beta\|\mathbf{w}\|_{1}, \tag{6}\] _where \(\mathbf{D}=[\mathbf{d}_{1}\quad\mathbf{d}_{2}\quad\ldots\quad\mathbf{d}_{P}]\) is a fixed \(n\times P\) matrix. Here, \(m^{*}\) is the cardinality of the optimal solution, which satisfies \(m^{*}\leq n+1\). Also, it holds that \(\mathcal{P}_{2}^{\mathrm{noncvx}}=\mathcal{P}_{2}^{\mathrm{cvx}}\)._ Theorem 2.2 proves that the original non-convex training problem in (3) can be equivalently solved as a standard convex Lasso problem using the hyperplane arrangement patterns as features. Surprisingly, the non-zero support of the convex optimizer in (6) matches that of the optimal weight-decay regularized threshold activation neurons in the non-convex problem (3). This brings us two major advantages over the standard non-convex training: * Since (6) is a standard convex problem, it can be globally optimized without resorting to non-convex optimization heuristics, e.g., initialization schemes, learning rate schedules etc. * Since (6) is a convex Lasso problem, there exist many efficient solvers (Efron et al., 2004); a small numerical sketch is given below. 
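As an illustration of Theorem 2.2, the following sketch collects arrangement patterns of a small random data matrix and solves the Lasso problem (6) with scikit-learn. The random sampling of \(\mathbf{w}\) only approximates the full enumeration of \(\mathcal{H}(\mathbf{X})\), and the rescaling of \(\beta\) into scikit-learn's objective is our own bookkeeping; the data are synthetic.

```python
# Sketch of (6): sample hyperplane arrangement patterns of X (an
# approximation to exact enumeration) and solve the resulting Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 20, 3
X = np.hstack([rng.standard_normal((n, d)), np.ones((n, 1))])  # bias column
y = rng.standard_normal(n)

# Approximate {1{Xw >= 0}} by sampling many random directions w.
W = rng.standard_normal((d + 1, 5000))
D = np.unique((X @ W >= 0).astype(float), axis=1)              # columns d_i

beta = 0.1
# sklearn's Lasso minimizes (1/(2n))||Dw - y||^2 + alpha*||w||_1, so we set
# alpha = beta / n to match the objective in (6).
model = Lasso(alpha=beta / n, fit_intercept=False, max_iter=50000)
model.fit(D, y)
print("active neurons (non-zeros, i.e. m*):", np.count_nonzero(model.coef_))
```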
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Result** & **Depth** & **Complexity** & **Minimum width** & **Globally optimal** \\ \hline Theorem 2.2 & \(2\) & \(\mathcal{O}(n^{3r})\) & \(m\geq m^{*}\) & ✓ (convex opt) \\ \hline Theorem 2.3 & \(2\) & \(\mathcal{O}(n)\) & \(m\geq n+2\) & ✓ (convex opt) \\ \hline Theorem 3.2 & \(L\) & \(\mathcal{O}(n^{3r\prod_{k=1}^{L-2}m_{k}})\) & \(m_{L-1}\geq m^{*}\) & ✓ (convex opt) \\ \hline Corollary 3.4 & \(L\) & \(\mathcal{O}(n)\) & \(\exists l:m_{l}\geq\mathcal{O}(\sqrt{n}/L)\) & ✓ (convex opt) \\ \hline \end{tabular} \end{table} Table 1: Summary of our results for the optimization of weight decay regularized threshold network training problems (\(n\): \(\#\) of data samples, \(d\): feature dimension, \(m_{l}\): \(\#\) of hidden neurons in layer \(l\), \(r\): rank of the training data matrix, \(m^{*}\): critical width, i.e., \(\#\) of neurons that obeys \(0\leq m^{*}\leq n+1\)) ### Simplified convex formulation for complete arrangements We now show that if the set of hyperplane arrangements of \(\mathbf{X}\) is complete, i.e., \(\mathcal{H}=\{0,1\}^{n}\) contains all boolean sequences of length \(n\), then the non-convex optimization problem in (3) can be simplified. We call these instances _complete arrangements_. In the case of two-layer threshold networks, complete arrangements emerge when the width of the network exceeds a threshold, specifically \(m\geq n+2\). We note that the \(m\geq n\) regime, also known as memorization, has been extensively studied in the recent literature (Bubeck et al., 2020; de Dios and Bruna, 2020; Pilanci and Ergen, 2020; Rosset et al., 2007). Particularly, these studies showed that as long as the width exceeds the number of samples, there exists a neural network model that can exactly fit an arbitrary dataset. Vershynin (2020); Bartlett et al. (2019) further improved the condition on the width by utilizing the expressive power of deeper networks and developed more sophisticated weight construction algorithms to fit the data. **Theorem 2.3**.: _We assume that the set of hyperplane arrangements of \(\mathbf{X}\) is complete, i.e., equal to the set of all length-\(n\) Boolean sequences, \(\mathcal{H}=\{0,1\}^{n}\). Suppose \(m\geq n+2\), then (4) is equivalent to_ \[\mathcal{P}^{\mathrm{cvx}}_{v2}\!:=\min_{\mathbf{\delta}\in\mathbb{R}^{n}}\frac{1}{2}\|\mathbf{\delta}-\mathbf{y}\|_{2}^{2}+\beta(\|(\mathbf{\delta})_{+}\|_{\infty}+\|(-\mathbf{\delta})_{+}\|_{\infty})\,, \tag{7}\] _and it holds that \(\mathcal{P}_{2}^{\mathrm{noncvx}}=\mathcal{P}^{\mathrm{cvx}}_{v2}\). Also, one can construct an optimal network with \(n+2\) neurons in time \(\mathcal{O}(n)\) based on the optimal solution to the convex problem (7)._ Based on Theorem 2.3, when the data matrix can be shattered, i.e., all \(2^{n}\) possible \(\mathbf{y}\in\{0,1\}^{n}\) labelings of the data points can be separated via a linear classifier, it follows that the set of hyperplane arrangements is complete. Consequently, the non-convex problem in (3) further simplifies to (7). ### Training complexity We first briefly summarize our complexity results for solving (6) and (7), and then provide the details of the derivations below. 
Our analysis reveals two interesting regimes: * (incomplete arrangements) When \(n+1\geq m\geq m^{*}\), we can solve (6) with \(\mathcal{O}(n^{3r})\) complexity, where \(r:=\text{rank}(\mathbf{X})\). Notice this is **polynomial-time** whenever the rank \(r\) is fixed. * (complete arrangements) When \(m\geq n+2\), we can solve (7) in closed-form, and the reconstruction of the non-convex parameters \(\mathbf{W}^{(1)}\) and \(\mathbf{w}^{(2)}\) requires only \(\mathcal{O}(n)\) time, independent of \(d\). **Computational complexity of (6):** To solve the optimization problem in (6), we first enumerate all possible hyperplane arrangements \(\{\mathbf{d}_{i}\}_{i=1}^{P}\). It is well known that given a rank-\(r\) data matrix, the number of hyperplane arrangements \(P\) is upper bounded by (Stanley et al., 2004; Cover, 1965) \[P\leq 2\sum_{k=0}^{r-1}\binom{n-1}{k}\leq 2r\left(\frac{e(n-1)}{r}\right)^{r}, \tag{8}\] where \(r=\text{rank}(\mathbf{X})\leq\min(n,d)\). Furthermore, these can be enumerated in \(\mathcal{O}(n^{r})\) (Edelsbrunner et al., 1986). Then, the complexity for solving (6) is \(\mathcal{O}(P^{3})\approx\mathcal{O}(n^{3r})\) (Efron et al., 2004). **Computational complexity of (7):** The problem in (7) is the proximal operator of a polyhedral norm. Since the problem is separable over the positive and negative parts of the parameter vector \(\mathbf{\delta}\), the optimal solution can be obtained by applying two proximal steps (Parikh and Boyd, 2014); a numerical sketch of these proximal steps is given below. As noted in Theorem 2.3, the reconstruction of the non-convex parameters \(\mathbf{W}^{(1)}\) and \(\mathbf{w}^{(2)}\) requires \(\mathcal{O}(n)\) time. ### A geometric interpretation To provide a geometric interpretation, we consider the weakly regularized case where \(\beta\to 0\). In this case, (6) reduces to the following minimum norm interpolation problem \[\min_{\mathbf{w}\in\mathbb{R}^{P}}\|\mathbf{w}\|_{1}\quad\mathrm{s.t.}\quad\mathbf{D}\mathbf{w}=\mathbf{y}. \tag{9}\] **Proposition 2.4**.: _The minimum \(\ell_{1}\) norm interpolation problem in (9) can be equivalently stated as_ \[\min_{t\geq 0}t\quad\mathrm{s.t.}\quad\mathbf{y}\in t\,\mathrm{Conv}\{\pm\mathbf{d}_{j},\forall j\in[P]\},\] _where \(\mathrm{Conv}(\mathcal{A})\) denotes the convex hull of a set \(\mathcal{A}\). This corresponds to the gauge function (see Rockafellar (2015)) of the hyperplane arrangement patterns and their negatives. We provide visualizations of the convex set \(\mathrm{Conv}\{\pm\mathbf{d}_{j},\forall j\in[P]\}\) in Figures 3 and 4 (see Appendix) for Example 3.1._ Proposition 2.4 implies that the non-convex threshold network training problem in (3) implicitly represents the label vector \(\mathbf{y}\) as a convex combination of the hyperplane arrangements determined by the data matrix \(\mathbf{X}\). Therefore, we explicitly characterize the representation space of threshold networks. We also remark that this interpretation extends to arbitrary depth as shown in Theorem 3.2. ### How to optimize the hidden layer weights? After training via the proposed convex programs in (6) and (7), we need to reconstruct the layer weights of the non-convex model in (2). We first construct the optimal hyperplane arrangements \(\mathbf{d}_{i}\) for (7) as detailed in Appendix A.4. Then, we have the following prediction model (2). 
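Returning to the proximal computation for (7) referenced above: the following is a minimal numpy sketch, assuming the sign-separation argument from Section 2.3 and using the Moreau identity \(\mathrm{prox}_{\beta\|\cdot\|_{\infty}}(\mathbf{v})=\mathbf{v}-\Pi_{\beta\mathcal{B}_{1}}(\mathbf{v})\), where \(\Pi_{\beta\mathcal{B}_{1}}\) projects onto the \(\ell_{1}\)-ball of radius \(\beta\). The sorting-based projection and this particular decomposition are our implementation choices, not code from the paper.

```python
# Two proximal steps solving (7): prox of beta*||.||_inf equals
# v - Proj_{beta * l1-ball}(v) by Moreau decomposition, applied separately
# to the positive and negative parts of y.
import numpy as np

def project_l1_ball(v, radius):
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                    # sorted magnitudes
    cssv = np.cumsum(u) - radius
    rho = np.nonzero(u > cssv / np.arange(1, len(u) + 1))[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, beta):
    return v - project_l1_ball(v, beta)             # Moreau identity

def solve_problem_7(y, beta):
    pos = prox_linf(np.maximum(y, 0.0), beta)       # shrink positive part
    neg = prox_linf(np.maximum(-y, 0.0), beta)      # shrink negative part
    return pos - neg

y = np.array([2.0, -1.0, 0.5, -3.0, 0.0])
print(solve_problem_7(y, beta=0.5))   # only the extreme entries shrink
```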
Notice that changing \(\mathbf{w}_{j}^{(1)}\) to any other vector \(\mathbf{w}_{j}^{\prime}\) with the same norm and such that \(\mathbb{1}\left\{\mathbf{X}\mathbf{w}_{j}^{(1)}\geq 0\right\}=\mathbb{1}\left\{\mathbf{X}\mathbf{w}_{j}^{\prime}\geq 0\right\}\) does not change the optimal training objective. Therefore, there are multiple global optima, and the one we choose might impact the generalization performance as discussed in Section 5. ## 3 Deep threshold networks We now analyze the \(L\)-layer parallel deep threshold network model with \(m_{L-1}\) subnetworks defined as \[f_{\theta,L}(\mathbf{X})=\sum_{k=1}^{m_{L-1}}\sigma_{\mathbf{s}^{(L-1)}}(\mathbf{X}_{k}^{(L-2)}\mathbf{w}_{k}^{(L-1)})w_{k}^{(L)}, \tag{10}\] where \(\theta:=\{\{\mathbf{W}_{k}^{(l)},\mathbf{s}_{k}^{(l)}\}_{l=1}^{L}\}_{k=1}^{m_{L-1}}\), \(\theta\in\Theta:=\{\theta:\mathbf{W}_{k}^{(l)}\in\mathbb{R}^{m_{l-1}\times m_{l}},\mathbf{s}_{k}^{(l)}\in\mathbb{R}^{m_{l}},\forall l,k\}\), \[\mathbf{X}_{k}^{(0)}:=\mathbf{X},\qquad\mathbf{X}_{k}^{(l)}:=\sigma_{\mathbf{s}_{k}^{(l)}}(\mathbf{X}_{k}^{(l-1)}\mathbf{W}_{k}^{(l)}),\ \forall l\in[L-1],\] and the subscript \(k\) is the index for the subnetworks (see Ergen and Pilanci (2021a,b); Wang et al. (2023) for more details about parallel networks). We next show that the standard weight decay regularized training problem can be cast as an \(\ell_{1}\)-norm minimization problem as in Lemma 2.1. **Lemma 3.1**.: _The following \(L\)-layer regularized threshold network training problem_ \[\mathcal{P}_{L}^{\mathrm{noncvx}}=\min_{\theta\in\Theta}\frac{1}{2}\|f_{\theta,L}(\mathbf{X})-\mathbf{y}\|_{2}^{2}+\frac{\beta}{2}\sum_{k=1}^{m_{L-1}}\sum_{l=1}^{L}(\|\mathbf{W}_{k}^{(l)}\|_{F}^{2}+\|\mathbf{s}_{k}^{(l)}\|_{2}^{2}) \tag{11}\] _can be reformulated as_ \[\mathcal{P}_{L}^{\mathrm{noncvx}}=\min_{\theta\in\Theta_{s}}\frac{1}{2}\left\|f_{\theta,L}(\mathbf{X})-\mathbf{y}\right\|_{2}^{2}+\beta\|\mathbf{w}^{(L)}\|_{1}, \tag{12}\] _where_ \(\Theta_{s}:=\{\theta\in\Theta:|s_{k}^{(L-1)}|=1\}\)_._ ### Characterizing the set of hyperplane arrangements for deep networks We first define hyperplane arrangements for \(L\)-layer networks with a single subnetwork (i.e., \(m_{L-1}=1\); thus, we drop the index \(k\)). We denote the set of hyperplane arrangements as \[\mathcal{H}_{L}(\mathbf{X}):=\{\mathbb{1}\left\{\mathbf{X}^{(L-2)}\mathbf{w}^{(L-1)}\geq 0\right\}:\theta\in\Theta\}.\] We also denote the elements of the set \(\mathcal{H}_{L}(\mathbf{X})\) by \(\mathbf{d}_{1},\ldots,\mathbf{d}_{P_{L-1}}\in\{0,1\}^{n}\), where \(P_{L-1}=|\mathcal{H}_{L}(\mathbf{X})|\) is the number of hyperplane arrangements in the layer \(L-1\). To construct the hyperplane arrangement matrix \(\mathbf{D}^{(l)}\), we define a matrix valued operator \(\mathcal{A}(\cdot)\) as follows \[\mathbf{D}^{(1)}:=\mathcal{A}\left(\mathbf{X}\right)=[\mathbf{d}_{1}\quad\mathbf{d}_{2}\quad\ldots\quad\mathbf{d}_{P_{1}}],\qquad\mathbf{D}^{(l+1)}:=\bigsqcup_{|\mathcal{S}|=m_{l}}\mathcal{A}\left(\mathbf{D}^{(l)}_{\mathcal{S}}\right). \tag{13}\]
Here, the operator \(\mathcal{A}\left(\cdot\right)\) outputs a matrix whose columns contain all possible hyperplane arrangements corresponding to its input matrix as in (5). In particular, \(\mathbf{D}^{(1)}\) denotes the arrangements for the first layer given the input matrix \(\mathbf{X}\). The notation \(\mathbf{D}^{(l)}_{\mathcal{S}}\in\{0,1\}^{n\times m_{l}}\) denotes the submatrix of \(\mathbf{D}^{(l)}\in\{0,1\}^{n\times P_{l}}\) indexed by the subset \(\mathcal{S}\) of its columns, where the index \(\mathcal{S}\) runs over all subsets of size \(m_{l}\). Finally, \(\sqcup\) is an operator that takes a union of these column vectors and outputs a matrix of size \(n\times P_{l+1}\) containing these as columns. Note that we may omit repeated columns and denote the total number of unique columns as \(P_{l+1}\), since this does not change the value of our convex program. We next provide an analytical example describing the construction of the matrix \(\mathbf{D}^{(l)}\). **Example 3.1**.: _We illustrate an example with the training data \(\mathbf{X}=[-1\ 1;0\ 1;1\ 1]\in\mathbb{R}^{3\times 2}\). Inspecting the data samples (rows of \(\mathbf{X}\)), we observe that all possible arrangement patterns are_ \[\mathbf{D}^{(1)}=\mathcal{A}\left(\mathbf{X}\right)=\begin{bmatrix}0&0&0&1&1&1\\ 0&0&1&1&1&0\\ 0&1&1&1&0&0\end{bmatrix}\implies P_{1}=6. \tag{14}\] _For the second layer, we first specify the number of neurons in the first layer as \(m_{1}=2\). Thus, we need to consider all possible column pairs in (14). We have_ \[\mathbf{D}^{(1)}_{\{1,2\}}=\begin{bmatrix}0&0\\ 0&0\\ 0&1\end{bmatrix}\implies\mathcal{A}\left(\begin{bmatrix}0&0\\ 0&0\\ 0&1\end{bmatrix}\right)=\begin{bmatrix}0&0&1&1\\ 0&0&1&1\\ 0&1&1&0\end{bmatrix}\] \[\mathbf{D}^{(1)}_{\{1,3\}}=\begin{bmatrix}0&0\\ 0&1\\ 0&1\end{bmatrix}\implies\mathcal{A}\left(\begin{bmatrix}0&0\\ 0&1\\ 0&1\end{bmatrix}\right)=\begin{bmatrix}0&0&1&1\\ 0&1&1&0\\ 0&1&1&0\end{bmatrix}\] \[\vdots\] _We then construct the hyperplane arrangement matrix as_ \[\mathbf{D}^{(2)}=\bigsqcup_{|\mathcal{S}|=2}\mathcal{A}\left(\mathbf{D}^{(1)}_{\mathcal{S}}\right)=\begin{bmatrix}0&0&0&0&1&1&1&1\\ 0&0&1&1&0&0&1&1\\ 0&1&0&1&0&1&0&1\end{bmatrix},\] _which shows that \(P_{2}=8\). Consequently, we obtain the maximum possible arrangement patterns, i.e., \(\{0,1\}^{3}\), in the second layer even though we are not able to obtain some of these patterns in the first layer in (14). We also provide a three dimensional visualization of this example in Figure 3._ 
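The recursion (13) in Example 3.1 can also be checked numerically. In the sketch below, the exact operator \(\mathcal{A}(\cdot)\) is approximated by dense random sampling of weight vectors (with a bias column appended, as assumed in Section 2), which recovers all arrangement patterns with high probability for such a tiny input; this sampling shortcut is ours, not the paper's enumeration procedure.

```python
# Numerical sketch of the recursive construction (13) on Example 3.1.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def arrangements(X, n_samples=100000):
    """Approximate A(X): unique columns of 1{[X, 1] w >= 0} over sampled w."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias term
    W = rng.standard_normal((Xb.shape[1], n_samples))
    return np.unique((Xb @ W >= 0).astype(int), axis=1)

X = np.array([[-1.0, 1.0], [0.0, 1.0], [1.0, 1.0]])
D1 = arrangements(X)
print("P_1 =", D1.shape[1])                         # 6, as in (14)

m1 = 2
cols = []
for S in combinations(range(D1.shape[1]), m1):      # all subsets |S| = m1
    cols.append(arrangements(D1[:, list(S)].astype(float)))
D2 = np.unique(np.hstack(cols), axis=1)
print("P_2 =", D2.shape[1])                         # 8 = |{0,1}^3|
```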
### Polynomial-time trainable convex formulation Based on the procedure described in Section 3.1 to compute the arrangement matrix \(\mathbf{D}^{(l)}\), we now derive an exact formulation for the non-convex training problem in (12). **Theorem 3.2**.: _Suppose that \(m_{L-1}\geq m^{*}\), then the non-convex training problem (12) is equivalent to_ \[\mathcal{P}^{\mathrm{cvx}}_{L}=\min_{\mathbf{w}\in\mathbb{R}^{P_{L-1}}}\frac{1}{2}\left\|\mathbf{D}^{(L-1)}\mathbf{w}-\mathbf{y}\right\|_{2}^{2}+\beta\|\mathbf{w}\|_{1}, \tag{15}\] _where \(\mathbf{D}^{(L-1)}\in\{0,1\}^{n\times P_{L-1}}\) is a fixed matrix constructed via (13) and \(m^{*}\) denotes the cardinality of the optimal solution, which satisfies \(m^{*}\leq n+1\). Also, it holds that \(\mathcal{P}^{\mathrm{noncvx}}_{L}=\mathcal{P}^{\mathrm{cvx}}_{L}\)._ Theorem 3.2 shows that two-layer and deep networks simplify to very similar convex Lasso problems (i.e., (6) and (15)). However, the set of hyperplane arrangements is larger for deep networks as analyzed in Section 3.1. Thus, the structure of the matrix \(\mathbf{D}\) and the problem dimensionality are significantly different for these problems. ### Simplified convex formulation Here, we show that the data \(\mathbf{X}\) can be shattered at a certain layer, i.e., \(\mathcal{H}_{l}(\mathbf{X})=\{0,1\}^{n}\) for a certain \(l\in[L]\), if the number of hidden neurons in a certain layer \(m_{l}\) satisfies \(m_{l}\geq C\sqrt{n}/L\). Then we can alternatively formulate (11) as a simpler convex problem. Therefore, compared to the two-layer networks in Section 2.1, we substantially improve the condition on the layer width by benefiting from the depth \(L\), which also confirms the benign impact of depth on the optimization. **Lemma 3.3**.: _If \(\exists l,C\) such that \(m_{l}\geq C\sqrt{n}/L\), then the set of hyperplane arrangements is complete, i.e., \(\mathcal{H}_{L}(\mathbf{X})=\{0,1\}^{n}\)._ We next use Lemma 3.3 to derive a simpler form of (11). **Corollary 3.4**.: _As a direct consequence of Theorem 2.3 and Lemma 3.3, the non-convex deep threshold network training problem in (12) can be cast as the following convex program_ \[\mathcal{P}_{L}^{\mathrm{noncvx}}=\mathcal{P}_{v2}^{\mathrm{cvx}}=\min_{\boldsymbol{\delta}\in\mathbb{R}^{n}}\frac{1}{2}\|\boldsymbol{\delta}-\mathbf{y}\|_{2}^{2}+\beta(\|(\boldsymbol{\delta})_{+}\|_{\infty}+\|(-\boldsymbol{\delta})_{+}\|_{\infty})\,.\] Surprisingly, both two-layer and deep networks share the same convex formulation in this case. However, notice that two-layer networks require a condition on the data matrix in Theorem 2.3, whereas the result in Corollary 3.4 requires a milder condition on the layer widths. ### Training complexity Here, we first briefly summarize our complexity results for the convex training of deep networks. Based on the convex problems in (15) and Corollary 3.4, we have two regimes: * When \(n+1\geq m_{L-1}\geq m^{*}\), we solve (15) with \(\mathcal{O}(n^{3r\prod_{k=1}^{L-2}m_{k}})\) complexity, where \(r:=\text{rank}(\mathbf{X})\). Note that this is **polynomial-time** when \(r\) and the number of neurons in each layer \(\{m_{l}\}_{l=1}^{L-2}\) are constants. * When \(\exists l,C:m_{l}\geq C\sqrt{n}/L\), we solve (7) in closed-form, and the reconstruction of the non-convex parameters requires \(\mathcal{O}(n)\) time as proven in Appendix A.9. **Computational complexity for (15):** We first need to obtain an upper bound on the problem dimensionality \(P_{L-1}\), which is stated in the next result. 
**Lemma 3.5**.: _The cardinality of the hyperplane arrangement set for an \(L\)-layer network \(\mathcal{H}_{L}(\mathbf{X})\) can be bounded as \(|\mathcal{H}_{L}(\mathbf{X})|=P_{L-1}\lesssim\mathcal{O}(n^{r\prod_{k=1}^{L-2}m_{k}})\), where \(r=\mathrm{rank}(\mathbf{X})\) and \(m_{l}\) denotes the number of hidden neurons in the \(l^{th}\) layer._ Lemma 3.5 shows that the set of hyperplane arrangements gets significantly larger as the depth of the network increases. However, the cardinality of this set is still a polynomial term since \(r\leq\min\{n,d\}\) and \(\{m_{l}\}_{l=1}^{L-2}\) are fixed constants. To solve (15), we first enumerate all possible arrangements \(\{\mathbf{d}_{i}\}_{i=1}^{P_{L-1}}\) to construct the matrix \(\mathbf{D}^{(L-1)}\). Then, we solve a standard convex Lasso problem, which requires \(\mathcal{O}(P_{L-1}^{3})\) complexity (Efron et al., 2004). Thus, based on Lemma 3.5, the overall complexity is \(\mathcal{O}(P_{L-1}^{3})\approx\mathcal{O}(n^{3r\prod_{k=1}^{L-2}m_{k}})\). **Computational complexity for (7):** Since Corollary 3.4 yields (7), the complexity is \(\mathcal{O}(n)\) time. ## 4 Extensions to arbitrary loss functions In the previous sections, we considered squared error as the loss function to give a clear description of our approach. However, all the derivations extend to arbitrary convex loss functions. Now, we consider the regularized training problem with a convex loss function \(\mathcal{L}(\cdot,\mathbf{y})\), e.g., hinge loss, cross entropy, \[\min_{\theta\in\Theta}\mathcal{L}(f_{\theta,L}(\mathbf{X}),\mathbf{y})+\frac{\beta}{2}\sum_{k=1}^{m_{L-1}}\sum_{l=1}^{L}(\|\mathbf{W}_{k}^{(l)}\|_{F}^{2}+\|\mathbf{s}_{k}^{(l)}\|_{2}^{2}). \tag{16}\] Then, we have the following generic loss results. **Corollary 4.1**.: _Theorem 3.2 implies that when \(m_{L-1}\geq m^{*}\), (16) can be equivalently stated as_ \[\mathcal{P}_{L}^{\mathrm{cvx}}=\min_{\mathbf{w}\in\mathbb{R}^{P_{L-1}}}\mathcal{L}\left(\mathbf{D}^{(L-1)}\mathbf{w},\mathbf{y}\right)+\beta\|\mathbf{w}\|_{1}. \tag{17}\] _Alternatively, when \(\mathcal{H}_{L}(\mathbf{X})=\{0,1\}^{n}\), based on Corollary 3.4, the equivalent convex problem is_ \[\min_{\boldsymbol{\delta}\in\mathbb{R}^{n}}\mathcal{L}(\boldsymbol{\delta},\mathbf{y})+\beta(\|(\boldsymbol{\delta})_{+}\|_{\infty}+\|(-\boldsymbol{\delta})_{+}\|_{\infty}). \tag{18}\] Corollary 4.1 shows that (17) and (18) are equivalent to the non-convex training problem in (16). More importantly, they can be globally optimized via efficient convex optimization solvers. ## 5 Experiments In this section1, we present numerical experiments verifying our theoretical results in the previous sections. As discussed in Section 2.4, after solving the proposed convex problems in (15) and (7), there exist multiple sets of weight matrices yielding the same optimal objective value. Therefore, to have a good generalization performance on test data, we use some heuristic methods for the construction of the non-convex parameters \(\{\mathbf{W}^{(l)}\}_{l=1}^{L}\). Below, we provide details regarding the weight construction and review some baseline non-convex training methods. Footnote 1: We provide additional experiments and details in Appendix B. **Convex-Lasso:** To solve the problem (15), we first approximate arrangement patterns of the data matrix \(\mathbf{X}\) by generating i.i.d. Gaussian weights \(\mathbf{G}\in\mathbb{R}^{d\times\hat{P}}\) and subsampling the arrangement patterns via \(\mathbb{1}\left[\mathbf{X}\mathbf{G}\geq 0\right]\). 
Then, we use \(\mathbf{G}\) as the hidden layer weights to construct the network. We repeat this process for every layer. Notice that here we sample a fixed subset of arrangements instead of enumerating all possible \(P\) arrangements. Thus, this approximately solves (15) by subsampling its decision variables; however, it still performs significantly better than standard non-convex training. **Convex-PI:** After solving (7), to recover the hidden layer weights of (2), we solve \(\mathbb{1}\left\{\mathbf{X}\mathbf{w}_{j}^{(1)}\geq 0\right\}=\mathbf{d}_{j}\) as \(\mathbf{w}_{j}^{(1)}=\mathbf{X}^{\dagger}\mathbf{d}_{j}\), where \(\mathbf{X}^{\dagger}\) denotes the pseudo-inverse of \(\mathbf{X}\). The resulting weights \(\mathbf{w}_{j}^{(1)}\) enforce the preactivations \(\mathbf{X}\mathbf{w}_{j}^{(1)}\) to be zero or one. Thus, if an entry is slightly higher or lower than zero due to precision issues during the pseudo-inverse, it might give a wrong output after the threshold activation. To avoid such cases, we use a \(0.5\) threshold in the test phase, i.e., \(\mathbb{1}\left\{\mathbf{X}_{\mathrm{test}}\mathbf{w}_{j}^{(1)}\geq 0.5\right\}\); a small sketch of this recovery step follows below. **Convex-SVM:** Another approach to solve \(\mathbb{1}\left\{\mathbf{X}\mathbf{w}_{j}^{(1)}\geq 0\right\}=\mathbf{d}_{j}\) is to use Support Vector Machines (SVMs), which find the maximum margin vector. Particularly, we set the zero entries of \(\mathbf{d}_{i}\) as \(-1\) and then directly run the SVM to get the maximum margin hidden neurons corresponding to this arrangement. Since the labels are in the form \(\{+1,-1\}\) in this case, we do not need additional thresholding as in the previous approach. **Nonconvex-STE** (Bengio et al., 2013): This is the standard non-convex training algorithm, where the threshold activation is replaced with the identity function during the backward pass. **STE Variants:** We also benchmark against variants of STE. Specifically, we replace the threshold activation with ReLU (**Nonconvex-ReLU** (Yin et al., 2019)), Leaky ReLU (**Nonconvex-LReLU** (Xiao et al.)), and clipped ReLU (**Nonconvex-CReLU** (Cai et al., 2017)) during the backward pass. **Synthetic Datasets:** We compare the performances of **Convex-PI** and **Convex-SVM** trained via (7) with the non-convex heuristic methods mentioned above. We first run each non-convex heuristic for five different initializations and then plot the best performing one in Figure 1. This experiment clearly shows that the non-convex heuristics fail to achieve the globally optimal training performance provided by our convex approaches. For the same setup, we also compare the training and test accuracies for three different regimes, i.e., \(n>d\), \(n=d\), and \(n<d\). As seen in Figure 2, our convex approaches not only globally optimize the training objective but also generalize well on the test data. Figure 2: In this figure, we compare the classification performance of three-layer threshold networks trained with the setup described in Figure 1 for a single initialization trial. This experiment shows that our convex training approach not only provides the globally optimal training performance but also generalizes remarkably well on the test data (see Appendix B.5 for details). 
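As referenced in the Convex-PI description above, the recovery step amounts to one pseudo-inverse solve per arrangement pattern. The toy sketch below assumes a full-row-rank \(\mathbf{X}\) (so the linear system is exactly solvable) and uses synthetic data of our own choosing.

```python
# Sketch of Convex-PI: solve 1{X w >= 0} = d_j via the pseudo-inverse, then
# threshold the preactivations at 0.5 to absorb numerical error.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 10                                   # n < d: full row rank => exact
X = rng.standard_normal((n, d))
d_j = (rng.random(n) < 0.5).astype(float)      # a target arrangement pattern

w_j = np.linalg.pinv(X) @ d_j                  # min-norm solution of X w = d_j
pattern = (X @ w_j >= 0.5).astype(float)       # 0.5 threshold at test time
assert (pattern == d_j).all()                  # pattern recovered exactly
```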
**Real Datasets:** In Table 2, we compare the test accuracies of two-layer threshold networks trained via our convex formulation in (15), i.e., **Convex-Lasso**, and the non-convex heuristics mentioned above. For this experiment, we use CIFAR-10 (Krizhevsky et al., 2014), MNIST (LeCun), and the datasets in the UCI repository (Dua and Graff, 2017), which are preprocessed as in Fernandez-Delgado et al. (2014). Here, our convex training approach achieves the highest test accuracy for most of the datasets, while the non-convex heuristics perform well only for a few datasets. This also validates the good generalization capabilities of the proposed convex training methods on real datasets. ## 6 Conclusion We proved that the training problem of regularized deep threshold networks can be equivalently formulated as a standard convex optimization problem with a fixed data matrix consisting of hyperplane arrangements determined by the data matrix and layer weights. Since the proposed formulation parallels the well studied Lasso model, we have two major advantages over the standard non-convex training methods: **1)** We globally optimize the network without resorting to any optimization heuristic or extensive hyperparameter search (e.g., learning rate schedule and initialization scheme); **2)** We efficiently solve the training problem using specialized solvers for Lasso. We also provided a computational complexity analysis and showed that the proposed convex program can be solved in polynomial time. Moreover, when a layer width exceeds a certain threshold, a simpler alternative convex formulation can be solved in \(\mathcal{O}(n)\) time. Lastly, as a by-product of our analysis, we characterized the recursive process behind the set of hyperplane arrangements for deep networks. Even though this set rapidly grows as the network gets deeper, globally optimizing the resulting Lasso problem still requires only polynomial-time complexity for fixed data rank. We also note that the convex analysis proposed in this work is generic in the sense that it can be applied to various architectures including batch normalization (Ergen et al., 2022), vector output networks (Sahiner et al., 2020, 2021), polynomial activations (Bartan and Pilanci, 2021), GANs (Sahiner et al., 2022), autoregressive models (Gupta et al., 2021), and Transformers (Ergen et al., 2022; Sahiner et al., 2022). 
[Table 2: Test accuracies and training times of two-layer threshold networks trained via Convex-Lasso and the non-convex heuristics (STE, ReLU, Leaky ReLU, and clipped ReLU variants) on CIFAR-10, MNIST, and the UCI repository datasets.] ## 7 Acknowledgements This work was partially supported by the National Science Foundation (NSF) under grants ECCS-2037304, DMS-2134248, NSF CAREER award CCF-2236829, the U.S. Army Research Office Early Career Award W911NF-21-1-0242, Stanford Precourt Institute, and the ACCESS - AI Chip Center for Emerging Smart Systems, sponsored by InnoHK funding, Hong Kong SAR.
2302.08840
Learnable Topological Features for Phylogenetic Inference via Graph Neural Networks
Structural information of phylogenetic tree topologies plays an important role in phylogenetic inference. However, finding appropriate topological structures for specific phylogenetic inference tasks often requires significant design effort and domain expertise. In this paper, we propose a novel structural representation method for phylogenetic inference based on learnable topological features. By combining the raw node features that minimize the Dirichlet energy with modern graph representation learning techniques, our learnable topological features can provide efficient structural information of phylogenetic trees that automatically adapts to different downstream tasks without requiring domain expertise. We demonstrate the effectiveness and efficiency of our method on a simulated data tree probability estimation task and a benchmark of challenging real data variational Bayesian phylogenetic inference problems.
Cheng Zhang
2023-02-17T12:26:03Z
http://arxiv.org/abs/2302.08840v1
# Learnable Topological Features for Phylogenetic Inference via Graph Neural Networks ###### Abstract Structural information of phylogenetic tree topologies plays an important role in phylogenetic inference. However, finding appropriate topological structures for specific phylogenetic inference tasks often requires significant design effort and domain expertise. In this paper, we propose a novel structural representation method for phylogenetic inference based on learnable topological features. By combining the raw node features that minimize the Dirichlet energy with modern graph representation learning techniques, our learnable topological features can provide efficient structural information of phylogenetic trees that automatically adapts to different downstream tasks without requiring domain expertise. We demonstrate the effectiveness and efficiency of our method on a simulated data tree probability estimation task and a benchmark of challenging real data variational Bayesian phylogenetic inference problems. ## 1 Introduction Phylogenetics is an important discipline of computational biology where the goal is to identify the evolutionary history and relationships among individuals or groups of biological entities. In statistical approaches to phylogenetics, this has been formulated as an inference problem on hypotheses of shared history, i.e., _phylogenetic trees_, based on observed sequence data (e.g., DNA, RNA, or protein sequences) under a model of evolution. The phylogenetic tree defines a probabilistic graphical model, based on which the likelihood of the observed sequences can be efficiently computed (Felsenstein, 2003). Many statistical inference procedures therefore can be applied, including maximum likelihood and Bayesian approaches (Felsenstein, 1981; Yang & Rannala, 1997; Mau et al., 1999; Huelsenbeck et al., 2001). Phylogenetic inference, however, has been challenging due to the composite parameter space of both continuous and discrete components (i.e., branch lengths and the tree topology) and the combinatorial explosion in the number of tree topologies with the number of sequences. Harnessing the topological information of trees hence becomes crucial in the development of efficient phylogenetic inference algorithms. For example, by assuming conditional independence of separated subtrees, Larget (2013) showed that conditional clade distributions (CCDs) can provide more reliable tree probability estimation that generalizes beyond observed samples. A similar approach was proposed to design more efficient proposals for tree movement when implementing Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetics (Hohna & Drummond, 2012). Utilizing more sophisticated local topological structures, CCDs were later generalized to subsplit Bayesian networks (SBNs) that provide more flexible distributions over tree topologies (Zhang & Matsen IV, 2018). Besides MCMC, variational Bayesian phylogenetics inference (VBPI) was recently proposed that leveraged SBNs and a structured amortization of branch lengths to deliver competitive posterior estimates in a more timely manner (Zhang & Matsen IV, 2019; Zhang, 2020; Zhang & Matsen IV, 2022). Azouri et al. (2021) used a machine learning approach to accelerate maximum likelihood tree-search algorithms by providing more informative topology moves. Topological features have also been found useful for comparison and interpretation of the reconstructed phylogenies (Matsen IV, 2007; Hayati et al., 2022). 
While these approaches prove effective in practice, they all rely on heuristic features (e.g., clades and subsplits) of phylogenetic trees that often require significant design effort and domain expertise, and may be insufficient for capturing complicated topological information. Graph Neural Networks (GNNs) are an effective framework for learning representations of graph-structured data. To encode the structural information about graphs, GNNs follow a neighborhood aggregation procedure that computes the representation vector of a node by recursively aggregating and transforming representation vectors of its neighboring nodes. After the final iteration of aggregation, the representation of the entire graph can also be obtained by pooling all the node embeddings together via some permutation invariant operators (Ying et al., 2018). Many GNN variants have been proposed and have achieved superior performance on both node-level and graph-level representation learning tasks (Kipf and Welling, 2017; Hamilton et al., 2017; Li et al., 2016; Zhang et al., 2018; Ying et al., 2018). A natural idea, therefore, is to adapt GNNs to phylogenetic models for automatic topological feature learning. However, the lack of node features for phylogenetic trees makes this challenging, as most GNN variants assume fully observed node features at initialization. In this paper, we propose a novel structural representation method for phylogenetic inference that automatically learns efficient topological features based on GNNs. To obtain the initial node features for phylogenetic trees, we follow previous studies (Zhu and Ghahramani, 2002; Rossi et al., 2021) to minimize the Dirichlet energy, with one hot encoding for the tip nodes. Unlike these previous studies, we present a fast linear time algorithm for Dirichlet energy minimization by taking advantage of the hierarchical structure of phylogenetic trees. Moreover, we prove that these features are sufficient for identifying the corresponding tree topology, i.e., there is no information loss in our raw feature representations of phylogenetic trees. These raw node features are then passed to GNNs for more sophisticated structure representation learning required by downstream tasks. Experiments on a synthetic data tree probability estimation problem and a benchmark of challenging real data variational Bayesian phylogenetic inference problems demonstrate the effectiveness and efficiency of our method. ## 2 Background Notation. A phylogenetic tree is denoted as \((\tau,\mathbf{q})\) where \(\tau\) is a bifurcating tree that represents the evolutionary relationship of the species and \(\mathbf{q}\) is a non-negative branch length vector that characterizes the amount of evolution along the edges of \(\tau\). The tip nodes of \(\tau\) correspond to the observed species and the internal nodes of \(\tau\) represent the unobserved characters (e.g., DNA bases) of the ancestral species. The transition probability \(P_{ij}(t)\) from character \(i\) to character \(j\) along an edge of length \(t\) is often defined by a continuous-time substitution model (e.g., Jukes and Cantor (1969)), whose stationary distribution is denoted as \(\eta\). Let \(E(\tau)\) be the set of edges of \(\tau\), and \(r\) be the root node (or any internal node if the tree is unrooted and the substitution model is reversible). Let \(\mathbf{Y}=\{Y_{1},Y_{2},\ldots,Y_{M}\}\in\Omega^{N\times M}\) be the observed sequences (with characters in \(\Omega\)) of length \(M\) over \(N\) species. 
Phylogenetic posterior. Assuming different sites \(Y_{i},i=1,\ldots,M\) are independent and identically distributed, the likelihood of observing \(\mathbf{Y}\) given the phylogenetic tree \((\tau,\mathbf{q})\) takes the form \[p(\mathbf{Y}|\tau,\mathbf{q})=\prod_{i=1}^{M}p(Y_{i}|\tau,\mathbf{q})=\prod_{i=1}^{M}\sum_{a^{i}}\eta(a^{i}_{r})\prod_{(u,v)\in E(\tau)}P_{a^{i}_{u}a^{i}_{v}}(q_{uv}), \tag{1}\] where \(a^{i}\) ranges over all extensions of \(Y_{i}\) to the internal nodes, with \(a^{i}_{u}\) being the assigned character of node \(u\). The above phylogenetic likelihood function can be computed efficiently through the pruning algorithm (Felsenstein, 2003). Given a prior distribution \(p(\tau,\mathbf{q})\) of the tree topology and the branch lengths, Bayesian phylogenetics then amounts to properly estimating the phylogenetic posterior \(p(\tau,\mathbf{q}|\mathbf{Y})\propto p(\mathbf{Y}|\tau,\mathbf{q})p(\tau,\mathbf{q})\). Variational Bayesian phylogenetic inference. Let \(Q_{\mathbf{\phi}}(\tau)\) be an SBN-based distribution over the tree topologies and \(Q_{\mathbf{\psi}}(\mathbf{q}|\tau)\) be a distribution over the non-negative branch lengths. VBPI finds the best approximation to \(p(\tau,\mathbf{q}|\mathbf{Y})\) from the family of products of \(Q_{\mathbf{\phi}}(\tau)\) and \(Q_{\mathbf{\psi}}(\mathbf{q}|\tau)\) by maximizing the following multi-sample lower bound \[L^{K}(\mathbf{\phi},\mathbf{\psi})=\mathbb{E}_{Q_{\mathbf{\phi},\mathbf{\psi}}(\tau^{1:K},\mathbf{q}^{1:K})}\log\left(\frac{1}{K}\sum_{i=1}^{K}\frac{p(\mathbf{Y}|\tau^{i},\mathbf{q}^{i})p(\tau^{i},\mathbf{q}^{i})}{Q_{\mathbf{\phi}}(\tau^{i})Q_{\mathbf{\psi}}(\mathbf{q}^{i}|\tau^{i})}\right)\leq\log p(\mathbf{Y}) \tag{2}\] where \(Q_{\mathbf{\phi},\mathbf{\psi}}(\tau^{1:K},\mathbf{q}^{1:K})=\prod_{i=1}^{K}Q_{\mathbf{\phi}}(\tau^{i})Q_{\mathbf{\psi}}(\mathbf{q}^{i}|\tau^{i})\). To properly parameterize the variational distributions, a support of the conditional probability tables (CPTs) is often acquired from a sample of tree topologies via fast heuristic bootstrap methods (Minh et al., 2013; Zhang & Matsen IV, 2019). The branch length approximation \(Q_{\mathbf{\psi}}(\mathbf{q}|\tau)\) is taken to be the diagonal Lognormal distribution \[Q_{\mathbf{\psi}}(\mathbf{q}|\tau)=\prod\nolimits_{e\in E(\tau)}p^{\mathrm{Lognormal}}\left(q_{e}\mid\mu(e,\tau),\sigma(e,\tau)\right)\] where \(\mu(e,\tau),\sigma(e,\tau)\) are amortized over the tree topology space via shared local structures (i.e., split and primary subsplit pairs (PSPs)), which are available from the support of CPTs. More details about structured amortization, VBPI and SBNs can be found in section 3.2.2 and Appendix A. Graph neural networks. Let \(G=(V,E)\) denote a graph with node feature vectors \(\mathbf{X}_{v}\) for node \(v\in V\), and \(\mathcal{N}(v)\) denote the set of nodes adjacent to \(v\). GNNs iteratively update the representation of a node by running a message passing (MP) scheme for \(T\) time steps. 
During each MP time step, the representation vectors of each node are updated based on the aggregated messages from its neighbors as follows \[\mathbf{h}_{v}^{(t+1)}=\mathrm{UPDATE}^{(t)}\left(\mathbf{h}_{v}^{(t)},\mathbf{m}_{v}^{(t+1)}\right),\quad\mathbf{m}_{v}^{(t+1)}=\mathrm{AGG}^{(t)}\left(\left\{\mathbf{h}_{u}^{(t)}:u\in\mathcal{N}(v)\right\}\right)\] where \(\mathbf{h}_{v}^{(t)}\) is the feature vector of node \(v\) at time step \(t\), with initialization \(\mathbf{h}_{v}^{(0)}=\mathbf{X}_{v}\), \(\mathrm{UPDATE}^{(t)}\) is the update function, and \(\mathrm{AGG}^{(t)}\) is the aggregation function. A number of powerful GNNs with different implementations of the update and aggregation functions have been proposed (Kipf & Welling, 2017; Hamilton et al., 2017; Li et al., 2016; Velickovic et al., 2018; Xu et al., 2019; Wang et al., 2019). In addition to the local node-level features, GNNs can also provide features for the entire graph. To learn these global features, an additional \(\mathrm{READOUT}\) function is often introduced to aggregate node features from the final iteration \[\mathbf{h}_{G}=\mathrm{READOUT}\left(\left\{\mathbf{h}_{v}^{(T)}:v\in V\right\}\right).\] \(\mathrm{READOUT}\) can be any function that is permutation invariant to the node features. ## 3 Proposed Method In this section, we propose a general approach that automatically learns topological features directly from phylogenetic trees. We first introduce a simple embedding method that provides raw features for the nodes of phylogenetic trees, together with an efficient linear time algorithm for obtaining these raw features and a discussion on some of their theoretical properties regarding tree topology representation. We then describe how these raw features can be adapted to learn efficient representations of certain structures of trees (e.g., edges) for downstream tasks. ### Interior Node Embedding Learning tree structure features directly from tree topologies often requires raw node/edge features, as typically assumed in most GNN models. Unfortunately, this is not the case for phylogenetic models. Although we can use one hot encoding for the tip nodes according to their corresponding species (taxa names only, not the sequences), the interior nodes still lack original features. The first step of tree structure representation learning for phylogenetic models, therefore, is to properly impute those missing features for the interior nodes. Following previous studies (Zhu & Ghahramani, 2002; Rossi et al., 2021), we make a common assumption that the node features change smoothly across the tree topologies (i.e., the features of every node are similar to those of the neighbors). A widely used criterion of smoothness for functions defined on nodes of a graph is the _Dirichlet energy_. Figure 1: An overview of the proposed topological feature learning framework for phylogenetic inference. **Left**: A phylogenetic tree topology with one hot encoding for the tip nodes and missing features for the interior nodes. **Middle**: Interior node embedding via Dirichlet energy minimization. **Right**: Subsequently, the tree topology with embedded node features is fed into a GNN model for more sophisticated tree structure representation learning required by downstream tasks. 
Given a tree topology \(\tau=(V,E)\) and a function \(f:V\mapsto\mathbb{R}^{d}\), the Dirichlet energy is defined as \[\ell(f,\tau)=\sum_{(u,v)\in E}\|f(u)-f(v)\|^{2}.\] Let \(V=V^{b}\cup V^{o}\), where \(V^{b}\) denotes the set of leaf nodes and \(V^{o}\) denotes the set of interior nodes. Let \(\mathbf{X}^{b}=\{\mathbf{x}_{v}|v\in V^{b}\}\) be the set of one hot embeddings for the leaf nodes. The interior node features \(\mathbf{X}^{o}=\{\mathbf{x}_{v}|v\in V^{o}\}\) can then be obtained by minimizing the Dirichlet energy \[\widehat{\mathbf{X}^{o}}=\mathop{\arg\min}_{\mathbf{X}^{o}}\ell(\mathbf{X}^{o},\mathbf{X}^{b},\tau)=\mathop{\arg\min}_{\mathbf{X}^{o}}\sum_{(u,v)\in E}\|\mathbf{x}_{u}-\mathbf{x}_{v}\|^{2}.\] #### 3.1.1 A Linear Time Two-pass Algorithm Note that the above Dirichlet energy function is convex; its minimizer therefore can be obtained by solving the following optimality condition \[\frac{\partial\ell(\mathbf{X}^{o},\mathbf{X}^{b},\tau)}{\partial\mathbf{X}^{o}}(\widehat{\mathbf{X}^{o}})=\mathbf{0}. \tag{3}\] It turns out that equation 3 has a closed-form solution based on matrix inversion. However, as matrix inversion scales cubically in general, it is infeasible for graphs with many nodes. Fortunately, by leveraging the hierarchical structure of phylogenetic trees, we can design a more efficient linear time algorithm for the solution of equation 3 as follows. We first rewrite equation 3 as a system of linear equations \[\sum\nolimits_{v\in\mathcal{N}(u)}(\widehat{\mathbf{x}}_{u}-\widehat{\mathbf{x}}_{v})=\mathbf{0},\quad\forall u\in V^{o},\qquad\widehat{\mathbf{x}}_{v}=\mathbf{x}_{v},\quad\forall v\in V^{b}, \tag{4}\] where \(\mathcal{N}(u)\) is the set of neighbors of node \(u\). Given a topological ordering induced by the tree1, we can obtain the solution within a two-pass sweep through the tree topology, similar to the Thomas algorithm for solving tridiagonal systems of linear equations (Thomas, 1949). In the first pass, we traverse the tree in a postorder fashion and express the node features as a linear function of those of their parents, Footnote 1: This is trivial for rooted trees since they are directed. For unrooted trees, we can choose an interior node as the root node and use the topological ordering of the corresponding rooted trees. \[\widehat{\mathbf{x}}_{u}=c_{u}\widehat{\mathbf{x}}_{\pi_{u}}+\mathbf{d}_{u}, \tag{5}\] for all the nodes except the root node, where \(\pi_{u}\) denotes the parent node of \(u\). More specifically, we first initialize \(c_{u}=0,\mathbf{d}_{u}=\mathbf{x}_{u}\) for all leaf nodes \(u\in V^{b}\). For all the interior nodes except the root node, we compute \(c_{u},\mathbf{d}_{u}\) recursively as follows (see a detailed derivation in Appendix B) \[c_{u}=\frac{1}{|\mathcal{N}(u)|-\sum_{v\in\mathrm{ch}(u)}c_{v}},\quad\mathbf{d}_{u}=\frac{\sum_{v\in\mathrm{ch}(u)}\mathbf{d}_{v}}{|\mathcal{N}(u)|-\sum_{v\in\mathrm{ch}(u)}c_{v}}, \tag{6}\] where \(\mathrm{ch}(u)\) denotes the set of child nodes of \(u\). In the second pass, we traverse the tree in a preorder fashion and compute the solution by back substitution. Concretely, at the root node \(r\), given equation 5 for all the child nodes from the first pass, we can compute the node feature directly from equation 4 as below \[\widehat{\mathbf{x}}_{r}=\frac{\sum_{v\in\mathrm{ch}(r)}\mathbf{d}_{v}}{|\mathcal{N}(r)|-\sum_{v\in\mathrm{ch}(r)}c_{v}}. \tag{7}\] For all the other interior nodes, the node features can be obtained via equation 5 by substituting in the learned features of the parent nodes. We summarize our two-pass algorithm in Algorithm 1. 
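For concreteness, the following is a small Python sketch of the two-pass algorithm on a toy rooted binary tree; the node numbering and data structures are our own illustration, not the paper's Algorithm 1 verbatim.

```python
# Two-pass interior node embedding: postorder pass builds the linear
# relations (5)-(6), preorder pass back-substitutes from the root via (7).
import numpy as np

# Rooted tree: node 0 is the root; nodes 3-6 are the tips (leaves).
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
parent = {v: u for u, kids in children.items() for v in kids}
tips = [v for v in children if not children[v]]
x = {v: np.eye(len(tips))[i] for i, v in enumerate(tips)}   # one hot tips
deg = {u: len(children[u]) + (u in parent) for u in children}  # |N(u)|

def postorder(u):
    for v in children[u]:
        yield from postorder(v)
    yield u

def preorder(u):
    yield u
    for v in children[u]:
        yield from preorder(v)

# First pass: express x_u = c_u * x_parent(u) + d_u.
c, d = {}, {}
for u in postorder(0):
    if not children[u]:
        c[u], d[u] = 0.0, x[u]                  # tips: features are known
    elif u != 0:
        denom = deg[u] - sum(c[v] for v in children[u])
        c[u] = 1.0 / denom                      # equation (6)
        d[u] = sum(d[v] for v in children[u]) / denom

# Second pass: root via equation (7), then back substitution via (5).
x[0] = sum(d[v] for v in children[0]) / (deg[0] - sum(c[v] for v in children[0]))
for u in preorder(0):
    if u != 0 and children[u]:
        x[u] = c[u] * x[parent[u]] + d[u]

print(x[0])   # [0.25 0.25 0.25 0.25]; entries sum to one, cf. Theorem 1
```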
Moreover, the algorithm is numerically stable due to the following lemma (proof in Appendix C).

**Lemma 1**.: _Let \(\lambda=\min_{u\in V^{o}\setminus\{r\}}|\mathcal{N}(u)|\). For all interior nodes \(u\in V^{o}\setminus\{r\}\), \(0\leq c_{u}\leq\frac{1}{\lambda-1}\)._

Besides bifurcating phylogenetic trees, the above two-pass algorithm can be easily adapted to interior node embedding for general tree-shaped graphs with given tip node features.

#### 3.1.2 Tree Topology Representation Power

In this section, we discuss some theoretical properties regarding the tree topology representation power of the node features introduced above. We start with a useful lemma that elucidates an important behavior of the solution to the linear system 4, which is similar to that of solutions to elliptic equations.

**Lemma 2** (Extremum Principle).: _Let \(\{\widehat{\mathbf{x}}_{u}\in\mathbb{R}^{d}|u\in V\}\) be a set of \(d\)-dimensional node features that satisfies equations 4. \(\forall 1\leq n\leq d\), let \(\widehat{\mathbf{X}}[n]=\{\widehat{\mathbf{x}}_{u}[n]|u\in V\}\) be the set of the \(n\)-th components of node features. Then, \(\forall 1\leq n\leq d\), we have: (i) the extremum values (i.e., maximum and minimum) of \(\widehat{\mathbf{X}}[n]\) can be achieved at some tip nodes; (ii) if the extremum values are achieved at some interior nodes, then \(\widehat{\mathbf{X}}[n]\) has only one member, or in other words, \(\widehat{\mathbf{x}}_{u}[n]\) is the same \(\forall u\in V\)._

**Theorem 1**.: _Let \(N\) be the number of tip nodes. Let \(\{\widehat{\mathbf{x}}_{u}\in\mathbb{R}^{N}|u\in V\}\) be the solution to the linear system 4 with one hot encoding for the tip nodes. Then, \(\forall u\in V^{o}\), we have_ \[(i)\ 0<\widehat{\mathbf{x}}_{u}[n]<1,\quad\forall 1\leq n\leq N,\quad\text{and}\quad(ii)\ \sum\nolimits_{n=1}^{N}\widehat{\mathbf{x}}_{u}[n]=1.\]

The complete proofs of Lemma 2 and Theorem 1 are provided in Appendix C. When the tip node features are linearly independent, a similar proposition holds when we instead consider the coefficients of the linear combination of tip node features that represents each interior node feature.

**Corollary 1**.: _Suppose that the tip node features are linearly independent; then the interior node features obtained from the solution to the linear system 4 all lie in the interior of the convex hull of all tip node features._

The proof is provided in Appendix C. The following lemma reveals a key property of the nodes that are adjacent to the boundary of the tree topology in the embedded feature space.

**Lemma 3**.: _Let \(\{\widehat{\mathbf{x}}_{u}|u\in V\}\) be the solution to the linear system 4, with linearly independent tip node features. Let \(\{\widehat{\mathbf{x}}_{u}=\sum_{v\in V^{b}}a^{u}_{v}\mathbf{x}_{v}|u\in V^{o}\}\) be the convex combination representations of the interior node features.
For any tip node \(v\in V^{b}\), we have_ \[u^{*}=\arg\max_{u\in V^{o}}a^{u}_{v}\quad\Leftrightarrow\quad u^{*}\in\mathcal{N}(v).\]

**Theorem 2** (Identifiability).: _Let \(\mathbf{X}^{o}=\{\widehat{\mathbf{x}}_{u}|u\in V^{o}\}\) and \(\mathbf{Z}^{o}=\{\widehat{\mathbf{z}}_{u}|u\in V^{o}\}\) be the sets of interior node features that minimize the Dirichlet energy for phylogenetic tree topologies \(\tau_{x}\) and \(\tau_{z}\) respectively, given the same linearly independent tip node features. If \(\mathbf{X}^{o}=\mathbf{Z}^{o}\), then \(\tau_{x}=\tau_{z}\)._

The proofs of Lemma 3 and Theorem 2 are provided in Appendix C. By Theorem 2, we see that the proposed node embeddings are complete representations of phylogenetic tree topologies with no information loss.

### 3.2 Structural Representation Learning via Graph Neural Networks

Using the node embeddings introduced in section 3.1 as raw features, we now show how to learn more sophisticated representations of tree structures for different phylogenetic inference tasks via GNNs. Given a tree topology \(\tau\), let \(\{\mathbf{h}^{(0)}_{v}:v\in V\}\) be the raw features and \(\{\mathbf{h}^{(T)}_{v}:v\in V\}\) be the output features after the final iteration of GNNs. We feed these output features of GNNs into a multi-layer perceptron (MLP) to get a set of learnable features for each node \[\mathbf{h}_{v}=\operatorname{MLP}^{(0)}\left(\mathbf{h}^{(T)}_{v}\right),\quad\forall\ v\in V,\] before adapting to different downstream tasks, as demonstrated in the following examples.

#### 3.2.1 Energy Based Models for Tree Probability Estimation

Our first example is on graph-level representation learning of phylogenetic tree topologies. Let \(\mathcal{T}\) denote the entire tree topology space. Given learnable node features of tree topologies, one can use a permutation invariant function \(g\) to obtain graph-level features and hence create an energy function \(F_{\mathbf{\phi}}:\mathcal{T}\mapsto\mathbb{R}\) that assigns each tree topology a scalar value as follows \[F_{\mathbf{\phi}}(\tau)=\mathrm{MLP}^{(1)}(\mathbf{h}_{G}),\quad\mathbf{h}_{G}=g\left(\{\mathbf{h}_{v}:v\in V\}\right),\] where \(g\circ\mathrm{MLP}^{(0)}\) can be viewed as a \(\mathrm{READOUT}\) function in section 2. This allows us to construct energy based models (EBMs) for tree probability estimation \[q_{\mathbf{\phi}}(\tau)=\frac{\exp\left(-F_{\mathbf{\phi}}(\tau)\right)}{Z(\mathbf{\phi})},\quad Z(\mathbf{\phi})=\sum\nolimits_{\tau\in\mathcal{T}}\exp\left(-F_{\mathbf{\phi}}(\tau)\right).\] As \(Z(\mathbf{\phi})\) is usually intractable, we can employ noise contrastive estimation (NCE) (Gutmann and Hyvarinen, 2010) to train these energy based models. Let \(p_{n}\) be some noise distribution that has tractable density and allows efficient sampling procedures. Let \(D_{\mathbf{\phi}}(\tau)=\log q_{\mathbf{\phi}}(\tau)-\log p_{n}(\tau).\) We can train \(D_{\mathbf{\phi}}\)2 to minimize the following objective function (NCE loss) \[J(\mathbf{\phi})=-\left(\mathbb{E}_{\tau\sim p_{\mathrm{data}}(\tau)}\log\left(S\left(D_{\mathbf{\phi}}(\tau)\right)\right)+\mathbb{E}_{\tau\sim p_{n}(\tau)}\log\left(1-S\left(D_{\mathbf{\phi}}(\tau)\right)\right)\right),\] where \(S(x)=\frac{1}{1+\exp(-x)}\) is the sigmoid function.

Footnote 2: Here \(Z(\mathbf{\phi})\) is taken as a free parameter and is included into \(\mathbf{\phi}\).

It is easy to verify that the minimum is achieved at \(D_{\mathbf{\phi}^{*}}(\tau)=\log p_{\mathrm{data}}(\tau)-\log p_{n}(\tau)\); therefore, \(q_{\mathbf{\phi}^{*}}(\tau)=p_{\mathrm{data}}(\tau)=p_{n}(\tau)\exp\left(D_{\mathbf{\phi}^{*}}(\tau)\right)\). A minimal sketch of this NCE objective is given below.
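As an illustration of the training objective, the following PyTorch-style sketch computes the NCE loss, assuming an energy network `energy_fn` mapping batched tree representations to scalars and a tractable noise log-density `log_pn`; these names are ours, not the paper's code.

```python
import torch
import torch.nn.functional as F

def nce_loss(energy_fn, log_pn, data_trees, noise_trees, log_Z):
    """Noise contrastive estimation for an EBM q(tau) ~ exp(-F(tau)) / Z.

    energy_fn: callable returning F_phi(tau) for a batch of trees
    log_pn:    callable returning log p_n(tau) for a batch of trees
    log_Z:     free scalar parameter absorbing the partition function
    """
    # D_phi(tau) = log q_phi(tau) - log p_n(tau)
    d_data = -energy_fn(data_trees) - log_Z - log_pn(data_trees)
    d_noise = -energy_fn(noise_trees) - log_Z - log_pn(noise_trees)

    # J(phi) = -E_data[log S(D)] - E_noise[log(1 - S(D))]
    loss = F.binary_cross_entropy_with_logits(
        d_data, torch.ones_like(d_data)
    ) + F.binary_cross_entropy_with_logits(
        d_noise, torch.zeros_like(d_noise)
    )
    return loss
```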
#### 3.2.2 Branch Length Parameterization for VBPI

The branch length parameterization in VBPI has so far relied on hand-engineered features (i.e., splits and PSPs) for the edges of tree topologies. Let \(\mathbb{S}_{\mathrm{r}}\) denote the set of splits and \(\mathbb{S}_{\mathrm{psp}}\) denote the set of PSPs. The simple split-based parameterization assigns parameters \(\mathbf{\psi}^{\mu},\mathbf{\psi}^{\sigma}\) to the splits in \(\mathbb{S}_{\mathrm{r}}\). The mean and standard deviation for each edge \(e\) on \(\tau\) are then given by the associated parameters of the corresponding split \(e/\tau\) as follows \[\mu(e,\tau)=\psi_{e/\tau}^{\mu},\quad\sigma(e,\tau)=\psi_{e/\tau}^{\sigma}. \tag{8}\] The more flexible PSP parameterization assigns additional parameters to the PSPs in \(\mathbb{S}_{\mathrm{psp}}\) and adds the associated parameters of the corresponding PSPs \(e/\!\!/\tau\) to equation 8 to refine the mean and standard deviation parameterization \[\mu(e,\tau)=\psi_{e/\tau}^{\mu}+\sum\nolimits_{s\in e/\!\!/\tau}\psi_{s}^{\mu},\ \ \sigma(e,\tau)=\psi_{e/\tau}^{\sigma}+\sum\nolimits_{s\in e/\!\!/\tau}\psi_{s}^{\sigma}. \tag{9}\] Although these heuristic features prove effective, they often require substantial design effort and a sample of tree topologies for feature collection, and they cannot adapt themselves during training, which makes amortized inference over different tree topologies difficult. Based on the learnable node features, we can design a more flexible branch length parameterization that is capable of distilling more effective structural information from tree topologies for variational approximations. For each edge \(e=(u,v)\) on \(\tau\), similarly as in section 3.2.1, one can use a permutation invariant function \(f\) to obtain edge-level features and transform them into the mean and standard deviation parameters as follows \[\mu(e,\tau)=\mathrm{MLP}^{\mu}\left(\mathbf{h}_{e}\right),\quad\sigma(e,\tau)=\mathrm{MLP}^{\sigma}\left(\mathbf{h}_{e}\right),\quad\mathbf{h}_{e}=f\left(\{\mathbf{h}_{u},\mathbf{h}_{v}\}\right). \tag{10}\] Compared to the heuristic feature based parameterizations in 8 and 9, the learnable topological feature based parameterization in 10 allows a much richer design for the branch length distributions across different tree topologies and does not require pre-sampled tree topologies for feature collection. A minimal sketch of this learnable parameterization is given below.
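As an illustration, the following PyTorch sketch implements equation 10 with elementwise maximization as the permutation invariant function \(f\) (the choice used for edge-level features in the experiments); the module name and the hidden width are ours, not the paper's code.

```python
import torch
import torch.nn as nn

class BranchLengthParam(nn.Module):
    """Edge-level branch length parameterization, cf. equation 10."""

    def __init__(self, dim, hidden=100):
        super().__init__()
        self.mlp_mu = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.mlp_sigma = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, h, edges):
        # h: (num_nodes, dim) learnable node features from the GNN + MLP.
        # edges: (num_edges, 2) node index pairs (u, v), one per branch.
        h_u, h_v = h[edges[:, 0]], h[edges[:, 1]]
        h_e = torch.maximum(h_u, h_v)      # permutation invariant f
        mu = self.mlp_mu(h_e).squeeze(-1)
        sigma = self.mlp_sigma(h_e).squeeze(-1)
        return mu, sigma                   # per-edge Lognormal parameters
```

The per-edge \(\mu,\sigma\) then parameterize the diagonal Lognormal branch length distributions used in VBPI.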
## 4 Experiments

In this section, we test the effectiveness and efficiency of learnable topological features for phylogenetic inference on the two aforementioned benchmark tasks: tree probability estimation via energy based models and branch length parameterization for VBPI. Following Zhang & Matsen IV (2019), in VBPI we used the simplest SBN for the tree topology variational distribution, and the CPT supports were estimated from ultrafast maximum likelihood phylogenetic bootstrap trees using UFBoot (Minh et al., 2013). The code is available at [https://github.com/zcrabbit/vbpi-gnn](https://github.com/zcrabbit/vbpi-gnn).

Experimental setup. We evaluate five commonly used GNN variants with the following convolution operators: graph convolution networks (GCN), graph isomorphism operator (GIN), GraphSAGE operator (SAGE), gated graph convolution operator (GGNN) and edge convolution operator (EDGE). See more details about these convolution operators in Appendix F. In addition to the above GNN variants, we also considered a simpler model that skips all GNN iterations (i.e., \(T=0\)), which we refer to as MLP in the sequel. All GNN variants have 2 GNN layers (including the input layer), and all involved MLPs have 2 layers. We used summation as our permutation invariant aggregation function for graph-level features and maximization for edge-level features. All models were implemented in Pytorch (Paszke et al., 2019) with the Adam optimizer (Kingma & Ba, 2015). We designed our experiments with the goals of (i) verifying the effectiveness of GNN-based EBMs for tree probability estimation and (ii) verifying the improvement of GNN-based branch length parameterization for VBPI over the baseline approaches (i.e., split and PSP based parameterizations) and investigating how helpful the learnable topological features are for reducing the amortization gaps.

### 4.1 Simulated Data: Tree Probability Estimation

We first investigated the representative power of learnable topological features for approximating distributions on phylogenetic trees using energy based models (EBMs), and conducted experiments on a simulated data set. We used the space of unrooted phylogenetic trees with 8 leaves, which contains 10395 unique trees in total. Similarly as in Zhang & Matsen IV (2019), we generated a target distribution \(p_{0}(\tau)\) by drawing a sample from the symmetric Dirichlet distribution \(\mathrm{Dir}(\beta\mathbf{1})\) of order 10395 with a pre-selected arbitrary order of trees. The concentration parameter \(\beta\) is used to control the diffuseness of the target distribution and was set to 0.008 to provide enough information for inference while allowing for adequate diffusion in the target. As mentioned earlier in section 3.2.1, we used noise contrastive estimation (NCE) to train our EBMs, where we set the noise distribution \(p_{n}(\tau)\) to be the uniform distribution. Results were collected after 200,000 parameter updates. Note that the minimum NCE loss in this case is \[J^{*}=-2\mathrm{JSD}\left(p_{0}(\tau)\|p_{n}(\tau)\right)+2\log 2,\] where \(\mathrm{JSD}(\cdot\|\cdot)\) is the Jensen-Shannon divergence.

Figure 2 shows the empirical performance of different methods. From the left plot, we see that the NCE losses converge rapidly and the gaps between the NCE losses for the GNN variants and the best NCE loss \(J^{*}\) (dashed red line) are close to zero, demonstrating the representative power of learnable topological features for phylogenetic tree probability estimation. The evolution of KL divergences (middle plot) is consistent with the NCE losses. Compared to MLP, all GNN variants perform better, indicating that the extra flexibility provided by GNN iterations is crucial for tree probability estimation that would benefit from more informative graph-level features.

Figure 2: Comparison of learnable topological feature based EBMs for probability mass estimation of unrooted phylogenetic trees with 8 leaves using NCE. **Left:** NCE loss. **Middle:** KL divergence. **Right:** EBM approximations vs ground truth probabilities. The NCE loss and KL divergence results were obtained from 10 independent runs and the error bars represent one standard deviation.

Although the raw features from interior node embedding contain all information of phylogenetic tree topologies, we see that distilling effective structural information from them is still challenging.
This makes GNN models, which are by design more capable of learning geometric representations, a favorable choice. The right plot compares the probability mass approximations provided by EBMs using MLP and GGNN (which performs the best among all GNN variants) to the ground truth \(p_{0}(\tau)\). We see that EBMs using GGNN consistently provide accurate approximations for trees across a wide range of probabilities. On the other hand, estimates provided by those using MLP often have large bias, except for a few trees with high probabilities.

### 4.2 Real Data: Variational Bayesian Phylogenetic Inference

The second task we considered is VBPI, where we compared learnable topological feature based branch length parameterizations to the heuristic feature based parameterizations (denoted as Split and PSP respectively) proposed in the original VBPI approach (Zhang & Matsen IV, 2019). All methods were evaluated on 8 real datasets that are commonly used to benchmark Bayesian phylogenetic inference methods (Hedges et al., 1990; Garey et al., 1996; Yang & Yoder, 2003; Henk et al., 2003; Lakner et al., 2008; Zhang & Blackwell, 2001; Yoder & Yang, 2004; Rossman et al., 2001; Hohna & Drummond, 2012; Larget, 2013; Whidden & Matsen IV, 2015). These datasets, which we call DS1-8, consist of sequences from 27 to 64 eukaryote species with 378 to 2520 site observations. We concentrate on the most challenging part of Bayesian phylogenetics, the joint learning of tree topologies and branch lengths, and assume a uniform prior on the tree topology, an i.i.d. exponential prior (\(\mathrm{Exp}(10)\)) for the branch lengths, and the simple Jukes & Cantor (1969) substitution model. We gathered the support of CPTs from 10 replicates of 1000 ultrafast maximum likelihood bootstrap trees (Minh et al., 2013). We set \(K=10\) for the multi-sample lower bound, with an annealing schedule \(\lambda_{n}=\min(1,0.001+n/100000)\), going from 0.001 to 1 after 100000 iterations. The Monte Carlo gradient estimates for the tree topology parameters and branch length parameters were obtained via VIMCO (Mnih & Rezende, 2016) and the reparameterization trick (Kingma & Welling, 2014), respectively. Results were collected after 400,000 parameter updates.
Table 1 shows the estimates of the evidence lower bound (ELBO) and the marginal likelihood using different branch length parameterizations on the 8 benchmark datasets, including the results for the stepping-stone (SS) method (Xie et al., 2011), which is one of the state-of-the-art sampling based methods for marginal likelihood estimation.

Table 1: Evidence lower bound (ELBO) and marginal likelihood (ML) estimates of different methods (Split, PSP, MLP, GCN, GIN, SAGE, GGNN, EDGE, plus SS for ML) across 8 benchmark datasets for Bayesian phylogenetic inference; datasets DS1-DS8 have 27, 29, 36, 41, 50, 50, 59, and 64 taxa and 1949, 2520, 1812, 1137, 378, 1133, 1824, and 1008 sites, respectively. The marginal likelihood estimates of all variational methods are obtained via importance sampling using 1000 samples, and the results (in units of nats) are averaged over 100 independent runs with standard deviation in brackets. Results for stepping-stone (SS) are from Zhang & Matsen IV (2019), using 10 independent MrBayes (Ronquist et al., 2012) runs, each with 4 chains for 10,000,000 iterations.
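As background for the ML rows of Table 1, a minimal sketch of the importance sampling estimator with the variational approximation as proposal might look as follows; `log_joint`, `log_q`, and `sample_q` are assumed callables and the names are ours.

```python
import math
import torch

def log_marginal_likelihood_is(log_joint, log_q, sample_q, n_samples=1000):
    """Importance sampling estimate of the log marginal likelihood log p(Y).

    sample_q:  draws a (tree topology, branch lengths) pair from Q
    log_joint: log p(Y, tau, q), joint density of data and latent variables
    log_q:     log Q(tau, q), the variational approximation used as proposal
    """
    log_w = torch.stack([log_joint(*s) - log_q(*s)
                         for s in (sample_q() for _ in range(n_samples))])
    # log( (1/N) sum_i w_i ), computed stably via logsumexp
    return torch.logsumexp(log_w, dim=0) - math.log(n_samples)
```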
For each data set, a better approximation would lead to a smaller variance of the marginal likelihood estimates. We see that, solely using the raw features, the MLP-based parameterization already outperformed the Split and PSP baselines by providing tighter lower bounds. With more expressive representations of local structures enabled by GNN iterations, GNN-based parameterizations further improved upon the MLP-based method, indicating the importance of harnessing local topological information for flexible branch length distributions. Moreover, when used as importance distributions for marginal likelihood estimation via importance sampling, MLP and the GNN variants provide steadier estimates (less variance) than Split and PSP respectively. All variational approaches compare favorably to SS and require far fewer samples.

The left plot in Figure 3 shows the evidence lower bounds as a function of the number of parameter updates on DS1. Although neural network based parameterization adds to the complexity of training in VI, we see that by the time Split and PSP converge, MLP and EDGE3 achieve comparable (if not better) lower bounds and quickly surpass these baselines as the number of iterations increases.

Footnote 3: We use EDGE as an example here for branch length parameterization since it can learn edge features (see Appendix F). All GNN variants (except the simple GCN) performed similarly in this example (see Table 1).

As diagonal Lognormal branch length distributions were used for all parameterization methods, how these variational distributions are amortized over tree topologies under different parameterizations is crucial for the overall approximation performance. To better understand this effect of amortized inference, we further investigated the amortization gaps4 of different methods on individual trees in the 95% credible set of DS1, as in Zhang (2020). The middle and right plots in Figure 3 show the amortization gaps of different parameterization methods on each tree topology \(\tau\). We see that the amortization gaps of MLP and EDGE are considerably smaller than those of Split and PSP respectively, showing the efficiency of learnable topological features for amortized branch length distributions. Again, incorporating more local topological information is beneficial, as evidenced by the significant improvement of EDGE over MLP. More results about the amortization gaps can be found in Table 2 in the appendix.

Footnote 4: The amortization gap on a tree topology \(\tau\) is defined as \(L(Q^{*}|\tau)-L(Q_{\mathbf{\psi}}|\tau)\), where \(L(Q_{\mathbf{\psi}}|\tau)\) is the ELBO of the approximating distribution \(Q_{\mathbf{\psi}}(q|\tau)\) and \(L(Q^{*}|\tau)\) is the maximum lower bound that can be achieved with the same variational family. See more details in Zhang (2020); Cremer et al. (2018).

## 5 Conclusion

We presented a novel approach for phylogenetic inference based on learnable topological features.
By combining the raw node features that minimize the Dirichlet energy with modern GNN variants, our learnable topological features can provide efficient structural information without requiring domain expertise. In experiments, we demonstrated the effectiveness of our approach for tree probability estimation on simulated data and showed that our method consistently outperforms the baseline approaches for VBPI on a benchmark of real data sets. Future work would investigate more sophisticated GNNs for phylogenetic trees, and applications to other phylogenetic inference tasks where efficiently leveraging structural information of tree topologies is of great importance.

Figure 3: Performance on DS1. **Left:** Lower bounds. **Middle & Right:** Amortization gaps on trees in the \(95\%\) credible sets.

#### Acknowledgments

This work was supported by National Natural Science Foundation of China (grant no. 12201014), as well as National Institutes of Health grant AI162611. The research of the author was supported in part by the Key Laboratory of Mathematics and Its Applications (LMAM) and the Key Laboratory of Mathematical Economics and Quantitative Finance (LMEQF) of Peking University. The author is grateful for the computational resources provided by the High-performance Computing Platform of Peking University. The author appreciates the anonymous ICLR reviewers for their constructive feedback.
2301.11841
PhysGraph: Physics-Based Integration Using Graph Neural Networks
Physics-based simulation of mesh based domains remains a challenging task. State-of-the-art techniques can produce realistic results but require expert knowledge. A major bottleneck in many approaches is the step of integrating a potential energy in order to compute velocities or displacements. Recently, learning based methods for physics-based simulation have sparked interest, with graph based approaches being a promising research direction. One of the challenges for these methods is to generate models that are mesh independent and that generalize to different material properties. Moreover, the model should also be able to react to unforeseen external forces like ubiquitous collisions. Our contribution is based on a simple observation: evaluating forces is computationally relatively cheap for traditional simulation methods and can be computed in parallel, in contrast to their integration. If we learn how a system reacts to forces in general, irrespective of their origin, we can learn an integrator that can predict state changes due to the total forces with high generalization power. We effectively factor out the physical model behind the resulting forces by relying on an opaque force module. We demonstrate that this idea leads to a learnable module that can be trained on basic internal forces of small mesh patches and generalizes to different mesh topologies, resolutions, material parameters and unseen forces like collisions at inference time. Our proposed paradigm is general and can be used to model a variety of physical phenomena. We focus our exposition on the detail enhancement of coarse clothing geometry, which has many applications including computer games, virtual reality and virtual try-on.
Oshri Halimi, Egor Larionov, Zohar Barzelay, Philipp Herholz, Tuur Stuyck
2023-01-27T16:47:10Z
http://arxiv.org/abs/2301.11841v2
# PhysGraph: Physics-Based Integration Using Graph Neural Networks

###### Abstract

Physics-based simulation of mesh based domains remains a challenging task. State-of-the-art techniques can produce realistic results but require expert knowledge. A major bottleneck in many approaches is the step of integrating a potential energy in order to compute velocities or displacements. Recently, learning based methods for physics-based simulation have sparked interest, with graph based approaches being a promising research direction. One of the challenges for these methods is to generate models that are mesh independent and that generalize to different material properties. Moreover, the model should also be able to react to unforeseen external forces like ubiquitous collisions. Our contribution is based on a simple observation: evaluating forces is computationally relatively cheap for traditional simulation methods and can be computed in parallel, in contrast to their integration. If we learn how a system reacts to forces _in general_, irrespective of their origin, we can learn an _integrator_ that can predict state changes due to the total forces with high generalization power. We effectively factor out the physical model behind the resulting forces by relying on an opaque _force module_. We demonstrate that this idea leads to a learnable module that can be trained on basic internal forces of small mesh patches and generalizes to different mesh topologies, resolutions, material parameters and unseen forces like collisions at inference time. Our proposed paradigm is general and can be used to model a variety of physical phenomena. We focus our exposition on the detail enhancement of coarse clothing geometry, which has many applications including computer games, virtual reality and virtual try-on.
Keywords: Collision, neural network simulation, graph neural networks
2301.02197
Virtual Node Graph Neural Network for Full Phonon Prediction
The structure-property relationship plays a central role in materials science. Understanding the structure-property relationship in solid-state materials is crucial for structure design with optimized properties. The past few years witnessed remarkable progress in correlating structures with properties in crystalline materials, such as machine learning methods and particularly graph neural networks as a natural representation of crystal structures. However, significant challenges remain, including predicting properties with complex unit cells as input and material-dependent, variable-length output. Here we present the virtual node graph neural network to address the challenges. By developing three types of virtual node approaches - the vector, matrix, and momentum-dependent matrix virtual nodes - we achieve direct prediction of $\Gamma$-phonon spectra and full dispersion using only atomic coordinates as input. We validate the phonon bandstructures on various alloy systems, and further build a $\Gamma$-phonon database containing over 146,000 materials in the Materials Project. Our work provides an avenue for rapid and high-quality prediction of phonon spectra and bandstructures in complex materials, and enables materials design with superior phonon properties for energy applications. The virtual node augmentation of graph neural networks also sheds light on designing other functional properties with a new level of flexibility.
Ryotaro Okabe, Abhijatmedhi Chotrattanapituk, Artittaya Boonkird, Nina Andrejevic, Xiang Fu, Tommi S. Jaakkola, Qichen Song, Thanh Nguyen, Nathan Drucker, Sai Mu, Bolin Liao, Yongqiang Cheng, Mingda Li
2023-01-05T17:59:57Z
http://arxiv.org/abs/2301.02197v1
# Virtual Node Graph Neural Network for Full Phonon Prediction

###### Abstract

The structure-property relationship plays a central role in materials science. Understanding the structure-property relationship in solid-state materials is crucial for structure design with optimized properties. The past few years witnessed remarkable progress in correlating structures with properties in crystalline materials, such as machine learning methods and particularly graph neural networks as a natural representation of crystal structures. However, significant challenges remain, including predicting properties with complex unit cells as input and material-dependent, variable-length output. Here we present the virtual node graph neural network to address the challenges. By developing three types of virtual node approaches - the vector, matrix, and momentum-dependent matrix virtual nodes - we achieve direct prediction of \(\Gamma\)-phonon spectra and full dispersion using only atomic coordinates as input. We validate the phonon bandstructures on various alloy systems, and further build a \(\Gamma\)-phonon database containing over 146,000 materials in the Materials Project. Our work provides an avenue for rapid and high-quality prediction of phonon spectra and bandstructures in complex materials, and enables materials design with superior phonon properties for energy applications. The virtual node augmentation of graph neural networks also sheds light on designing other functional properties with a new level of flexibility.

## Introduction

The structure-property relationship defines one of the most fundamental questions in materials science [16, 21]. The ubiquitous presence of structure-property relationships profoundly influences almost all branches of materials sciences, such as structural materials [3], energy harvesting, conversion and storage materials [17, 5, 19], catalysts [37] and polymers [13], and quantum materials [15]. However, despite its central importance to materials design, building an informative structure-property relationship can be nontrivial. On the one hand, the number of stable structures grows exponentially with unit cell size [22], and structure design efforts have been largely limited to crystalline solids with relatively small unit cells. On the other hand, certain material properties are challenging to acquire due to experimental or computational complexities. In the past few years, data-driven and machine-learning methods have played an increasingly important role in materials science and have significantly boosted the research on building structure-property relationships [24, 6, 38]. Complex structures such as porous materials [1, 27], nanoalloys [10, 36], and grain boundaries [34] are becoming more feasible to handle, and properties ranging from mechanical strength to quantum ordering can be learned with increased confidence [9, 29]. One particularly powerful approach is graph neural networks (GNNs) [7]. By representing atoms as graph nodes and interatomic bonds as graph edges, GNNs
2303.08994
Physics-Informed Neural Networks for Time-Domain Simulations: Accuracy, Computational Cost, and Flexibility
The simulation of power system dynamics poses a computationally expensive task. Considering the growing uncertainty of generation and demand patterns, thousands of scenarios need to be continuously assessed to ensure the safety of power systems. Physics-Informed Neural Networks (PINNs) have recently emerged as a promising solution for drastically accelerating computations of non-linear dynamical systems. This work investigates the applicability of these methods for power system dynamics, focusing on the dynamic response to load disturbances. Comparing the prediction of PINNs to the solution of conventional solvers, we find that PINNs can be 10 to 1000 times faster than conventional solvers. At the same time, we find them to be sufficiently accurate and numerically stable even for large time steps. To facilitate a deeper understanding, this paper also presents a new regularisation of Neural Network (NN) training by introducing a gradient-based term in the loss function. The resulting NNs, which we call dtNNs, help us deliver a comprehensive analysis of the strengths and weaknesses of the NN based approaches, how incorporating knowledge of the underlying physics affects NN performance, and how this compares with conventional solvers for power system dynamics.
Jochen Stiasny, Spyros Chatzivasileiadis
2023-03-15T23:53:32Z
http://arxiv.org/abs/2303.08994v2
# Physics-Informed Neural Networks for Time-Domain Simulations: Accuracy, Computational Cost, and Flexibility

###### Abstract

The simulation of power system dynamics poses a computationally expensive task. Considering the growing uncertainty of generation and demand patterns, thousands of scenarios need to be continuously assessed to ensure the safety of power systems. Physics-Informed Neural Networks (PINNs) have recently emerged as a promising solution for drastically accelerating computations of non-linear dynamical systems. This work investigates the applicability of these methods for power system dynamics, focusing on the dynamic response to load disturbances. Comparing the prediction of PINNs to the solution of conventional solvers, we find that PINNs can be 10 to 1'000 times faster than conventional solvers. At the same time, we find them to be sufficiently accurate and numerically stable even for large time steps. To facilitate a deeper understanding, this paper also presents a new regularisation of Neural Network (NN) training by introducing a gradient-based term in the loss function. The resulting NNs, which we call dtNNs, help us deliver a comprehensive analysis of the strengths and weaknesses of the NN based approaches, how incorporating knowledge of the underlying physics affects NN performance, and how this compares with conventional solvers for power system dynamics.

keywords: dynamical systems, neural networks, scientific machine learning, time-domain simulation

## 1 Introduction

Time-domain simulations form the backbone of many power system analyses such as transient or voltage stability analyses. However, even the simplest set of governing Differential-Algebraic Equations (DAEs) that describes the system dynamics sufficiently accurately can impose a significant computational burden during the analysis. Finding ways to reduce this computational cost while maintaining a sufficiently high level of accuracy is of paramount importance across all applications in the power systems industry. Since, generally speaking, there is no closed form analytical solution for DAEs [1], we revert to numerical methods to approximate the dynamic response. Refs. [2; 3] provide a good overview of general solution approaches and of the modelling in the power system context, and [4; 5; 6] summarise important developments, mostly relying on model simplification, decompositions, pre-computing partial solutions, and parallelisations. A new avenue to solve ordinary and partial differential equations emerged recently through so-called Scientific Machine Learning (SciML) - a field which combines scientific computing with Machine Learning (ML). SciML has been receiving a lot of attention due to the significant potential speed-ups it can achieve for computationally expensive problems, such as the solution of differential equations. More specifically, the authors in [7], already 25 years ago, introduced the idea of using artificial Neural Networks (NNs) to approximate such solutions. The idea is that NNs learn from a set of training data to interpolate the solution with high accuracy for data points that lie between the training data. Ref. [8] has revived this effort, now named Physics-Informed Neural Networks (PINNs), which has developed into a growing field within SciML, as [9] reviews. The key idea of PINNs is to directly incorporate the domain knowledge into the learning process. We do so by evaluating if the NN output satisfies the set of DAEs during training.
If it does not, the parameters of the NN are adjusted in the next training iteration until the NN output satisfies the DAEs. This approach reduces the need for large training datasets and hence the associated costs of simulating them. Ref. [10] introduced PINNs in the field of power systems. Our ultimate goal is to develop PINNs as a solution tool for time-domain simulations in power systems. This paper takes a first step and identifies the strengths and weaknesses of such a method in comparison with existing solution methods, with respect to the application-specific requirements on the solution method. Stott elaborated nearly half a century ago that, among others, sufficient accuracy, numerical stability, and flexibility were important characteristics that need to be weighed against the solution speed [2]. In an ideal world, we are looking for tools that are highly accurate, numerically stable, and flexible, and at the same time very fast. Several approaches have been proposed to deal with this trade-off, aiming at being faster (at least during run-time) while maintaining accuracy, numerical stability, and flexibility to the extent possible. Some of the promising ones are based on pre-computing parts of the solution of DAEs. For example, Semi-Analytical Solution (SAS) methods adopt this approach [11; 12; 13]. We can push this idea of pre-computing the solution even further: PINNs, and NNs in general, pre-compute - learn - the entire solution, hence, the computation at run-time is extremely fast. Related works in [14; 15; 16] introduce alternative NN architectures and problem setups, primarily driven by considerations on the achieved accuracy. In contrast, our focus lies on assessing PINNs from the perspective of a numerical solution method, in which accuracy has to be weighed against other numerical characteristics, namely speed, numerical stability and flexibility. The contributions of this work are the following:

1. We apply Physics-Informed Neural Networks (PINNs) to multi-machine systems and show that PINNs can be 10 to 1'000 times faster than conventional methods for time-domain simulations, while achieving sufficient accuracy.
2. We demonstrate that the trade-off between speed and accuracy for PINNs, and NNs in general, does not directly relate to power system size but rather to the complexity of the dynamics. Hence, NNs can solve larger systems equally fast as small ones, if the complexity of the dynamics is comparable. This is contrary to conventional methods, where the solution time is closely linked to the system size.
3. We examine further numerical properties of NNs for solving DAEs. Besides speed, one of their key benefits is that NNs do not suffer from numerical instability as they solve without any iterative procedure. We also discuss the challenges of flexibility in different parameter settings and we outline concrete directions for future work to resolve them.
4. Having shown that NNs do have significant benefits and desirable properties, we carry out a comprehensive analysis on the performance and training of NNs and PINNs that can be helpful for future applications. In this context, we introduce _dtNNs_, a regularised form of NNs. dtNNs are an intermediate methodological step between NNs and PINNs as they are regularised by the time derivatives at the training data points.

Section 2 describes the construction of a NN-based approximation for DAEs and how to incorporate physical knowledge in dtNNs and PINNs. Section 3 presents the case study and the training setup.
Section 4 shows the results, on the basis of which we discuss the route forward in Section 5. Section 6 concludes.

## 2 Methodology

This section lays out how we train a NN that shall be used in time-domain simulations, how the physical equations can be incorporated, transforming the NN into a dtNN and a PINN, and how the resulting approximation is assessed.

### 2.1 Approximating the solution to a dynamical system

A dynamical system is characterised by its temporal evolution being dependent on the system's state variables \(\mathbf{x}\), the algebraic variables \(\mathbf{y}\) and the control inputs \(\mathbf{u}\): \[\frac{d}{dt}\mathbf{x} =\mathbf{f}_{\text{DAE}}\left(\mathbf{x}(t),\mathbf{y}(t),\mathbf{u}\right) \tag{1a}\] \[\mathbf{0} =\mathbf{g}_{\text{DAE}}\left(\mathbf{x}(t),\mathbf{y}(t),\mathbf{u}\right). \tag{1b}\] For clarity and ease of implementation, we express (1a) and (1b) as \[\mathbf{M}\frac{d}{dt}\mathbf{x}=\mathbf{f}(\mathbf{x}(t),\mathbf{u}) \tag{2}\] by incorporating \(\mathbf{y}\) into \(\mathbf{x}\) and adding \(\mathbf{M}\), a diagonal matrix that distinguishes whether a state \(x_{i}\) is differential (\(M_{ii}\neq 0\)) or algebraic (\(M_{ii}=0\)). We will use a NN to define an explicit function \(\hat{\mathbf{x}}(t)\) that shall approximate the solution \(\mathbf{x}(t)\) for all \(t\in[t_{0},t^{\text{max}}]\), i.e., for the entire _trajectory_, starting from the initial condition \(\mathbf{x}(t_{0})=\mathbf{x}_{0}\).

### 2.2 Neural network as function approximator

We use a standard feed-forward NN with \(K\) hidden layers that implements a sequence of linear combinations and non-linear activation functions \(\sigma(\cdot)\). In theory, a NN with a single hidden layer already constitutes a universal function approximator [17] if it is wide enough, i.e., if the hidden layer consists of enough neurons \(N_{K}\). In practice, restrictions on the width and the process of determining the NN's parameters might limit this universality, as [18] elaborates. Still, a multi-layer NN in the form of (3) provides us with a powerful function approximator: \[[t,\mathbf{x}_{0}^{\top},\mathbf{u}^{\top}]^{\top} =\mathbf{z}_{0} \tag{3a}\] \[\mathbf{z}_{k+1} =\sigma(\mathbf{W}_{k+1}\mathbf{z}_{k}+\mathbf{b}_{k+1})\quad\forall k=0,1,...,K-1 \tag{3b}\] \[\hat{\mathbf{x}} =\mathbf{W}_{K+1}\mathbf{z}_{K}+\mathbf{b}_{K+1}. \tag{3c}\] The NN output \(\hat{\mathbf{x}}\) is the system state at the prediction time \(t\). The input \(\mathbf{z}_{0}\) is composed of the prediction time \(t\), the initial condition \(\mathbf{x}_{0}\) and the control input \(\mathbf{u}\). The weight matrices \(\mathbf{W}_{k}\) and bias vectors \(\mathbf{b}_{k}\) form the adjustable parameters \(\mathbf{\theta}\) of the NN. For the training process, we compile a training dataset \(\mathcal{D}_{\text{train}}\) that maps \(\mathbf{z}_{0}\mapsto\mathbf{x}\) for a chosen input domain \(\mathcal{Z}\) and contains \(N=|\mathcal{D}_{\text{train}}|\) points. For our purposes, the input domain is a discrete set of the prediction time, e.g. from \(0\,\mathrm{s}\) until \(10\,\mathrm{s}\) with a step size of \(0.2\,\mathrm{s}\), and a set of different initial conditions and control inputs, e.g. different power disturbances. The output domain is the rotor angle and frequency at each of the prediction time steps and for each of the studied disturbances. \[\mathcal{D}_{\text{train}}:\mathbf{z}_{0}\mapsto\mathbf{x}\qquad\mathbf{z}_{0}\in\mathcal{Z}. \tag{4}\] A minimal sketch of such a function approximator is given below.
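To illustrate (3), here is a minimal PyTorch sketch of the feed-forward approximator mapping \([t,\mathbf{x}_{0}^{\top},\mathbf{u}^{\top}]^{\top}\) to \(\hat{\mathbf{x}}\); the layer width and the tanh activation are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class TrajectoryNN(nn.Module):
    """Feed-forward approximator x_hat = NN(t, x0, u), cf. equation (3)."""

    def __init__(self, n_states, n_controls, width=64, hidden_layers=2):
        super().__init__()
        layers, n_in = [], 1 + n_states + n_controls   # z0 = [t, x0, u]
        for _ in range(hidden_layers):
            layers += [nn.Linear(n_in, width), nn.Tanh()]
            n_in = width
        layers += [nn.Linear(n_in, n_states)]          # linear output layer (3c)
        self.net = nn.Sequential(*layers)

    def forward(self, t, x0, u):
        z0 = torch.cat([t, x0, u], dim=-1)
        return self.net(z0)
```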
During training we adjust the NN's parameters \(\mathbf{\theta}\) with an iterative gradient-based optimisation algorithm to minimise the so-called _loss_ \(\mathcal{L}\) for \(\mathcal{D}_{\text{train}}\) \[\min_{\mathbf{\theta}}\quad\mathcal{L}(\mathcal{D}_{\text{train}}) \tag{5a}\] \[\text{s.t.}\quad\text{(3a)--(3c)}. \tag{5b}\]

### 2.3 Loss function and regularisation: NNs, dtNNs, and PINNs

#### 2.3.1 Loss Function for Neural Networks

The simplest loss function for such a problem defines the loss as the mismatch between the NN prediction \(\hat{\mathbf{x}}\) and the ground truth or target \(\mathbf{x}\), measured using the L2-norm. To account for different orders of magnitude (for example, the voltage angles in radians are often much larger than frequency deviations expressed in p.u.) and levels of variation of the individual states \(\mathbf{x}\), we first apply a scaling factor \(\xi_{x,i}\) to the error computed per state \(i\). A physics-agnostic choice of \(\xi_{x,i}\) could be to use the state's standard deviation in the training dataset; for more details please see Section 3.2. We then apply the squared L2-norm for each data point \(j\) and take the average across the dataset \(\mathcal{D}\) to obtain the loss \(\mathcal{L}_{x}\) \[\mathcal{L}_{x}(\mathcal{D})=\frac{1}{|\mathcal{D}|}\sum_{j=1}^{|\mathcal{D}|}\left\|\left(\frac{\hat{x}_{i}^{j}-x_{i}^{j}}{\xi_{x,i}}\right)\right\|_{2}^{2}. \tag{6}\]

#### 2.3.2 dtNNs

As an intermediate step between standard NNs and PINNs, in this subsection we introduce a new regularisation term for loss function (6). We do so to avoid the previously mentioned over-fitting and improve the generalisation performance of the NNs. To the best of our knowledge, this paper is the first to introduce a regularisation term based on the update function \(\mathbf{f}(\mathbf{x})\) from (2). Using the tool of Automatic Differentiation (AD) [19], we can compute the derivative of the NN, i.e., the time derivative of the approximated trajectory, \(\frac{d}{dt}\hat{\mathbf{x}}\), and compute a loss analogous to (6) (with a scaling factor \(\xi_{dt,i}\)): \[\mathcal{L}_{dt}(\mathcal{D})=\frac{1}{|\mathcal{D}|}\sum_{j=1}^{|\mathcal{D}|}\left\|\left(\frac{\frac{d}{dt}\hat{x}_{i}^{j}-\frac{d}{dt}x_{i}^{j}}{\xi_{dt,i}}\right)\right\|_{2}^{2} \tag{7}\]

#### 2.3.3 PINNs

As [7; 8] introduced generally, and [10] for power systems, we can also regularise such a NN by comparing the derivative of the NN \(\frac{d}{dt}\hat{\mathbf{x}}\) with the update function evaluated at the estimated state \(\mathbf{f}(\hat{\mathbf{x}})\): \[\mathcal{L}_{f}(\mathcal{D}_{f})=\frac{1}{|\mathcal{D}_{f}|}\sum_{j=1}^{|\mathcal{D}_{f}|}\left\|\left(\frac{M_{ii}\frac{d}{dt}\hat{x}_{i}^{j}-f_{i}(\hat{\mathbf{x}}^{j})}{\xi_{f,i}}\right)\right\|_{2}^{2} \tag{8}\] This physics loss does not require the ground truth state \(\mathbf{x}\) or its derivative. Quite the contrary, this loss can be queried at any desired point without requiring any form of simulation. We can therefore evaluate a dataset \(\mathcal{D}_{f}\) of randomly sampled or ordered _collocation points_ that map to \(0\) \[\mathcal{D}_{f}:\mathbf{z}_{0}\mapsto\mathbf{0}\qquad\mathbf{z}_{0}\in\mathcal{Z} \tag{9}\] to essentially assess how well the NN approximation follows the physics - any point where this physics loss equals zero is in line with the governing physics of (2). A sketch of this collocation-based loss follows below.
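As an illustration of (8), the following sketch computes the physics loss at a batch of collocation points using autograd; `model` is a TrajectoryNN-style network as above, and `f` and `M_diag` stand for the update function and the diagonal of \(\mathbf{M}\) from (2). All names are ours.

```python
import torch

def physics_loss(model, f, M_diag, t, x0, u, xi_f):
    """Collocation loss L_f of equation (8): || M dx_hat/dt - f(x_hat) ||^2.

    t, x0, u: batched collocation inputs; no simulated ground truth needed.
    """
    t = t.clone().requires_grad_(True)
    x_hat = model(t, x0, u)                               # (batch, n_states)

    # d/dt x_hat via autograd, one state at a time (vector-Jacobian products).
    dxdt = torch.stack([
        torch.autograd.grad(x_hat[:, i].sum(), t, create_graph=True)[0].squeeze(-1)
        for i in range(x_hat.shape[1])
    ], dim=1)

    residual = (M_diag * dxdt - f(x_hat, u)) / xi_f
    return (residual ** 2).sum(dim=1).mean()
```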
However, (9) defines a mapping that is not bijective; hence, \(\mathcal{L}_{f}(\mathcal{D}_{f})=0\) does not imply that the desired trajectory is perfectly matched, only that a trajectory complying with (2) is matched. As an example, an exact prediction of the steady state of the system will yield \(\mathcal{L}_{f}(\mathcal{D}_{f})=0\) even though the target trajectory in \(\mathcal{D}_{\text{train}}\) is different.

#### 2.3.4 Combined loss function during training

To obtain a single objective or loss value for the training problem (5), we weigh the three terms as follows: \[\mathcal{L}=\mathcal{L}_{x}+\lambda_{dt}\mathcal{L}_{dt}+\lambda_{f}\mathcal{L}_{f}, \tag{10}\] where \(\lambda_{dt}\) and \(\lambda_{f}\) are hyper-parameters of the problem. Subsequently, we refer to a NN trained with \(\lambda_{dt}=0,\lambda_{f}=0\) as "vanilla NN"1, with \(\lambda_{dt}\neq 0\), \(\lambda_{f}=0\) as "dtNN", and with \(\lambda_{dt}\neq 0,\lambda_{f}\neq 0\) as "PINN".

Footnote 1: In [20] "vanilla NN" refers to a feed-forward NN with a single layer; we adopt the term nonetheless for clarity, as it expresses well the idea of a NN without any regularisation.

### 2.4 Accuracy metrics

To compare across the different methods and setups, we monitor the loss \(\mathcal{L}_{x}\) in (6) as the comparison metric throughout the training and evaluation process and as an accuracy metric for the performance assessment. To get a more detailed picture, we also consider the loss values of single points, i.e., before calculating the mean in (6). However, the loss depends on the chosen values for \(\xi_{x,i}\) and does not provide an easily interpretable meaning. Therefore, we use the maximum absolute error \[\max AE_{\mathcal{S}}=\max_{i\in\mathcal{S},j\in\mathcal{D}_{\text{test}}}\left(\left|\hat{x}_{i}^{j}-x_{i}^{j}\right|\right) \tag{11}\] as an additional metric for assessment purposes, i.e., based on \(\mathcal{D}_{\text{test}}\), but not during training. Whereas a state-by-state metric would capture the most detail, we opt to compute the maximum absolute error across meaningful groups of states \(i\in\mathcal{S}\) that are of the same units and magnitudes. This aligns with the engineering perspective on the desired accuracy of a method.

## 3 Case study

This section introduces the test cases and the details of the NN training.

### 3.1 Power system - Kundur 11-bus and IEEE 39-bus system

As a study setup, we investigate the dynamic response of a power system to a load disturbance. We use a second order model to represent each of the generators in the system. The update equation (2) is formulated for generator buses as \[\begin{bmatrix}1&0\\ 0&2H_{i}\omega_{0}\end{bmatrix}\frac{d}{dt}\begin{bmatrix}\delta_{i}\\ \Delta\omega_{i}\end{bmatrix}=\begin{bmatrix}\Delta\omega_{i}\\ P_{mech,i}-D_{i}\Delta\omega_{i}+P_{e,i}\end{bmatrix} \tag{12}\] and for load buses as \[\begin{bmatrix}d_{i}\omega_{0}\end{bmatrix}\frac{d}{dt}\begin{bmatrix}\delta_{i}\end{bmatrix}=\begin{bmatrix}P_{mech,i}+P_{e,i}\end{bmatrix} \tag{13}\] where \(P_{mech,i}=P_{set,i}+P_{dist,i}\) at bus \(i\), with \(P_{set,i}\) representing the power setpoint and \(P_{dist,i}\) the disturbance. The states \(\mathbf{x}\) are the bus voltage angle \(\delta_{i}\) and the frequency deviation \(\Delta\omega_{i}\) for generator buses, and the bus voltage angle \(\delta_{i}\) for the load buses.
The buses are linked through the active power flows in the network defined by the admittance matrix \(\bar{\mathbf{Y}}_{bus}\) and the vector of complex voltages \(\bar{\mathbf{V}}=\mathbf{V}_{m}e^{j\mathbf{\delta}}\), where the vector \(\mathbf{V}_{m}\) collects the voltage magnitudes and \(\mathbf{\delta}\) the bus voltage angles: \[\mathbf{P}_{e}=\Re\left(\bar{\mathbf{V}}(\bar{\mathbf{Y}}_{bus}\bar{\mathbf{V}})^{*}\right). \tag{14}\] The \(*\) indicates the complex conjugate and \(P_{e,i}\) corresponds to the \(i\)-th entry of vector \(\mathbf{P}_{e}\), i.e., the active power balance at bus \(i\). In Section 4, we demonstrate the methodology on the Kundur 2-area system (11 buses, 4 generators) and the IEEE 39-bus test system (39 buses, 10 generators). For both systems we use the base power of \(100\,\mathrm{MVA}\) and \(\omega_{0}=60\,\mathrm{Hz}\). The network parameters and set-points stem from the case description of Kundur [21, p. 813] and the IEEE 39-bus test case in Matpower [22]. The values for the inertia of the generators \(H_{i}\) are [6.5, 6.5, 6.175, 6.175] p.u. for the 11-bus case and [500.0, 30.3, 35.8, 38.6, 26.0, 34.8, 26.4, 24.3, 34.5, 42.0] p.u. for the 39-bus case. The damping factor was set to \(D_{i}=0.05\frac{\omega_{0}}{P_{set,i}}\) in both cases and, for the loads, to \(d_{i}=1.0\frac{P_{set,i}}{\omega_{0}}\) and \(d_{i}=0.2\frac{P_{set,i}}{\omega_{0}}\) respectively. ### NN training implementation The entire workflow is implemented in Python 3.8 and available under [23]. For the conventional numerical time-domain simulations of this system, the dynamical system is integrated with the Assimulo package [24], which implements various solution methods for systems of DAEs. Training utilises PyTorch [25], while WandB [26] is used for monitoring and processing the workflow. The implementation builds on [27] for the steps of the workflow. All datasets comprise the simulated response of the system over a period of \(20\,\mathrm{s}\) to a disturbance. The tested disturbance is the step response to an instantaneous loss of load \(|P_{dist,i}|\) at bus \(i\) with a magnitude between \(0\,\mathrm{p.u.}\) and \(10\,\mathrm{p.u.}\), where \(i=7\) for the 11-bus system and \(i=20\) for the 39-bus system. We record these data in increments of \(\Delta t\) and \(\Delta P\). The test dataset \(\mathcal{D}_{\mathrm{test}}\), which serves as the ground truth, uses \(\Delta t=0.05\,\mathrm{s}\) and \(\Delta P=0.05\,\mathrm{p.u.}\), resulting in \(|\mathcal{D}_{\mathrm{test}}|=401\times 201=80601\) points. For the training datasets \(\mathcal{D}_{\mathrm{train}}\) used in Section 4.2 we create datasets with \(\Delta t\in[0.2,1.0,2.0]\,\mathrm{s}\) and \(\Delta P\in[0.2,1.0,2.0]\,\mathrm{p.u.}\). The validation datasets \(\mathcal{D}_{\mathrm{validation}}\) for those scenarios are offset by \(\frac{\Delta t}{2}\) and \(\frac{\Delta P}{2}\). For the scalings \(\xi_{x,i}\) in (6), we calculate the average standard deviation \(\sigma\) across all voltage angle differences \(\delta_{ij}\)2 and all frequency deviations \(\Delta\omega_{i}\), here the relevant groups of states \(\mathcal{S}\): Footnote 2: The training process benefits from using the voltage angle difference \(\delta_{ij}=\delta_{i}-\delta_{j}\), where \(j\) indicates a reference bus, as the output of the NN. The prediction becomes easier as the occurring drift in the dataset with respect to the variable \(t\) is significantly reduced.
\[\xi_{x,i}=\frac{1}{|\mathcal{S}|}\sum_{i\in\mathcal{S}}\sigma(x_{i}(\mathcal{ D})) \tag{15}\] Thereby, we aim for equal levels of error within all \(\delta_{ij}\) and \(\Delta\omega_{i}\) states and account for the difference in magnitude between them. \(\xi_{dt,i}\) and \(\xi_{f,i}\) are all set to \(1.0\) to avoid adding further hyper-parameters; more elaborate choices based on system analysis or the database are conceivable. During training and testing \(\xi_{x,i}\) is based on \(\mathcal{D}_{\text{train}}\) and \(\mathcal{D}_{\text{test}}\) respectively. The regularisation weights \(\lambda_{dt}\) and \(\lambda_{f}\) are hyper-parameters. For the latter, we incorporate a fade-in dependent on the current epoch \(E\): \[\lambda_{f}(E)=\min\Big(\lambda_{f,\max};\lambda_{f,0}\;10^{E/E^{\prime}}\Big), \tag{16}\] where \(\lambda_{f,\max}\) is the maximum and \(\lambda_{f,0}\) the initial regularisation weight and \(E^{\prime}\) determines the "speed" of the fade-in. The fade-in causes \(\mathcal{L}_{x}\) and \(\mathcal{L}_{dt}\) to be minimised first; \(\mathcal{L}_{f}\) then helps with "fine-tuning" and better generalisation. We apply the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm implemented in PyTorch in the training process, a standard optimiser for PINNs, as [28] reviews. The set of hyper-parameters comprises \(K\), \(N_{K}\), \(\lambda_{dt}\), \(\lambda_{f,\max}\), \(\lambda_{f,0}\), \(E^{\prime}\), and additional L-BFGS parameters. The available implementation [23] lists the range of the tested hyper-parameters and the choices for the different settings. All training and timing were performed on the High Performance Computing (HPC) cluster at the Technical University of Denmark (DTU) with nodes of 2x Intel Xeon Processor 2650v4 (12 cores, 2.20 GHz) and 256 GB memory, of which we used 4 cores per training run. ## 4 Results We first show in this section an assessment of NNs at run-time that highlights their methodological advantages compared to conventional solvers. We then perform a comprehensive analysis of the required training phase and the effect of physics regularisation. ### NNs at run-time - opportunities for accuracy and computational cost The primary motivation for the use of NN-based solution approaches is their extremely fast evaluation. Figure 1 shows the run-time for different prediction times. The NNs return the value of the states at prediction time \(t\) between 10 and 1'000 times faster than the conventional solvers, depending on three factors: the prediction time, the power system size, and the solver/NN settings. First, for NNs the run-time is independent of the prediction time, as the prediction only requires a single evaluation of the NN. In contrast, the conventional solver's run-time increases with larger prediction times as more internal time steps are required. Second, the power system size strongly affects the conventional solver's run-time, as shown by the increase when moving from the 11-bus to the 39-bus system. For the NN, it causes only a negligible change in run-time, as only the last layer of the NN changes in size according to the number of states of the system, see (3c). Third, the "solver settings" play an important role; for conventional solvers, the internal tolerance setting \(\epsilon\) governs the evaluation speed, while for the NN its size, i.e., the number of layers \(K\) and the number of neurons per layer \(N_{K}\), determines the run-time. Figure 2 sets the above results in relation to the achieved accuracy.
The points represent different disturbance sizes and prediction times, and the accuracy is measured as the associated loss. If a solver yielded points in the lower left corner of the plot, it could be called an ideal solver - fast and accurate. Conventional solvers can be very accurate when the internal tolerance \(\epsilon\) is set low enough, but at the price of being slower to evaluate. Allowing larger tolerances accelerates the solution process slightly at the expense of less accurate solutions. However, this trade-off is limited by the numerical stability of the used scheme; for too high tolerances the results would be considered as non-converged. In the case of NNs, their superior speed is weighed against less accurate solutions. The accuracy of NNs is not only controlled by their size but also, very importantly, by the training process. The achievable accuracy is therefore determined before run-time, in contrast to the tolerance of a conventional solver, which is set at run-time. Figure 1: Run-time as a function of the prediction time \(t\) for NNs of different size and a conventional solver with varied tolerance settings \(\epsilon\). Tests for the 11-bus and 39-bus system with a disturbance \(P_{i}=6.09\) p.u. As a final remark related to Figure 2, we need to highlight that, while less adjustable, in contrast to conventional solvers, NNs do not face issues of numerical stability, as their evaluation is a single and explicit function call. We lastly want to show how the accuracy, here expressed as the maximum absolute error across all voltage angle states \(\max\text{AE}_{\delta}\) for better intuition, relates to the NN size and the power system size. The boxplots in Fig. 3 represent the evaluation of 20 NNs with the same training setup but with different random initialisations of their parameters. We observe that deeper and wider NNs usually perform better on this metric. However, the largest NNs for the 11-bus system (\(N_{K}=128\) and \(K=4\) or \(K=5\)) show a larger variation than the smaller NNs, which means that the initialisation of the NNs affects their performance on the test dataset. This arises in models with a large representational capacity, loosely speaking models with many parameters: multiple parameter sets can lead to a low training loss, but not all of them generalise well, i.e., have low error on the test dataset. The other, at first sight counter-intuitive, observation is that the 39-bus system performs better on the metric than the smaller 11-bus system. This can be attributed to the complexity of the target function, i.e., of the dynamic responses. The 11-bus system exhibits faster and more intricate dynamics for the presented cases; hence, it is more difficult to approximate their evolution. We could therefore achieve the same level of accuracy for the 39-bus system with a smaller NN than for the 11-bus system. In terms of run-time, this would mean that the 39-bus system could be faster to evaluate than the 11-bus system. This characteristic of NNs effectively overcomes the relationship seen for conventional solvers that larger systems cause longer run-times3, as we have seen in Fig. 1. Figure 2: Evaluation of run-time and accuracy for the 11-bus system for varied solver tolerances \(\epsilon\) and NN sizes (\(K\) layers and \(N_{K}\) neurons per layer). Point-wise evaluation for 10 disturbance sizes \(P_{7}\) and prediction time as in Fig. 1.
Footnote 3: Of course, this relationship breaks when the necessary time step sizes of the conventional solvers differ significantly. ### NNs at training time - a trade-off between accuracy and computational cost The benefits of NNs compared to conventional solvers at run-time become possible by shifting the computational burden to the NN training stage, i.e., the pre-computation of the solution. In this stage, we examine the trade-off between accuracy and the computational cost of the training. This trade-off is influenced by several factors; here, we consider 1) the used training dataset, 2) the type of regularisation, and 3) the optimisation algorithm. To investigate the influence of the training dataset and the regularisation, we use the 11-bus system with a NN of size \(K=5\) and \(N_{K}=32\). We consider five scenarios as shown in Table 1 with different numbers of data points \(|\mathcal{D}|\) and the three "flavours" of NNs which we introduced in Section 2: vanilla NN, dtNN, PINN. The datasets are created by sampling with different increments of time \(\Delta t\) and the power disturbance \(\Delta P\). As expected, more data points incur a higher dataset creation cost; however, it also depends on what "kind" of additional data points we generate. Figure 3: Maximum absolute error of angle \(\delta\) on the test dataset for the 11-bus and 39-bus system with varying NN sizes, i.e., number of layers \(K\) and neurons per layer \(N_{K}\). When we halve the time increment \(\Delta t\), e.g., from scenario A to B or from scenario C to D, the dataset generation cost remains approximately the same. However, this does not hold if we halve the power increment \(\Delta P\). When simulating a certain trajectory, it is basically free to evaluate additional points, i.e., reduce \(\Delta t\), since interpolation schemes can be used for intermediate points. In contrast, any additional trajectory that needs to be simulated adds to the total cost. Similarly for "free", we can obtain the necessary values for the dtNN regularisation, as this only requires the evaluation of the right-hand side in (2). The PINN regularisation also incurs only negligible dataset generation cost, as it is a mere sampling of the collocation points \(|\mathcal{D}_{f}|\), here 5151, without the need for any simulation. Therefore, the additional regularisations come at no or negligible cost compared to generating more data points, unless those lie on trajectories that are evaluated anyway. Figure 4(a) shows the resulting \(\max\text{AE}_{\delta}\) across 20 training runs with different initialisations of the NN parameters. Unsurprisingly, the error metric improves with more data points, i.e., from scenario A to E, and with additional regularisation, i.e., from a vanilla NN to a dtNN and a PINN. In scenario E, which has the largest dataset, all three network types perform on a similar level, whereas PINNs otherwise clearly deliver the best performance. Furthermore, the performance becomes more consistent, i.e., shows less variance, towards scenario E. A very sensitive issue is the point at which to stop the training process to prevent over-fitting. In this study, we use the best validation loss as the indicator to determine the "best epoch", and Fig. 4(b) shows the results. PINNs consistently train for more epochs, and only for scenario E do the three NN types train for approximately the same number of epochs. In Fig. 5 we plot the validation loss over the training epoch.
\begin{table} \begin{tabular}{c c c c c} \hline \hline Scenario & Time & Power disturbance & Dataset & Dataset \\ & increment \(\Delta t\) & increment \(\Delta P\) & size \(|\mathcal{D}|\) & creation cost \\ \hline A & 2 s & 2 p.u. & 66 & 0.413 s \\ B & 1 s & 2 p.u. & 126 & 0.412 s \\ C & 2 s & 1 p.u. & 121 & 0.812 s \\ D & 1 s & 1 p.u. & 231 & 0.814 s \\ E & 0.2 s & 0.2 p.u. & 5151 & 3.880 s \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of the scenarios with different training datasets. In scenario D, we can clearly see that, while the vanilla NNs and the dtNNs do not improve much further after about 100 epochs, PINNs still see a significant improvement in terms of accuracy. From around this point onward, the physics-based loss \(\mathcal{L}_{f}\) drives the optimisation; the other training loss terms are already very small. This behaviour partly stems from the fade-in of \(\mathcal{L}_{f}\), but also from the fact that \(\mathcal{L}_{x}\) and \(\mathcal{L}_{dt}\) are based on much smaller datasets, except for scenario E, in which the improvement of the accuracy progresses at similar speeds for all three NN types. PINNs therefore offer the ability to achieve accuracy improvements for more epochs, but we can also terminate them early if the achieved accuracy is sufficient, to reduce the computational burden. Figure 4: Training characteristic for different scenarios (for scenario definition, see Table 1). Figure 5: Validation loss as a function of trained epochs. The shadings signify the range from 20 randomly initialised runs. By multiplying the number of epochs with the computational cost per epoch, we can estimate the total computational cost of the training. The vanilla NN and dtNN required about 0.17 s and 0.18 s per epoch for scenarios A-D and 0.40 s and 0.43 s for scenario E, while the PINN constantly needs 0.66 s per epoch due to the collocation points. These numbers are very implementation and setup dependent, but show the trend that PINNs have a higher cost per epoch due to the computation of \(\mathcal{L}_{f}\), while the dtNN is only slightly more expensive than a vanilla NN. The total up-front cost, comprised of data generation cost and training cost, has then to be evaluated against the desired accuracy to find an efficient setup. This trade-off is again very dependent on the case study. For the 39-bus system the dataset generation cost is 2.5 times higher, while the cost per epoch only increases by a few percentage points. Figures 3 and 4a displayed the maximum absolute errors on the test dataset as the accuracy metric, which is a critical metric of any solution approach. However, the accuracy of NNs must also be seen as a distribution across data points, as is visible in Fig. 2. We therefore show in Fig. 6 the resulting distribution of the loss values as a function of the two input variables, i.e., the prediction time \(t\) and the disturbance size \(\Delta P_{i}\), for scenarios A, C, and E and the NN types. The plots clearly show that for a majority of points in the test dataset, the predictions are much more accurate than the maximum values. This is true in particular around the data points. These are clearly visible in scenario A by the "indents". The panel for the dtNN and scenario C in Fig. 6a shows an extreme case where the prediction at the available data points is very accurate but the interpolation in between produces high errors.
In comparison with the vanilla NN, the additional regularisation of the dtNN leads to a more unbalanced error distribution. In contrast, the PINN shows overall higher levels of accuracy but also more balanced error distributions, thanks to the evaluation of the collocation points. We observe two more trends: smaller prediction times are associated with higher errors due to the faster dynamics; and secondly, larger disturbances tend to show larger errors, as they include larger variations of the output variables. These results show the importance of the dataset and the regularisation for the overall characteristics of a NN-based predictor, which must be considered for assessing the trade-off between training time and accuracy. Figure 6: Distribution of \(\mathcal{L}_{x}(\mathcal{D}_{\mathrm{Test}})\) for different NN flavours (vanilla NN, dtNN, PINN) and scenarios (A, C, E). The shaded areas correspond to 100%, 80%, 50% of the errors and the black line represents the median (for the definition of scenarios A, C, E, see Table 1). We lastly touch upon the effect of the optimiser on the training process, in this case the L-BFGS algorithm. The hyper-parameters of the algorithm strongly influence the required training time but also the achieved accuracy, as shown in Fig. 7. The points represent the outcome from random hyper-parameter settings, and they clearly show a strong relationship between training time and accuracy. Furthermore, the optimiser's internal tolerance setting (coloured) strongly influences this relationship. Figure 7: Influence of hyper-parameters of the L-BFGS-optimiser on the trade-off between training time and achieved accuracy. The tolerance level of the optimiser has a large influence, as shown by the coloured clusters of points. ## 5 Discussion The results in the previous Section illustrate how NN-based approaches for solving DAEs offer a number of advantages at run-time: 10 to 1'000 times faster evaluation speed, no issues of numerical instability, and, in contrast to conventional solvers, a solution time that does not increase with a growing power system size. These properties come at the cost of training the NNs. To assess an overall benefit in terms of computational time, we therefore have to consider the total cost as the sum of the up-front cost \(C_{\text{up-front}}\) for the dataset generation and training and the run-time cost \(C_{\text{run-time}}\) per evaluation, multiplied by the number of evaluations \(n\): \[C_{\text{total}}=C_{\text{up-front}}+C_{\text{run-time}}\cdot n. \tag{17}\] Figure 8 shows a graphical representation of (17) for conventional solvers and NNs. It is clear that NN-based approaches need to pass a critical number of evaluations \(n_{\text{critical}}\) to be useful in terms of overall cost, unless other considerations like numerical stability or real-time applicability outweigh the cost consideration. The results in Sections 4.1 and 4.2 discussed the various "settings" that affect run and training time - in Fig. 8 they would correspond to the dashed lines. For conventional solvers, changing these settings affects the slope, whereas for NNs they mostly impact the y-intercept, i.e., \(C_{\text{up-front}}\); in either case, as expected, a different "setting" will change \(n_{\text{critical}}\). Hence, the decision for using NN-based methods largely hinges on whether we expect sufficiently many evaluations \(n\).
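The break-even point implied by (17) is easy to make concrete: setting \(C_{\text{up-front}}+C_{\text{run-time}}^{\text{NN}}\,n=C_{\text{run-time}}^{\text{solver}}\,n\) gives \(n_{\text{critical}}=C_{\text{up-front}}/(C_{\text{run-time}}^{\text{solver}}-C_{\text{run-time}}^{\text{NN}})\). A minimal sketch with purely illustrative cost figures, not measured values from this paper:

```python
# Break-even number of evaluations implied by (17); all figures are illustrative.
def n_critical(c_upfront, c_run_solver, c_run_nn):
    assert c_run_solver > c_run_nn, "NNs only amortise if cheaper per evaluation"
    return c_upfront / (c_run_solver - c_run_nn)

# e.g., one hour of up-front cost against 1 s per solver call vs. 10 ms per NN call:
print(n_critical(3600.0, 1.0, 0.01))  # ~3636 evaluations to break even
```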
Here, it is important to point out that the NN will be trained for a specific problem setup, and a change in the setup, e.g., another network configuration, requires a new training process. In this aspect of "flexibility", conventional solvers have an important advantage over NN-based approaches. Addressing this lack of flexibility is of paramount importance for adopting NN-based simulation methods, and we see three routes forward for this challenge: 1) Reducing the up-front cost \(C_{\text{up-front}}\) by tailoring, for example, the learning algorithms, the NN architectures, and the regularisation schemes to the applications; this can largely be seen in the context of actively controlling the trade-off between accuracy and training time. 2) Finding use cases with large \(n\), i.e., highly repetitive tasks. 3) Designing hybrid setups - similar to SAS-based methods - in which repetitive sub-problems are solved by NNs and conventional solvers handle computations that require a lot of flexibility. Figure 8: Total cost of different approaches in dependence of the number of evaluations. ## 6 Conclusion This paper presented a comprehensive analysis of the use of Physics-Informed Neural Networks (PINNs) for power system dynamic simulations. We show that PINNs (i) are 10 to 1'000 times faster than conventional solvers, (ii) do not face issues of numerical instability, unlike conventional solvers, and (iii) achieve a decoupling between the power system size and the required solution time. However, PINNs are less flexible (i.e., they do not easily handle parameter changes) and require an up-front training cost. Overall, this makes PINN-based solutions well-suited for repetitive tasks as well as tasks where run-time speed is crucial, such as screening. Besides the comparison between conventional and NN-based methods, this paper conducts a deeper analysis of the parameters that affect the performance of the NN solutions. In that respect, we introduce a new NN regularisation, called dtNN, as an intermediate step between NNs and PINNs. We show that PINNs achieve overall higher levels of accuracy and more balanced error distributions, thanks to the evaluation of the collocation points. **Funding:** This work was supported by the European Research Council [Grant Agreement No: 949899].
2306.12874
Charting nanocluster structures via convolutional neural networks
A general method to obtain a representation of the structural landscape of nanoparticles in terms of a limited number of variables is proposed. The method is applied to a large dataset of parallel tempering molecular dynamics simulations of gold clusters of 90 and 147 atoms, silver clusters of 147 atoms, and copper clusters of 147 atoms, covering a plethora of structures and temperatures. The method leverages convolutional neural networks to learn the radial distribution functions of the nanoclusters and to distill a low-dimensional chart of the structural landscape. This strategy is found to give rise to a physically meaningful and differentiable mapping of the atom positions to a low-dimensional manifold, in which the main structural motifs are clearly discriminated and meaningfully ordered. Furthermore, unsupervised clustering on the low-dimensional data proved effective at further splitting the motifs into structural subfamilies characterized by very fine and physically relevant differences, such as the presence of specific punctual or planar defects or of atoms with particular coordination features. Owing to these peculiarities, the chart also enabled tracking of the complex structural evolution in a reactive trajectory. In addition to visualization and analysis of complex structural landscapes, the presented approach offers a general, low-dimensional set of differentiable variables which has the potential to be used for exploration and enhanced sampling purposes.
Emanuele Telari, Antonio Tinti, Manoj Settem, Luca Maragliano, Riccardo Ferrando, Alberto Giacomello
2023-06-22T13:35:34Z
http://arxiv.org/abs/2306.12874v1
# Charting nanocluster structures via convolutional neural networks ###### Abstract A general method to obtain a representation of the structural landscape of nanoparticles in terms of a limited number of variables is proposed. The method is applied to a large dataset of parallel tempering molecular dynamics simulations of gold clusters of 90 and 147 atoms, silver clusters of 147 atoms, and copper clusters of 147 atoms, covering a plethora of structures and temperatures. The method leverages convolutional neural networks to learn the radial distribution functions of the nanoclusters and to distill a low-dimensional chart of the structural landscape. This strategy is found to give rise to a physically meaningful and differentiable mapping of the atom positions to a low-dimensional manifold, in which the main structural motifs are clearly discriminated and meaningfully ordered. Furthermore, unsupervised clustering on the low-dimensional data proved effective at further splitting the motifs into structural subfamilies characterized by very fine and physically relevant differences, such as the presence of specific punctual or planar defects or of atoms with particular coordination features. Owing to these peculiarities, the chart also enabled tracking of the complex structural evolution in a reactive trajectory. In addition to visualization and analysis of complex structural landscapes, the presented approach offers a general, low-dimensional set of differentiable variables which has the potential to be used for exploration and enhanced sampling purposes. [email protected] [email protected] [email protected] ## 1 Introduction Finite-size aggregates - of atoms, molecules or colloidal particles - can present a much broader variety of structures than infinite crystals, because they are not constrained by translational invariance on an infinite lattice. For example, the _structural landscape_ of small metal particles that consist of a few tens to a few hundreds of atoms is much richer than that of their bulk material counterparts [1, 2, 3, 4]. Different factors cooperate in rendering this variegated scenario: first of all, possible structures are not limited to fragments of bulk crystals, but they include non-crystalline motifs, such as icosahedra or decahedra, which contain fivefold symmetries that are forbidden in infinite crystals [5]. Moreover, for small sizes, planar, cage-like, and amorphous clusters have also been observed [6, 7, 8], along with hybrid structures that exhibit features associated to more than one motif within the same cluster [9]. Adding to this already complex scenario, metal nanoclusters are very likely to present defects, of which there are many different types. Volume defects, for instance stacking faults and twin planes, are frequently observed in experiments and simulations [10, 11, 12, 13, 14]. Furthermore, surface reconstructions are known to occur in several clusters, [15, 16, 17, 18] and internal vacancies can also be stabilized in some cases [19, 20]. Owing to the complexity of the structural landscape of nanoclusters, there is an urgent need for a robust classification method that can separate their structures into physically meaningful groups, possibly producing an informative chart of the structural landscape in terms of a small number of collective variables (CVs).
In addition to providing a low-dimensional representation of the structural landscape, CVs are an essential tool of techniques to enhance sampling in configuration space, such as umbrella sampling, [21] metadynamics, [22] temperature-accelerated MD [23], and many others. A common trait to most enhanced sampling approaches is the requirement that the chart be differentiable with respect to atomic coordinates, _i.e._, that the CVs are differentiable functions of the coordinates. Machine learning (ML) is emerging as an invaluable analysis tool in the field of nanoclusters, as it allows one to efficiently navigate the complexity of the structural landscape by extracting meaningful patterns from large collections of data. ML has already found application in microscopy image recognition [24], dimensionality reduction and exploration of potential energy surfaces [25], structural recognition [26, 25, 27], characterization of the local atomic environment [28, 29], and machine-learnt force fields for metals [30]. One of the main challenges in the study of nanoclusters concerns the identification of descriptors that can discriminate the various structural classes. The availability of such a tool is crucial for navigating the landscape of structures generated during simulations. In this context, the histogram of the interatomic distances, _i.e._ the radial distribution function (RDF), has been used to study the solid-solid transitions in metallic/bimetallic clusters via metadynamics, [31] owing to its capability to encode structural information. Another widely used approach is Common Neighbor Analysis (CNA) [32], a tool which relies on analyzing local atomic coordination signatures for individual atoms [33, 26]. Often, arbitrary rules [33, 9] are then applied to CNA signatures of the atoms as a means to assign the whole nanocluster to a structural family. Albeit widely used and informative, CNA still presents certain drawbacks. First, CNA classifications are based on the arrangement of first neighbors around any given atom, and therefore they do not directly encode information on the overall shape of the nanoparticles. In addition, even though CNA can be used for charting the structural landscape and for unsupervised clustering to obtain very refined groupings of structures (_e.g._, along the lines developed by Roncaglia & Ferrando [26]), the resulting chart is non-differentiable. In this work, we propose to use a descriptor capable of capturing in full generality the most important structural features of metal nanoclusters -the RDF- and feed it to an artificial neural network (ANN) that is trained to perform an unsupervised dimensionality reduction, yielding a low-dimensional, informative representation, where data are distributed according to their structural similarities. We start off by showing that RDFs are excellent descriptors of nanocluster structures, given their capability to describe both the local [34] and global order together with the overall shape of diverse systems, and then we proceed to discuss the results obtained by using convolutional ANNs to reduce the dimensionality of the original descriptors. The combination of RDFs and ANNs allowed us to learn a differentiable map from the atomic positions to a low-dimensional (3D) chart of the structural features of nanoclusters of various sizes and metals. The employed datasets contain hundreds of thousands of unique structures obtained by parallel-tempering molecular dynamics (PTMD) simulations [9, 35].
It was possible to classify in an unsupervised manner this wealth of structures, reproducing the well-known CNA classes and, additionally, being able to distinguish subtle features present in metal nanoclusters, including the location of twinning planes and stacking faults, surface defects, central vacancies in icosahedra, and intermediate/distorted structures. The chart also allowed us to track and describe in detail dynamical structural transformations. Additional advantages of the present chart are its transferability and robustness, which were demonstrated using independent datasets of metal clusters of varying size and chemical nature, together with its differentiability (and hence suitability for CV-based exploration and biasing in molecular dynamics). ## 2 Results and discussion Our goal is to gain insights into the structural complexity of metal nanoclusters by means of a differentiable map of the configuration space onto a low-dimensional, yet sufficiently informative manifold (the chart). The method consists in generating, for every cluster configuration in the dataset, a set of high-dimensional descriptors, the RDFs, which are known to describe both the local structural order and the global shape, and in distilling this information, representing it in a low-dimensional, highly compressed form. The specific ANN architecture we chose to perform the unsupervised dimensionality reduction is that of an autoencoder (AE) [36] endowed with convolutional layers that render it highly specialized at learning from numerical sequences [37]. A dimensionality reduction step follows the convolutions, yielding a physically informed three-dimensional (3D) chart of the structural landscape of our dataset, which allows one to navigate and easily understand it. Finally, we apply a clustering technique to the 3D chart to gauge its quality and to identify different structural families. AEs constitute a particular class of ANNs that is highly specialized in the unsupervised dimensionality reduction of data [36]. AEs are designed to reproduce the input while forcing the data through a bottleneck with severely reduced dimensionality (Fig. 1). In this way, the network needs to learn, in the first section of the network (encoder), an efficient representation of the data, in such a way that the information can then be reconstructed by the second half of the network (decoder) with sufficient accuracy. The quality of the reconstruction is measured by a loss function that is also used in training the network. Convolutional layers, which are specialized at learning from ordered sequences, are adopted in the AE presented here, because discretized RDFs are by all means sequences. They work by applying different kernels that slide along the data, allowing the recognition of local features and patterns, which makes them well suited for the analysis of inputs like signals (using 1d convolutional kernels) or images (2d kernels). Moreover, the connections between the nodes and the related parameters are considerably reduced as compared to the fully connected layers used in standard ANNs, which decreases the computational cost while allowing for better performances. Figure 1: Simple sketch of the autoencoder architecture, showing how encoder and decoder meet at a low-dimensional (3D) bottleneck. In order to test the method, we took advantage of the large dataset of nanocluster structures produced by the group [9, 35] via parallel tempering molecular dynamics (PTMD) for gold, silver, and copper nanoclusters of different sizes.
In the next section we discuss in detail the results obtained for the most challenging case -a gold cluster of 90 atoms, Au\({}_{90}\)- while results relative to other metals and sizes will be shown in later sections. ### Structural landscape of Au\({}_{90}\) Gold nanoclusters represent an ideal test case, owing to the broad variety of structures [6, 7, 8, 9, 15] they present, which include face-centered-cubic (fcc) lattice, twins, icosahedra (lh), and decahedra (Dh). In the following, nanoclusters will be broadly classified into such standard structural families by CNA (in addition to the mix and amorphous classes), as used by Settem et al. [9], with the aim of having an independent benchmark for our unsupervised study. Here we focus on a small gold nanocluster, Au\({}_{90}\), which is characterized by an extremely challenging structural landscape, owing to the large fraction of surface atoms. In particular, we chart a set of Au\({}_{90}\) configurations extracted from PTMD simulations [9] exploring a total of 35 temperatures ranging from 250 K to 550 K. Starting from an initial set of 921,600 atom configurations, we performed a local minimization and filtered out duplicates, reducing the dataset to 49,016 independent configurations. As previously mentioned, RDFs were chosen because they are general descriptors of short and long range order [38, 39] that are invariant with respect to rototranslation and permutation of the atom coordinates. The aptness of RDFs as structural descriptors is well demonstrated by Fig. 2, in which the RDFs of all CNA classes (fcc, twin, Dh, lh, mix, and amorphous) are well separated. We will show in the following that this descriptive power also applies to other metals and nanocluster sizes, which actually have a less rich structural landscape. However, a major drawback of using a probability distribution as a descriptor -even in its discretized version- is its high dimensionality. Figure 2: Radial distribution function families for Au\({}_{90}\). Colors reflect cluster structure classification provided by CNA. Blue is used for Dh, green for Twin, red for Fcc, orange for lh, purple for Mix, and pink for Amorphous. Shaded areas represent intervals containing 90 % of the data for each CNA label, with the lower boundary representing the 0.05 quantile of the RDF population and the upper boundary the 0.95 quantile. Our approach to providing an efficient charting of the structural landscape of metal nanoclusters, _i.e._, a low-dimensional representation, relies therefore on a dimensionality reduction step. A large number of RDFs, corresponding to individual PTMD-derived structures, are used to train an autoencoder (AE), which automatically learns to compress the high-dimensional RDF information to a 3D latent representation (Fig. 1). Our AE is composed of an input and an output layer and a central block comprising the bottleneck, formed by three fully-connected layers, while the cores of the encoder and the decoder are formed by convolutional layers (Fig. 1). The training was run by feeding the AE with the RDF dataset (49,016 independent samples), split into training and validation sets; the mean squared error (MSE) between the output and the input RDF is used as the loss function. We chose to adopt a latent space dimensionality of 3. This choice allowed for better performances in terms of the loss function as compared to higher compressions, while still allowing for a convenient visual representation.
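The architecture of Fig. 1 can be sketched as follows. The paper does not prescribe a deep learning framework, so this is a minimal PyTorch sketch; the layer counts and channel widths are illustrative and smaller than the actual model (five convolutional blocks per side and a central block of three fully-connected layers, as detailed in the Methods):

```python
import torch
import torch.nn as nn

# Illustrative 1D-convolutional autoencoder with a 3D bottleneck, in the spirit
# of Fig. 1; not the authors' exact architecture.
class RDFAutoencoder(nn.Module):
    def __init__(self, n_bins=512, n_cv=3):
        super().__init__()
        self.encoder = nn.Sequential(
            # each block: convolution, ReLU, batch normalization (as in Methods)
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(), nn.BatchNorm1d(8),
            nn.Conv1d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(), nn.BatchNorm1d(16),
            nn.Flatten(),
            nn.Linear(16 * (n_bins // 4), n_cv),   # 3D bottleneck: the CVs
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_cv, 16 * (n_bins // 4)),
            nn.Unflatten(1, (16, n_bins // 4)),
            nn.ConvTranspose1d(16, 8, kernel_size=5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, rdf):            # rdf: (batch, 1, n_bins) discretised RDFs
        cv = self.encoder(rdf)         # (batch, 3) latent chart coordinates
        return self.decoder(cv), cv

# Training minimises the MSE reconstruction loss, as stated in the text:
# loss = nn.functional.mse_loss(reconstruction, rdf)
```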
We refer to the Supporting Information for a comparison of the results obtained by varying the dimensionality of the latent space. The 3D chart obtained by the AE is shown in Fig. 3 with datapoints colored by their CNA label. This representation clearly indicates how each structural family is grouped in separate regions of the chart and how their spatial ordering and distance reflect affinities among these families: similar structures are placed close together (_e.g._, fcc and twin), while structures that share common features occupy intermediate regions (_e.g._, the twin region is interposed between fcc and Dh). Overall, the obtained chart allows for a physically meaningful representation of the structures. The scatter in the data suggests that the resolution allowed by the CNA summary labels is not fully conclusive, and that further analysis can allow for a better understanding of the physical information encoded in the distribution of the structures inside the latent space and, consequently, for a finer discrimination of different families of structures. In order to increase the structural resolution and to have deeper insights into the physical information encoded in the latent space, we applied a clustering technique to identify meaningful and coherent regions in the chart. In particular, we chose a non-parametric technique known as mean shift [40]. Applying this method to the 3D chart of Fig. 4 was justified not only by the non-parametric nature of the clustering technique but also by its aptness at dealing with clusters of different sizes and shapes. The only input variable required by mean shift is the bandwidth, which dictates the resolution of the analysis, with smaller bandwidths leading to a more detailed parceling of the data. We chose a bandwidth that yields a robust clustering of the chart with sufficient detail, as discussed in the Supporting Information. Our analysis resulted in a robust discrimination of 27 major regions for the Au\({}_{90}\) chart, corresponding to 27 different major structural families, as reported in Fig. 4. Figure 3: Visualization of the 3D chart generated via convolutional AE for the Au\({}_{90}\) dataset, from different perspectives. Individual points refer to a given Au\({}_{90}\) configuration in the dataset mapped according to their latent space representation. The three latent coordinates are referred to as _CVs_. Points are colored following their (independent) CNA label classification; the color code is the same used in Fig. 2. Figure 4: A) Representative samples for each of the 27 structural families identified via application of the mean shift clustering algorithm on the latent space representation of the Au\({}_{90}\) dataset. These 27 classes were subsequently grouped in 7 bigger families by similarity. Atom colors refer to their coordination: green represents atoms with fcc coordination, red stands for hcp coordination, white for neither of the previous ones. Atomistic representations with transparency report 3D views, whereas those in solid colors represent cross sections. Every structure is given a numeric index associated with the label of the belonging cluster and a particular color. The table on the right reports both the numeric and color labels of the clusters along with a description of the various structures. B) Single view of a 3D plot, analogous to the one on the extreme right of Fig.
3, except for the coloring, which is now representative of the labels assigned by the mean shift through the same color coding reported in panel A. C) Mean shift family fractions as a function of the temperature in the whole PTMD dataset. The color code is the same as in panels A and B. More likely structures are represented with the same name of the macro-family, numeric index and color of panel A. D) Plot analogous to panel C, with the only difference that the PTMD data has been classified using the CNA label classification, as in the work of Settem et al. [9]. Color code and labeling are the same used in Fig. 2. From the figure it is immediately apparent how the mean shift classification is able to distinguish and split clusters that belong to spatially separated regions of the chart, properly reflecting the ordering of the data. Representative structures of each mean shift family are shown in Fig. 4A, while Fig. 4B shows the 3D chart with the points colored according to the same families. They are broadly categorised into lh, Dh, fcc, faulted fcc, faulted hcp, intermediates, and amorphous. Faulted fcc nanoclusters are those with a predominant fcc part but which contain twin planes and/or stacking faults. Faulted hcp clusters are those with a predominant hcp part but which contain twin planes and/or stacking faults. Typically, structures observed in experiments and simulations are classified into basic structural families [41, 42, 43, 33, 9] which rarely capture the fine geometrical details within a given family. In contrast, our approach leads to a physically meaningful classification along with capturing the fine structural details, by splitting the broader families into several subfamilies. A closer look at the various fcc and hcp faulted nanoclusters illustrates this point. There are three subfamilies (cluster-3, cluster-11, cluster-15) which contain only one hcp plane. Cluster-3, referred to as 2:1 fcc, consists of two and one fcc plane(s) on either side of the hcp plane. Similarly, clusters-11, 15 are 1:1 fcc with differing shapes. When the hcp plane is adjacent to the surface layer, we have hcp islands (cluster-7). Cluster-10 has two converging hcp islands. In cluster-4, local surface reconstruction occurs along with a single hcp plane. Moving on to faulted hcp structures, three hcp planes converge in cluster-16. With the increase in the number of parallel hcp planes, we have either stacking faults (cluster-14) or an fcc island (cluster-21), which contains one fcc plane (the opposite of an hcp island). In the extreme case, we have full hcp particles (clusters-20, 25). Clusters-17 and 23 both undergo local surface reconstruction similar to cluster-4. In the fcc families, we have the conventional fcc structures (cluster-5) and fcc structures with local surface reconstruction (cluster-13). In the case of decahedra, there are five sub-families. Clusters-8, 9, and 12 are all conventional decahedra. In cluster-9, the decahedral axis is at the periphery, as opposed to clusters-8 and 12. Additionally, cluster-12 has a partial cap on top (atoms belonging to the cap are shown in red color). Decahedra in cluster-2 have an hcp island on the surface. Finally, decahedra also exhibit reconstruction at the reentrant grooves, resulting in icosahedron-like features (cluster-1). There are three icosahedral clusters: cluster-18 consists of incomplete non-compact icosahedra; cluster-19 is a combination of lh and lh+Dh (has features of both lh and Dh), while cluster-26 is a combination of lh+Dh and lh+amor (has features of both lh and amorphous).
Finally, we have intermediate structures in cluster-6. The structural distributions of Au\({}_{90}\) in the PTMD data, _i.e._, the fractions of the various families as a function of temperature, are shown according to mean shift and CNA labels in Figs. 4C and D, respectively. In both cases, we find the conventional structure families. However, mean shift further refines the CNA-based classification [9]. For instance, with mean shift, we have a clear separation of the various types of Dh that were previously grouped together in a broad group of mixed structures. In the case of faulted structures, there is a prominent faulted fcc cluster (Faulted fcc-3), while all other faulted structures (band between Faulted fcc-3 and Dh-8 in Fig. 4C) have very low fractions. It is noteworthy that mean shift can classify even structures that have a very low probability of occurrence. In short, the Au\({}_{90}\) analysis showcased the descriptive power of RDFs and the capability of the unsupervised dimensionality reduction performed by the AE to properly compress information. Through the AE we were able to generate a highly physical representation of the data, which, rather than simply splitting different structures, is able to coherently distribute them in a 3D chart according to their physical similarities. As a consequence, the subsequent independent classification via mean shift easily identified a wealth of distinct structures and underscored the capability of the approach to distinguish both local and global structural motifs: location of twinning planes, surface defects, distorted cluster shapes, etc. ### Generality of the approach In this section we show that the approach adopted for Au\({}_{90}\) is of general applicability. At the root of such generality is the wealth of structural information carried by RDFs, which are expected to be valuable for a broad class of systems that includes nanoclusters of other metals and sizes, as showcased below, but is not limited to them [3, 28]. Here we focus on larger cluster sizes that, as a general trend, show a lower variety of structures as compared to smaller ones. In particular, we study clusters of \(147\) atoms with elemental gold (Au\({}_{147}\)), copper (Cu\({}_{147}\)), and silver (Ag\({}_{147}\)). These two latter cases exhibit rather different properties as compared to the gold clusters; in particular, they show a lower differentiation in the structural landscape, which is mainly dominated by lh structures. We discuss only selected structural families identified by the method for the three cases that best showcase the discerning capabilities of the method: the faulted structures characteristic of Au\({}_{147}\) and the different types of lh present in Ag\({}_{147}\). Results for Cu\({}_{147}\) are similar to Ag\({}_{147}\) and are reported in the Supporting Information. These two examples put our approach to the test, because these two families are characterized by distinct structural features: faulted structures mainly differ by small changes in the overall shape of the particles and by their atomic coordination, while lh have more similar shapes and lower degrees of crystallinity. Figure 5A shows that, in the case of Au\({}_{147}\), our approach is capable of distinguishing fine features in the large family of faulted structures, which are broadly grouped into faulted fcc and faulted hcp, in analogy to Au\({}_{90}\).
Figure 5: A) Cross-sections of the different types of twin families obtained by using mean shift clustering on the latent space representation of Au\({}_{147}\). The families were split into two groups, in the same fashion as our treatment of the Au\({}_{90}\) twin structures. Colors of the atoms refer to their individual coordination, similarly to Fig. 4. Every structure is labeled with the same alphanumeric index of Fig. 6A, where the 3D chart of Au\({}_{147}\) is depicted. B) The four different families of icosahedral structures for Ag\({}_{147}\) are sketched. As customary, the families were extracted via mean shift clustering in the 3D space resulting from the encoding of the Ag\({}_{147}\) dataset. The six-atom rosette defects are highlighted in red. The first three figures on the left are three-dimensional representations, the last figure is a cross section. A complete description of the clustering of all the Ag\({}_{147}\) structures can be found in the Supporting Information. In the standard faulted fcc (A5, corresponding to a standard double twin), there is a single hcp plane with at least one fcc plane on either side. When the hcp plane is adjacent to the surface layer, we have hcp islands (A10) or sometimes partial hcp islands (A13, A14). In addition, an hcp plane and an hcp island can occur within the same structure (A19). When there is more than one hcp plane, stacking defects are observed. In the extreme case, the structure can be completely hcp (A20) or host an fcc island (A16). When there are two hcp planes, depending on the location of the hcp planes, we have either the central stacking fault (A15) or the peripheral stacking fault (A9). In the standard faulted hcp (A18), there is a single fcc plane with at least one hcp plane on either side. Finally, we have the faulted hcp cluster with converging hcp planes (A11). Owing to the particular characteristics of silver, the structural landscape of Ag\({}_{147}\) is largely dominated by icosahedra, which the clustering method is able to split into four subfamilies (Fig. 5B). Conventional lh containing surface vacancies are the dominant ones among them. Icosahedra also undergo reconstruction and disordering through "rosette" defects on the surface. When the disordering increases further, we observe lh with surface disordering. Finally, one can recognize lh with a central vacancy, where the central atom is missing, as shown in the cross section in the rightmost panel of Fig. 5B. Distinguishing with ease the latter structural subfamily is a feature of our approach; indeed, CNA can hardly recognize icosahedra with a central vacancy, because it relies on the (missing) lh-coordinated atom to identify the lh class. In summary, for all the considered cases, the method proved to be transferable and robust, being capable of characterizing the wealth of structures of Au\({}_{147}\) and giving insights into the fine features distinguishing lh subclasses for Cu\({}_{147}\) and Ag\({}_{147}\). ### Dynamical structural transitions The previous sections demonstrated how the method at hand is capable of generating reliable, low-dimensional structural charts from large datasets of nanocluster configurations for different metals and sizes. In all considered cases, the charts, informed by RDFs, excelled at distributing the different families of structures in a physically meaningful fashion, keeping similar structures closer while positioning different ones far apart.
The method was able to distinguish both structures presenting major shape differences (as faulted fcc and hcp in Au nanoclusters) and structures with lower degrees of crystallinity and closer overall shape (lh subfamilies). In other words, the three CVs defining the chart can discriminate between different metastable states of the systems studied while maintaining an insightful ordering among them. These features suggest that the approach can be used for describing structural transitions occurring along reactive trajectories, _e.g._, obtained by MD simulations. To test this idea, we use the chart to study a continuous dynamical trajectory (Fig. 6). We consider a 2 \(\mu\)s unbiased MD run of Au\({}_{147}\) at \(396\) K. At this temperature, the most probable structure for Au\({}_{147}\) is Dh [9]. By choosing as initial configuration an lh structure, which is very unlikely in such thermodynamic conditions, it is possible to observe a spontaneous lh \(\rightarrow\) Dh transition in an unbiased trajectory. In particular, we map 2 million individual MD snapshots on the chart through the AE in Fig. 1, which was previously trained on independent structures generated by PTMD. To be compatible with this representation, each snapshot undergoes a short local minimization. Figures 6A, B compare the structural chart of the entire PTMD dataset with the partial representation of the same chart as obtained from the unbiased MD trajectory. The trajectory progressively populates a connected, tube-shaped region of the chart, which smoothly joins the lh to the Dh domains, passing through intermediate, defected structures which belong to well-defined families. More in detail, the following structural pathway is observed: lh (cluster-4) \(\rightarrow\) distorted-lh (cluster-2) \(\rightarrow\) distorted-Dh (cluster-7) \(\rightarrow\) Dh (cluster-3), which is confirmed by analyzing the structures along the trajectory (Fig. 6C). Beginning from lh, there is an initial transition to distorted-lh, where the disorder increases and we start observing fcc-coordinated atoms in the nanocluster. The distorted-lh then changes to distorted-Dh, where the amount of fcc-coordinated atoms increases further. Apart from the difference in the amount of fcc, distorted-lh is geometrically similar to lh, while distorted-Dh is closer to Dh. Finally, the distorted-Dh transitions to Dh, which completes a gradual change from lh to Dh with physically meaningful changes along the tube-shaped region. In the absence of the chart, it would in principle be possible to perform a visual analysis of the lh \(\rightarrow\) Dh trajectory of roughly two million structures. However, it would be extremely cumbersome to identify the main thermally activated transformation, and to track the fine structural changes and fluctuations along the trajectory which are crucial for understanding the transition mechanisms. This difficulty is easily overcome by tracking changes in the chart coordinates as reported in Fig. 6C, which shows the time evolution of the CVs along the trajectory. Changes in CVs are found to correlate very well with structural changes. Three broad phases can then be distinguished during the evolution of the trajectory. In the initial phase (up to \(\sim\) 250 ns), the nanocluster is predominantly lh (cluster-4) with intermittent fluctuations to distorted-lh (cluster-2) and distorted-Dh (cluster-7). Figure 6: A) Structural chart of Au\({}_{147}\) containing 87,050 structures.
Points are colored according to the structural families identified by mean shift clustering, see also Fig. S4, now labeled using alphanumeric indexes to distinguish them from the families of Fig. 4. B) Plot of an unbiased MD simulation of Au\({}_{147}\) undergoing a structural transition from lh to Dh in the same chart as A. The points are colored using their mean shift classification obtained on the training dataset represented in panel A. Representative structures of the different regions are depicted in the plot. C) Scatter plots of the time evolution of the three CVs along the trajectory of panel B. Dark red dashed lines highlight two intervals in which the main transformations from lh to Dh occur. The colors of the points correspond to their mean shift labels as in panels A and B. Black dashed lines represent a running average of the scatter plots. Bottom panels report magnifications of the two main transitions with snapshots of the main structures observed. The actual lh \(\rightarrow\) Dh transition occurs around \(\sim\) 245 ns, followed by a long intermediate phase (spanning \(\sim\) 245 ns to \(\sim\) 1820 ns), in which fluctuations between Dh (cluster-3, dominant) and distorted-Dh (cluster-7, minor) are observed. A final transition step at \(\sim\) 1820 ns leads to the final phase, consisting of Dh with very few fluctuations to distorted-Dh. Here, we stress that this information can be obtained simply by following the CVs, even before analyzing the structures. We will now focus on the transition regions and look closely at the structural changes. For this purpose, we consider CV1. In the tube-like region, a continuous increase in CV1 is synonymous with a continuous change from lh to Dh. A zoomed plot of the first transition (between 240 ns and 260 ns) is shown in the lower left panel of Fig. 6C, see Fig. S7 for CV2 and CV3. The initial lh structures (I-A) transition to distorted-lh structures (II-A, III-A), where we begin to see the fcc-coordinated atoms along with Dh-like features. With further increase in CV1, there is a gradual change to distorted-Dh structures (IV-A, V-A). Finally, these structures transition to Dh structures which have an hcp island (VI-A, VII-A). Decahedra with an hcp island dominate the middle phase, and hcp island-free Dh are obtained after a final transition around \(\sim\) 1822 ns (shown in the lower right section of Fig. 6C). This second transition is marked by a slight increase in the mean CV1 value (black dashed line): initially, we have Dh with an hcp island (I-B, II-B), which transition to a better Dh (without hcp island) around \(\sim\) 1823 ns (V-B). It appears that this transition is aided by fluctuations to distorted-Dh intermediates (III-B, IV-B). After the transition to a better Dh (beyond \(\sim\) 1825 ns), there are three distinct horizontal branches. The dominant one, which has the highest CV1 value, corresponds to the perfect defect-free Dh (V-B). However, this structure often undergoes two types of local reconstructions near the reentrant groove (VI-B, VII-B), which coincide with two distinct values of CV1.
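In practice, the on-the-fly tracking described above amounts to encoding each locally minimised snapshot's RDF and following the three latent coordinates in time. A minimal sketch, assuming a trained `encoder` of the kind sketched earlier and a pre-computed batch `rdfs` of discretised RDFs (both names are illustrative):

```python
import torch

# Map each snapshot's RDF to the three CVs with the trained encoder and follow
# them along the trajectory; `encoder` and `rdfs` are assumed inputs.
def track_cvs(encoder, rdfs):
    """rdfs: float tensor of shape (n_frames, 1, n_bins); returns (n_frames, 3)."""
    encoder.eval()
    with torch.no_grad():
        return encoder(rdfs)

# A transition such as lh -> Dh then shows up as a step in, e.g., CV1 over time:
# cv_traj = track_cvs(encoder, rdfs); plt.plot(time_ns, cv_traj[:, 0])
```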
The preceding discussion underscores that the three deep CVs are capable of describing in a detailed and physical fashion what happens during a dynamical transition. The chart enables on-the-fly tracking of the system along its structural changes and describes transitions between different metastable states. This is further evidence of the physical insightfulness of the latent space generated starting from the RDFs, underscoring the reliability of the structural information contained in the charts and further showcasing the power of the approach. In particular, the method shows promise for characterizing and analyzing long trajectories generated via molecular simulations, enabling a fast and informed way to study and follow the time evolution of this type of system. Importantly, the differentiability of the coordinates of the latent space with respect to the atomic positions opens the way to address the challenge of biasing MD simulations of structural changes [31, 44]. The specific merit of this approach is to provide a natural route to devise a general, informative, and low-dimensional collective variable space capable of describing dozens of structural motifs. We plan to investigate structural transformations driven by deep-learnt collective variables in a separate communication. ## 3 Conclusions This work presents an original machine learning method capable of charting the structural landscape of nanoparticles according to their radial distribution function. The approach comprises two subsequent information extraction steps. The first consists in translating the atomic coordinates into RDFs, which encode information about the structure in a translationally, rotationally, and permutationally invariant way. The high-dimensional information contained in the RDF is then reduced to a low-dimensional (3D) and yet informative representation ("chart") by exploiting convolutional autoencoders. These deep-learnt collective variables are surprisingly good at describing structural features in a physically meaningful way, discriminating the different states of the system. The 3D charts of different metal nanoclusters were then analysed using a non-parametric clustering technique, which allowed us to classify the datapoints into structural families. The method succeeded at disentangling the complex structural motifs of nanoclusters having different shapes and metals (\(\mathrm{Au}_{90}\), \(\mathrm{Au}_{147}\), \(\mathrm{Ag}_{147}\), and \(\mathrm{Cu}_{147}\)), distinguishing also fine differences between faulted and mixed structures as well as small defects (icosahedra with a central vacancy, surface defects, etc.). Related structural motifs, _e.g._, fcc and faulted fcc/hcp, were found to occupy close regions of the chart, allowing us to garner insights also into dynamical structural transformations. Finally, the method further proved useful in the analysis of a long unbiased MD run of Au\({}_{147}\) undergoing a structural transition. The collective variables allowed us to accurately track and describe structural changes along the dynamics. This pushes the method's applicability beyond the simple analysis of structural differences in large datasets, making it a powerful tool for the inspection, interpretation and possibly generation of reactive trajectories between metastable states.
Indeed, the ability to discriminate with a high level of detail different metastable states, together with the intrinsic differentiability of neural networks, makes the encoded variables promising low-dimensional CVs for biased MD simulations. The excellent results obtained for metal nanoclusters, for which the method could learn to identify a variety of structures ranging from crystalline to faulted and amorphous, demonstrate the virtue of machine learning from radial distribution functions. Building on the generality of its descriptors, this machine learning framework could be used to chart the structural landscape of diverse kinds of systems, including non-metallic nanoparticles [45, 27] and colloidal assemblies [46, 47, 28], advancing our capability to classify, explore, and understand transitions in these systems. ## 4 Methods The original datasets we considered included hundreds of thousands of structures for each particular cluster size and type. The structures were generated through Parallel-Tempering Molecular Dynamics (PTMD) simulations (see the Supporting Information). Original structures were then locally minimized to discount thermal noise. In order to avoid redundancy in the data, due to duplicates in the locally minimized structures, the initial set of structures was filtered so as to select only unique samples. This selection was based on both CNA classification and potential energy. As a result, structures in the final dataset differed from each other by at least 0.1 meV in the potential energy or by CNA label, leading to a reduction in the number of structures to a few tens of thousands for every cluster type. The RDF of each configuration was obtained using kernel density estimation on the interatomic distances (using the KernelDensity library from the scikit-learn package [48]) with Gaussian kernels and a bandwidth of 0.2 nanometers. The RDFs were then discretized and processed by the autoencoder as described in Fig. 1. Input and output of the AE share the same size, equal to the total number of mesh points of the discretized RDFs. The convolutional part of the encoder is composed of 5 blocks, each made of a convolutional layer, a rectified linear unit activation function and a batch normalization. After the convolutions, the outputs are flattened and fed to a fully connected linear layer which outputs the 3 CV values, closing the encoder section. The decoder follows, mirroring the encoder. The 3 outputs of the encoder are fed to another fully connected layer whose output is reshaped and fed to 5 deconvolutional blocks that replicate, mirrored, the convolutional part of the encoder. Finally, in the output layer of the decoder, data are returned to their initial size. The output is compared to the input during training using an MSE loss. More details regarding the AE architecture parameters and the training can be found in the Supporting Information. After the training, the three-dimensional output of the bottleneck is evaluated for all the data to obtain a 3D chart, _e.g._, the one reported in Fig. 3. After the chart of the data has been generated, the mean shift [40] clustering technique is exploited to identify families of structures and evaluate the quality of the chart. Mean shift requires setting only one parameter, the bandwidth, dictating the resolution of the analysis. Bandwidth selection was performed by looking for intervals of values yielding an (almost) constant number of clusters, see Fig. S3.
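As a concrete illustration of this pipeline, the RDF featurization and the final clustering step can be sketched with standard scikit-learn calls. This is a minimal sketch, assuming each configuration is given as an \((N,3)\) array of atomic positions in nanometers; the distance mesh and the mean shift bandwidth below are illustrative placeholders rather than the values used in this work:

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.cluster import MeanShift

def rdf_descriptor(positions, grid, bandwidth=0.2):
    """Kernel density estimate of the interatomic-distance distribution
    (Gaussian kernel, 0.2 nm bandwidth), discretized on a fixed mesh so
    that all configurations share one input size."""
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    iu = np.triu_indices(len(positions), k=1)          # unique pairs only
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
    kde.fit(dists[iu].reshape(-1, 1))
    return np.exp(kde.score_samples(grid.reshape(-1, 1)))

grid = np.linspace(0.0, 2.0, 384)                      # illustrative mesh (nm)
# descriptors for a list of (N, 3) configurations:
# rdfs = np.stack([rdf_descriptor(p, grid) for p in configurations])

# once the autoencoder has mapped each RDF to 3 CVs (array cvs of shape
# (n_structures, 3)), structural families follow from mean shift, whose
# single bandwidth parameter sets the resolution of the analysis:
# labels = MeanShift(bandwidth=0.3).fit_predict(cvs)
```

The autoencoder itself can be sketched in PyTorch. Only the overall layout is fixed by the description above (five convolution + ReLU + batch normalization blocks, a linear bottleneck of 3 CVs, a mirrored decoder, and an MSE reconstruction loss); the channel counts, kernel size, and mesh length used here are illustrative assumptions:

```python
import torch.nn as nn

class RDFAutoencoder(nn.Module):
    def __init__(self, mesh=384, channels=(8, 16, 32, 64, 128)):
        super().__init__()
        chans = (1,) + tuple(channels)
        # encoder: 5 blocks of convolution + ReLU + batch normalization,
        # each halving the mesh length
        self.enc = nn.Sequential(*[
            nn.Sequential(nn.Conv1d(a, b, 4, stride=2, padding=1),
                          nn.ReLU(), nn.BatchNorm1d(b))
            for a, b in zip(chans[:-1], chans[1:])])
        self.shape = (channels[-1], mesh // 2 ** len(channels))
        flat = self.shape[0] * self.shape[1]
        self.to_cv = nn.Linear(flat, 3)        # bottleneck: the 3 CVs
        self.from_cv = nn.Linear(3, flat)
        # decoder: mirrored transposed convolutions; the output layer has
        # no activation so data are returned to their initial size
        rev = chans[::-1]
        self.dec = nn.Sequential(*[
            nn.Sequential(nn.ConvTranspose1d(a, b, 4, stride=2, padding=1),
                          nn.ReLU(), nn.BatchNorm1d(b))
            if i < len(channels) - 1 else
            nn.ConvTranspose1d(a, b, 4, stride=2, padding=1)
            for i, (a, b) in enumerate(zip(rev[:-1], rev[1:]))])

    def encode(self, x):                        # x: (batch, 1, mesh)
        return self.to_cv(self.enc(x).flatten(1))

    def forward(self, x):
        h = self.from_cv(self.encode(x)).view(-1, *self.shape)
        return self.dec(h)

# training minimizes the reconstruction error, e.g.
# loss = nn.functional.mse_loss(model(rdf_batch), rdf_batch)
```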
Finally, the 50 configurations closest to each centroid were analyzed visually, in order to inspect the major structural features characterizing the different regions identified by the clustering. ## Acknowledgements This research is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 803213). This work has been supported by the project "Understanding and Tuning FRiction through nanOstructure Manipulation (UTFROM)" funded by MIUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Bando 2017 - grant 20178PZCB5. The authors acknowledge PRACE for awarding us access to Marconi100 at CINECA, Italy.
2305.00934
Variational Inference for Bayesian Neural Networks under Model and Parameter Uncertainty
Bayesian neural networks (BNNs) have recently regained a significant amount of attention in the deep learning community due to the development of scalable approximate Bayesian inference techniques. There are several advantages of using a Bayesian approach: Parameter and prediction uncertainties become easily available, facilitating rigorous statistical analysis. Furthermore, prior knowledge can be incorporated. However, so far, there have been no scalable techniques capable of combining both structural and parameter uncertainty. In this paper, we apply the concept of model uncertainty as a framework for structural learning in BNNs and hence make inference in the joint space of structures/models and parameters. Moreover, we suggest an adaptation of a scalable variational inference approach with reparametrization of marginal inclusion probabilities to incorporate the model space constraints. Experimental results on a range of benchmark datasets show that we obtain comparable accuracy results with the competing models, but based on methods that are much more sparse than ordinary BNNs.
Aliaksandr Hubin, Geir Storvik
2023-05-01T16:38:17Z
http://arxiv.org/abs/2305.00934v1
# Variational Inference for Bayesian Neural Networks under Model and Parameter Uncertainty ###### Abstract Bayesian neural networks (BNNs) have recently regained a significant amount of attention in the deep learning community due to the development of scalable approximate Bayesian inference techniques. There are several advantages of using a Bayesian approach: Parameter and prediction uncertainties become easily available, facilitating rigorous statistical analysis. Furthermore, prior knowledge can be incorporated. However, so far, there have been no scalable techniques capable of combining both structural and parameter uncertainty. In this paper, we apply the concept of model uncertainty as a framework for structural learning in BNNs and hence make inference in the joint space of structures/models and parameters. Moreover, we suggest an adaptation of a scalable variational inference approach with reparametrization of marginal inclusion probabilities to incorporate the model space constraints. Experimental results on a range of benchmark datasets show that we obtain comparable accuracy results with the competing models, but based on methods that are much more sparse than ordinary BNNs. Keywords: Bayesian neural networks; Structural learning; Model selection; Model averaging; Approximate Bayesian inference; Predictive uncertainty. ## 1 Introduction In recent years, frequentist deep learning procedures have become extremely popular and highly successful in a wide variety of real-world applications ranging from natural language to image analyses (Goodfellow et al., 2016). These algorithms iteratively apply some nonlinear transformations aiming at optimal prediction of response variables from the outer layer features. This yields high flexibility in modelling complex conditional distributions of the responses. Each transformation yields another hidden layer of features, which are also called neurons. The architecture/structure of a deep neural network includes the specification of the nonlinear intra-layer transformations (_activation functions_), the number of layers (_depth_), the number of features at each layer (_width_) and the connections between the neurons (_weights_). In the standard (frequentist) setting, the resulting model is trained using some optimization procedure (e.g. stochastic gradient descent) with respect to its parameters in order to fit a particular objective (like minimization of the root mean squared error or negative log-likelihood). Very often deep learning procedures outperform traditional statistical models, even when the latter are carefully designed and reflect expert knowledge (Refenes et al., 1994; Razi and Athappilly, 2005; Adya and Collopy, 1998; Sargent, 2001; Kanter and Veeramachaneni, 2015). However, typically one has to use huge datasets to be able to produce generalizable neural networks and avoid overfitting issues. Even though several regularization techniques (\(L_{1}\) and \(L_{2}\) penalties on the weights, dropout, batch normalization, etc.) have been developed for deep learning procedures to avoid overfitting to training datasets, the success of such approaches is not obvious. Unstructured pruning of the network, either by putting some weights to zero or by removing some nodes, has been shown to be possible. As an alternative to frequentist deep learning approaches, Bayesian neural networks represent a very flexible class of models, which are quite robust to overfitting (Neklyudov et al., 2018). However, they often remain heavily over-parametrized.
There are several implicit approaches for the sparsification of BNNs by shrinkage of weights through priors (Jylanki et al., 2014; Blundell et al., 2015; Molchanov et al., 2017; Ghosh et al., 2018; Neklyudov et al., 2017). For example, Blundell et al. (2015) suggest a mixture of two Gaussian densities and then perform a fully factorizable mean-field variational approximation. Ghosh et al. (2019) and Louizos et al. (2017) independently generalize this approach by suggesting Horseshoe priors (Carvalho et al., 2009) for the weights, providing even stronger shrinkage and automatic specification of the mixture component variances required in Blundell et al. (2015). Some algorithmic procedures can also be seen to correspond to specific Bayesian priors, e.g. Molchanov et al. (2017) show that Gaussian dropout corresponds to BNNs with log-uniform priors on the weight parameters. In this paper, we consider a formal Bayesian approach for jointly taking into account _structural uncertainty_ and _parameter uncertainty_ in BNNs as a generalization of the methods developed for Bayesian model selection within linear regression models. The approach is based on introducing latent binary variables corresponding to the inclusion-exclusion of particular weights within a given architecture. This is done by means of introducing spike-and-slab priors. Such priors for the BNN setting were suggested in Polson and Rockova (2018) and Hubin (2018). A computational procedure for inference in such settings was proposed in an early version of this paper (Hubin and Storvik, 2019) and was further used without any changes in Bai et al. (2020). An asymptotic theoretical result for the choice of prior inclusion probabilities is additionally presented in Bai et al. (2020); however, it is only valid when the number of parameters goes to infinity and when the prior variance of the slab components is fixed. Here, we go further in that we introduce hyperpriors on the prior inclusion probabilities and the variance of the slab components, making them stochastic, and we also allow for more flexible variational approximations based on multivariate Gaussian structures for the inclusion indicators. Additionally, we consider several alternative prediction procedures beyond fully Bayesian model averaging, including the posterior mean-based model and the median probability model, and perform a comprehensive experimental study comparing the suggested approach with several competing algorithms on several data sets. Using a Bayesian formalization in the space of models allows adapting the whole machinery of Bayesian inference in the joint model-parameter settings, including _Bayesian model averaging_ (BMA) (across all models) or _Bayesian model selection_ (BMS) of one _"best"_ model with respect to some model selection criterion (Claeskens et al., 2008). In this paper, we study BMA as well as the _median probability_ model (Barbieri et al., 2004, 2021) and _posterior mean_ model-based inference for BNNs. Sparsifying properties of BMS (in particular the median probability model) are also addressed within the experiments. Finally, following Hubin (2018), we will link the obtained _marginal inclusion probabilities_ to _binary dropout_ rates, which gives proper probabilistic reasoning for the latter. The inference algorithm is based on scalable stochastic variational inference. The suggested approach has similarities to binary dropout, which has become very popular (Srivastava et al., 2014).
However, while standard binary dropout can be seen as a Bayesian approximation to a Gaussian process model where only parameter estimation is taken into account (Gal, 2016), our approach explicitly _models_ structural uncertainty. In this sense, it is closely related to Concrete dropout (Gal et al., 2017). However, the model proposed by Gal et al. (2017) does not allow for BMS: The median probability model will either select all weights or nothing due to a strong assumption of having the same dropout probabilities for the whole layer. Furthermore, the variational approximation procedure applied in Gal et al. (2017) has not been studied in the model uncertainty context. At the same time, it is important to state explicitly that our approach does not currently aim at interpretable inference on Bayesian neural networks in most cases, with the exception of the special case of model selection in GLM models. Also, interpretable models could in principle be feasible if direct connections from all the layers to the responses are allowed. However, such cases are not addressed in this paper. The rest of the paper is organized as follows: Section 2 gives some background and discusses related work. The class of BNNs and the corresponding model space are mathematically defined in Section 3. In Section 4, we describe the algorithm for training the suggested class of models using the reparametrization of marginal inclusion probabilities. Section 4.3 discusses several predictive inference possibilities. In Section 5, the suggested approach is applied to two classical benchmark datasets, MNIST and FMNIST (for image classification), as well as PHONEME (for sound classification). We also compare the results with some of the existing approaches for inference on BNNs. Finally, in Section 6, some conclusions and suggestions for further research are given. Additional results are provided in the supplementary materials to the paper. ## 2 Background and related work Bayesian neural networks (BNNs) were already introduced a few decades ago by Neal (1992); MacKay (1995); Bishop (1997). BNNs take advantage of the rigorous Bayesian approach and are able to properly handle parameter and prediction uncertainty and can in principle also incorporate prior knowledge. In many cases, this leads to more robust solutions with less overfitting. However, this comes at the price of extremely high computational costs. Until recently, inference on BNNs could not scale to large and high-dimensional data due to the limitations of standard MCMC approaches, the main numerical procedure in use. Several attempts based on subsampling techniques for MCMC, which are either approximate (Bardenet et al., 2014, 2017; Korattikara et al., 2014; Quiroz et al., 2014; Welling and Teh, 2011) or exact (Quiroz et al., 2016; Maclaurin and Adams, 2014; Liu et al., 2015; Welling and Teh, 2011), have been proposed, but none of them is able to explore the parameter space efficiently in ultrahigh-dimensional settings. An alternative to MCMC techniques is to perform approximate Bayesian inference through variational Bayes, also known as variational inference (Jordan et al., 1999). Due to the fast convergence properties of variational methods, variational inference algorithms are typically orders of magnitude faster than MCMC algorithms in high-dimensional problems (Ahmed et al., 2012).
Variational inference has various applications in latent variable models, such as mixture models (Humphreys and Titterington, 2000), hidden Markov models (MacKay, 1997) and graphical models (Attias, 2000) in general. Graves (2011) suggested a methodology for scalable variational inference in Bayesian neural networks. This methodology was further improved by incorporating various variance reduction techniques, which are discussed in Gal (2016). As mentioned in the introduction, it has been shown that the majority of the weight parameters in neural networks can be pruned out from the model without a significant loss of predictive accuracy. However, pruning is typically done implicitly by deleting weights via ad-hoc thresholding. Yet, learning which parameters to include in a model can also be framed as a structure learning or model selection problem. As discussed in Claeskens and Hjort (2008), Steel (2020) or Hansen and Yu (2001) (among other venues), model selection and model averaging in statistics generally assume a discrete and countable (finite or infinite) set of models living on a corresponding model space. Models within a model space can differ in terms of the likelihood used, the link functions addressed, or which parameters are included in a linear or non-linear predictor. The purpose of model selection is to choose a single (best in some sense) model from a model space. This choice can lead to more interpretable models in some use cases (like variable selection in linear regression, Kuo and Mallick, 1998) or simply best models for some purpose (like prediction) in others (Geisser and Eddy, 1979); sometimes both coincide (Hubin et al., 2021), but that is not always the case (Breiman, 2001). Sparsity in model selection may or may not be of interest depending on the context, although parsimony in some sense is typically desired. At the same time, model selection often leads to problems with uncertainty handling, resulting in too narrow confidence/credible intervals for the parameters (Heinze et al., 2018), and thus may often result in similar problems for predictions, i.e. overfitting. Model averaging (if model uncertainty is properly addressed) can resolve these issues (Bornkamp et al., 2017) and has other advantages (Steel, 2020). Also, model-uncertainty-aware model selection, e.g. using the median probability model, is more robust to overfitting (Ghosh, 2015). There have been numerous works showing the efficiency and accuracy of model selection/averaging related to parameter selection through introducing latent variables corresponding to different discrete model configurations. In the Bayesian context, the posterior distribution can then be used to both select the best sparse configuration and address the joint model-and-parameter uncertainty explicitly (George and McCulloch, 1993; Clyde et al., 2011; Frommlet et al., 2012; Hubin and Storvik, 2018; Hubin et al., 2020, 2021). Spike-and-slab priors (Mitchell and Beauchamp, 1988) are typically used in this setting. All of these approaches have demonstrated both good predictive performance of the obtained sparse models and the ability to recover meaningful complex nonlinearities. They are, however, based on adaptations of Markov chain Monte Carlo (MCMC) and do not scale well to large high-dimensional data samples. Louizos et al.
(2017) also warn about the complexity of explicit discretization of model configurations within BNNs, as it causes an exponential explosion with respect to the total number of parameters, and hence the infeasibility of inference for high-dimensional problems. Polson and Rockova (2018) study the use of the spike-and-slab approach in BNNs from a theoretical standpoint. Logsdon et al. (2010) and Carbonetto et al. (2012) suggest a fully-factorized variational distribution capable of efficiently and precisely "linearizing" the computational burden of Bayesian model selection in the context of _linear_ models with an ultrahigh number of potential covariates, typical for genome-wide association studies (GWAS). In the discussion of his Ph.D. thesis, Hubin (2018) proposed combining the approaches of Logsdon et al. (2010); Carbonetto et al. (2012) and Graves (2011) for scalable approximate Bayesian inference on the joint space of models and parameters in deep Bayesian regression models. We develop this idea further in this article. ## 3 The model A neural network model links (possibly multidimensional) observations \(\mathbf{y}_{i}\in\mathcal{R}^{r}\) and explanatory variables \(\mathbf{x}_{i}\in\mathcal{R}^{p}\) via a probabilistic functional mapping with a mean parameter vector \(\mathbf{\mu}_{i}=\mathbf{\mu}_{i}(\mathbf{x}_{i})\in\mathcal{R}^{r}\): \[\mathbf{y}_{i}\sim\mathfrak{f}\left(\mathbf{\mu}_{i}(\mathbf{x}_{i}),\phi\right),\quad i\in\{1,...,n\}, \tag{3.1}\] where \(\mathfrak{f}\) is some observation distribution, typically from the exponential family, while \(\phi\) is a dispersion parameter. To construct the vector of mean parameters \(\mathbf{\mu}_{i}\), one builds a sequence of building blocks of hidden layers through semi-affine transformations: \[z_{ij}^{(l+1)}=g_{j}^{(l)}\left(\beta_{0j}^{(l)}+\sum_{k=1}^{p^{(l)}}\beta_{kj}^{(l)}z_{ik}^{(l)}\right),\quad l=1,...,L-1,\;j=1,...,p^{(l+1)}, \tag{3.2}\] with \(\mu_{ij}=z_{ij}^{(L)}\). Here, \(L\) is the number of layers, \(p^{(l)}\) is the number of nodes within the corresponding layer, while \(g_{j}^{(l)}\) is a univariate function (further referred to as the _activation function_). Further, \(\beta_{kj}^{(l)}\in\mathcal{R},k>0\) are the weights (slope coefficients) for the inputs \(z_{ik}^{(l)}\) of the \(l\)-th layer (note that \(z_{ik}^{(1)}=x_{ik}\) and \(p^{(1)}=p\)). For \(k=0\), we obtain the intercept/bias terms. Finally, we introduce latent binary indicators \(\gamma_{kj}^{(l)}\in\{0,1\}\) switching the corresponding weights on and off, such that \(\beta_{kj}^{(l)}=0\) if \(\gamma_{kj}^{(l)}=0\). In our notation, we explicitly differentiate between discrete structural/model configurations defined by the vectors \(\mathbf{\gamma}=\{\gamma_{kj}^{(l)},j=1,..,p^{(l+1)},k=0,...,p^{(l)},l=1,...,L-1\}\) (further referred to as models), constituting the model space \(\Gamma\), and parameters of the models conditional on these configurations, \(\mathbf{\theta}|\mathbf{\gamma}=\{\mathbf{\beta},\phi|\mathbf{\gamma}\}\), where only those \(\beta^{(l)}_{kj}\) for which \(\gamma^{(l)}_{kj}=1\) are included. This approach is (in the statistical science literature) a rather standard way to explicitly specify the model uncertainty in a given class of models and is used in e.g. Clyde et al. (2011); Frommlet et al. (2012); Hubin et al. (2021).
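To make the role of the indicators concrete, a single layer of the form (3.2) with weights switched on and off by \(\gamma_{kj}^{(l)}\) can be sketched in PyTorch as follows. This is a schematic forward pass under one sampled structure, not the inference procedure itself; the layer sizes and the inclusion probability used for the illustrative draw are arbitrary:

```python
import torch

def lbbnn_layer(z, beta, gamma, bias, activation=torch.relu):
    """Semi-affine transformation (3.2): only the weights whose latent
    binary indicator gamma equals 1 contribute to the pre-activation."""
    # z: (batch, p_l); beta, gamma: (p_l, p_next); bias: (p_next,)
    return activation(bias + z @ (gamma * beta))

p_l, p_next = 784, 400                                   # illustrative sizes
gamma = torch.bernoulli(torch.full((p_l, p_next), 0.2))  # one model draw
beta = torch.randn(p_l, p_next)                          # slab weight values
z_next = lbbnn_layer(torch.randn(32, p_l), beta, gamma, torch.zeros(p_next))
```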
A Bayesian approach is completed by specification of model priors \(p(\mathbf{\gamma})\) and parameter priors for each model \(p(\mathbf{\beta}|\mathbf{\gamma},\phi)\). If the dispersion parameter is present in the distribution of the outcomes, one also has to define a prior \(p(\phi|\mathbf{\gamma})\). Many kinds of priors on \(p(\mathbf{\beta}|\mathbf{\gamma},\phi)\) can be considered, including the mixture of Gaussians prior (Blundell et al., 2015), the Horseshoe prior (Ghosh et al., 2019; Louizos et al., 2017), or mixtures of g-priors (Li and Clyde, 2018), which could give further penalties to the weight parameters. We first follow our early preprint (Hubin and Storvik, 2019), as well as even earlier ideas from Hubin (2018) and Polson and Rockova (2018), and consider independent Gaussian spike-and-slab weight priors combined with independent Bernoulli priors on the latent inclusion indicators. This choice of priors corresponds to marginal spike-and-slab priors for the weights (Clyde et al., 2011): \[p(\beta^{(l)}_{kj}|\sigma^{2}_{\beta,l},\gamma^{(l)}_{kj})=\gamma^{(l)}_{kj}\mathcal{N}(0,\sigma^{2}_{\beta,l})+(1-\gamma^{(l)}_{kj})\delta_{0}(\beta^{(l)}_{kj}), \tag{3.3a}\] \[p(\gamma^{(l)}_{kj})=\text{Bernoulli}(\psi^{(l)}). \tag{3.3b}\] Here, \(\delta_{0}(\cdot)\) is the delta mass or "spike" at zero, \(\sigma^{2}_{\beta,l}\) is the prior variance of \(\beta^{(l)}_{kj}\), whilst \(\psi^{(l)}\in(0,1)\) is the prior probability of including the weight \(\beta^{(l)}_{kj}\) in the model. To automatically infer the prior variance and the prior probability of including a weight, we assume for \(\sigma^{2}_{\beta,l}\) a standard inverse Gamma hyperprior with hyperparameters \(a^{(l)}_{\beta},b^{(l)}_{\beta}\), and for \(\psi^{(l)}\) a Beta\((a^{(l)}_{\psi},b^{(l)}_{\psi})\) prior, giving the full hierarchical prior \[p(\beta^{(l)}_{kj}|\sigma^{2}_{\beta,l},\gamma^{(l)}_{kj})=\gamma^{(l)}_{kj}\mathcal{N}(0,\sigma^{2}_{\beta,l})+(1-\gamma^{(l)}_{kj})\delta_{0}(\beta^{(l)}_{kj}), \tag{3.4a}\] \[p(\sigma^{2}_{\beta,l})=\text{Inv-Gamma}(a^{(l)}_{\beta},b^{(l)}_{\beta}), \tag{3.4b}\] \[p(\gamma^{(l)}_{kj})=\text{Bernoulli}(\psi^{(l)}), \tag{3.4c}\] \[p(\psi^{(l)})=\text{Beta}(a^{(l)}_{\psi},b^{(l)}_{\psi}). \tag{3.4d}\] With the presence of a dispersion parameter, an additional prior is needed, see Dey et al. (2000). We will refer to our model as the Latent Binary Bayesian Neural Network (LBBNN) model. ## 4 Bayesian inference The main goal of inference under uncertainty in both models and parameters is to infer the posterior marginal distribution of some quantity of interest \(\Delta\) (for example the distribution of a new observation \(y^{*}\) conditional on new covariates \(\mathbf{x}^{*}\)) based on data \(\mathcal{D}\): \[p(\Delta|\mathcal{D})=\sum_{\mathbf{\gamma}\in\Gamma}\int_{\mathbf{\theta}\in\Theta_{\gamma}}p(\Delta|\mathbf{\theta},\mathbf{\gamma},\mathcal{D})p(\mathbf{\theta},\mathbf{\gamma}|\mathcal{D})\mathrm{d}\mathbf{\theta}, \tag{4.1}\] where \(\Theta_{\gamma}\) is the parameter space defined through \(\mathbf{\gamma}\). A standard procedure for dealing with complex posteriors is to apply Monte Carlo methods, which involve simulations from \(p(\mathbf{\theta},\mathbf{\gamma}|\mathcal{D})\).
For the model defined by (3.1)-(3.4) with many hidden variables, such simulations become problematic. The idea behind variational inference (Graves, 2011; Blei et al., 2017) is to apply the approximation \[\tilde{p}(\Delta|\mathcal{D})=\sum_{\mathbf{\gamma}\in\Gamma}\int_{\mathbf{\theta}\in\Theta_{\gamma}}p(\Delta|\mathbf{\theta},\mathbf{\gamma},\mathcal{D})q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})\mathrm{d}\mathbf{\theta} \tag{4.2}\] for some suitable (parametric) distribution \(q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})\) which, with appropriate choices of the parameters \(\mathbf{\eta}\), approximates the posterior well and is _simple_ to sample from. The specification of \(\mathbf{\eta}\) is typically obtained through the minimization of the Kullback-Leibler divergence from the variational family distribution to the posterior distribution, \[\mathrm{KL}(q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})||p(\mathbf{\theta},\mathbf{\gamma}|\mathcal{D}))=\sum_{\mathbf{\gamma}\in\Gamma}\int_{\Theta_{\gamma}}q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})\log\tfrac{q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})}{p(\mathbf{\theta},\mathbf{\gamma}|\mathcal{D})}\mathrm{d}\mathbf{\theta}, \tag{4.3}\] with respect to the variational parameters \(\mathbf{\eta}\). Compared to standard variational inference approaches, the setting is extended to include the discrete model identifiers \(\mathbf{\gamma}\). For an optimal choice \(\hat{\mathbf{\eta}}\) of \(\mathbf{\eta}\), inference on \(\Delta\) is performed through Monte Carlo estimation of (4.2), inserting \(\hat{\mathbf{\eta}}\) for \(\mathbf{\eta}\). The main challenge then becomes choosing a suitable variational family and a computational procedure for minimizing (4.3). Note that although this minimization is still a computational challenge, it will typically be much easier than directly obtaining samples from the true posterior. The final Monte Carlo estimation will be simple, provided the variational distribution \(q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})\) is selected such that it is simple to sample from. As in standard settings of variational inference, minimization of the divergence (4.3) is equivalent to maximization of the evidence lower bound (ELBO) \[\mathcal{L}_{VI}(\mathbf{\eta})=\sum_{\mathbf{\gamma}\in\Gamma}\int_{\Theta_{\gamma}}q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})\log p(\mathcal{D}|\mathbf{\theta},\mathbf{\gamma})\mathrm{d}\mathbf{\theta}-\mathrm{KL}\left(q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})||p(\mathbf{\theta},\mathbf{\gamma})\right) \tag{4.4}\] through the equality \[\mathcal{L}_{VI}(\mathbf{\eta})=\log p(\mathcal{D})-\mathrm{KL}\left(q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})||p(\mathbf{\theta},\mathbf{\gamma}|\mathcal{D})\right),\] which also shows that \(\mathcal{L}_{VI}(\mathbf{\eta})\) is a lower bound of the log marginal likelihood \(\log p(\mathcal{D})\). ### Variational distributions We will consider a variational family previously proposed for linear regression (Logsdon et al., 2010; Carbonetto et al., 2012), which we extend to the LBBNN setting.
Assume \[q_{\mathbf{\eta}}(\mathbf{\theta},\mathbf{\gamma})=q_{\mathbf{\eta}_{0}}(\phi)\prod_{l=1}^{L-1}\prod_{j=1}^{p^{(l+1)}}\prod_{k=0}^{p^{(l)}}q_{\kappa_{kj}^{(l)},\tau_{kj}^{(l)}}(\beta_{kj}^{(l)}|\gamma_{kj}^{(l)})q_{\alpha_{kj}^{(l)}}(\gamma_{kj}^{(l)}), \tag{4.5}\] where \(q_{\mathbf{\eta}_{0}}(\phi)\) is some appropriate distribution for the dispersion parameter, \[q_{\kappa_{kj}^{(l)},\tau_{kj}^{(l)}}\left(\beta_{kj}^{(l)}|\gamma_{kj}^{(l)}\right)=\gamma_{kj}^{(l)}\mathcal{N}(\kappa_{kj}^{(l)},\tau_{kj}^{2(l)})+(1-\gamma_{kj}^{(l)})\delta_{0}(\beta_{kj}^{(l)}), \tag{4.6}\] and \[q_{\alpha_{kj}^{(l)}}(\gamma_{kj}^{(l)})=\text{Bernoulli}(\alpha_{kj}^{(l)}). \tag{4.7}\] With probability \(\alpha_{kj}^{(l)}\in[0,1]\), the posterior of the weight \(\beta_{kj}^{(l)}\) will be approximated by a normal distribution with some mean and variance (the "slab"), and otherwise the weight is put to zero. Thus, \(\alpha_{kj}^{(l)}\) will approximate the marginal posterior inclusion probability of the weight \(\beta_{kj}^{(l)}\). Here, \(\mathbf{\eta}=\{\mathbf{\eta}_{0},(\kappa_{kj}^{(l)},\tau_{kj}^{2(l)},\alpha_{kj}^{(l)}),j=1,...,p^{(l+1)},k=0,...,p^{(l)},l=1,...,L-1\}\). A similar variational distribution has also been considered within BNNs through the dropout approach (Srivastava et al., 2014). For dropout, however, the final network is dense but trained through a Monte Carlo average of sparse networks. In our approach, the target distribution is different in the sense of including the binary variables \(\{\gamma_{kj}^{(l)}\}\) as part of the _model_. Hence, our marginal inclusion probabilities can serve as a particular case of dropout rates with a _proper_ probabilistic interpretation in terms of structural model uncertainty. The variational distribution (4.5)-(4.7), corresponding to the commonly applied mean-field approximation, can be seen as a rather crude approximation, which completely ignores all posterior dependence between the model structures or parameters. Consequently, the resulting conclusions can be misleading or inaccurate, as the posterior probability of one weight might be highly affected by the inclusion of others. Such a dependence structure can be built into the variational approximation either through the \(\gamma\)'s or through the \(\beta\)'s (or both). Here, we only consider dependence structures in the inclusion variables. We still assume independence between layers, but within layers we introduce a dependence structure by defining \(\mathbf{\alpha}^{(l)}=\{\alpha_{kj}^{(l)}\}\) to be a stochastic vector, which on the logit scale follows a multivariate normal distribution: \[\text{logit}(\mathbf{\alpha}^{(l)})\sim\text{MVN}(\mathbf{\xi}^{(l)},\mathbf{\Sigma}^{(l)}). \tag{4.8}\] Here, either a full covariance matrix \(\mathbf{\Sigma}^{(l)}\) or a low-rank parametrization of the covariance is possible. For the latter, \(\mathbf{\Sigma}^{(l)}=\mathbf{F}^{(l)}\mathbf{F}^{(l)T}+\mathbf{D}^{(l)}\), with \(\mathbf{F}^{(l)}\) being the factor part and \(\mathbf{D}^{(l)}\) the diagonal part of the low-rank form of the covariance matrix. This drastically reduces the number of parameters and allows for efficient computation of the determinant and inverse matrix. A particularly interesting case is when \(\mathbf{F}^{(l)}\) has rank zero, in which case we retain independence between the components but obtain some penalization of the variability of the \(\alpha_{kj}^{(l)}\)'s.
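The low-rank parametrization \(\mathbf{\Sigma}^{(l)}=\mathbf{F}^{(l)}\mathbf{F}^{(l)T}+\mathbf{D}^{(l)}\) of (4.8) is available directly in PyTorch's distributions module; a minimal sketch, with the layer width, rank, and initial values chosen purely for illustration:

```python
import torch
from torch.distributions import LowRankMultivariateNormal

n_weights, rank = 400 * 600, 5            # illustrative layer size and rank
xi = torch.zeros(n_weights)               # mean of logit(alpha)
F = 0.01 * torch.randn(n_weights, rank)   # factor part of the covariance
D = torch.ones(n_weights)                 # diagonal part of the covariance
q_logit_alpha = LowRankMultivariateNormal(xi, cov_factor=F, cov_diag=D)
# a reparametrized (differentiable) draw of the inclusion probabilities:
alpha = torch.sigmoid(q_logit_alpha.rsample())   # values in (0, 1)
```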
Under the parametrization (4.6)-(4.8), the parameters \(\{\mathbf{\xi}^{(l)},l=1,...,L-1\}\) and \(\{\mathbf{\Sigma}^{(l)},l=1,...,L-1\}\) are added to the parameter vector \(\mathbf{\eta}\). The reparametrization trick is then also performed for these parameters using the default representations for MVN and LFMVN, available out of the box in PyTorch's distributions module. ### Optimization by stochastic gradient For simplicity, we assume here that there is no dispersion parameter, so the target distribution is \(p(\mathbf{\beta},\mathbf{\gamma}|\mathcal{D})\). We can rewrite the ELBO (4.4) as \[\mathcal{L}_{VI}(\mathbf{\eta})=\sum_{\mathbf{\gamma}\in\Gamma}\int_{\beta_{\gamma}}q_{\mathbf{\eta}}(\mathbf{\beta},\mathbf{\gamma})[\log p(\mathcal{D}|\mathbf{\beta},\mathbf{\gamma})-\log\tfrac{q_{\mathbf{\eta}}(\mathbf{\beta},\mathbf{\gamma})}{p(\mathbf{\beta},\mathbf{\gamma})}]\text{d}\mathbf{\beta}. \tag{4.9}\] Due to the huge cost of computing gradients when \(\Gamma\) and \(\mathcal{D}\) are large, stochastic gradient methods using Monte Carlo estimates to obtain unbiased estimates of the gradients have become the standard approach for variational inference in such situations. Both the reparametrization trick and minibatching (Kingma et al., 2015; Blundell et al., 2015) are further applied. Another complication in our setting is the discrete nature of \(\mathbf{\gamma}\). Following Gal et al. (2017), we relax the Bernoulli distribution (4.7) with the _Concrete distribution_: \[\tilde{\gamma}=\gamma_{tr}(\nu,\delta;\alpha)=\text{sigmoid}((\text{logit}(\alpha)-\text{logit}(\nu))/\delta),\quad\nu\sim\text{Unif}[0,1], \tag{4.10}\] where \(\delta\) is a tuning parameter, which is selected to take some small value. In the zero limit, \(\tilde{\gamma}\) reduces to a Bernoulli(\(\alpha\)) variable. Combined with the reparametrization of the \(\beta\)'s, \[\beta=\beta_{tr}(\varepsilon;\kappa,\tau)=\kappa+\tau\varepsilon,\quad\varepsilon\sim N(0,1), \tag{4.11}\] we define the following approximation to the ELBO: \[\mathcal{L}_{VI}^{\delta}(\mathbf{\eta}):=\int_{\mathbf{\nu}}\int_{\mathbf{\varepsilon}}q_{\mathbf{\nu},\mathbf{\varepsilon}}(\mathbf{\nu},\mathbf{\varepsilon})[\log p(\mathcal{D}|\beta_{tr}(\mathbf{\varepsilon},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu},\mathbf{\alpha},\delta))-\log\frac{q_{\mathbf{\eta}}(\beta_{tr}(\mathbf{\varepsilon},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu},\mathbf{\alpha},\delta))}{p(\beta_{tr}(\mathbf{\varepsilon},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu},\mathbf{\alpha},\delta))}]\text{d}\mathbf{\varepsilon}\text{d}\mathbf{\nu}, \tag{4.12}\] where the transformations on vectors are performed elementwise.
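In code, the two reparametrizations (4.10) and (4.11) amount to a few differentiable lines; a minimal sketch, where the value of \(\delta\), the inclusion probabilities, and the parameter shapes are illustrative:

```python
import torch

def logit(x):
    return torch.log(x) - torch.log1p(-x)

def gamma_tr(alpha, delta=0.1):
    """Concrete relaxation (4.10): tends to a Bernoulli(alpha) draw as
    delta goes to zero, while staying differentiable in alpha."""
    nu = torch.rand_like(alpha)
    return torch.sigmoid((logit(alpha) - logit(nu)) / delta)

def beta_tr(kappa, tau):
    """Gaussian reparametrization (4.11): beta = kappa + tau * eps."""
    return kappa + tau * torch.randn_like(kappa)

# illustrative effective weights gamma * beta for one layer
alpha = torch.full((784, 400), 0.3, requires_grad=True)
w = gamma_tr(alpha) * beta_tr(torch.zeros(784, 400), torch.ones(784, 400))
```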
Further, since \(\text{d}\mathbf{\varepsilon}\,\text{d}\mathbf{\nu}\) does not depend on \(\mathbf{\eta}\), we can change the order of integration and differentiation when taking the gradient of \(\mathcal{L}_{VI}^{\delta}(\mathbf{\eta})\): \[\nabla_{\mathbf{\eta}}\mathcal{L}_{VI}^{\delta}(\mathbf{\eta})=\int_{\mathbf{\nu}}\int_{\mathbf{\varepsilon}}q_{\mathbf{\nu},\mathbf{\varepsilon}}(\mathbf{\nu},\mathbf{\varepsilon})\nabla_{\mathbf{\eta}}[\log p(\mathcal{D}|\beta_{tr}(\mathbf{\varepsilon},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu},\mathbf{\alpha},\delta))-\log\frac{q_{\mathbf{\eta}}(\beta_{tr}(\mathbf{\varepsilon},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu},\mathbf{\alpha},\delta))}{p(\beta_{tr}(\mathbf{\varepsilon},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu},\mathbf{\alpha},\delta))}]\text{d}\mathbf{\varepsilon}\text{d}\mathbf{\nu}. \tag{4.13}\] An unbiased estimator of \(\nabla_{\mathbf{\eta}}\mathcal{L}_{VI}^{\delta}(\mathbf{\eta})\) is then given in Proposition 1. **Proposition 1**.: _Assume \(\left(\mathbf{\nu}^{(m)},\mathbf{\varepsilon}^{(m)}\right)\sim q_{\mathbf{\nu},\mathbf{\varepsilon}}(\mathbf{\nu},\mathbf{\varepsilon})\) for all \(m=1,...,M\), and that \(S\) is a random subset of the indices \(\{1,...,n\}\) of size \(N\). Also, assume the observations to be conditionally independent. Then, for any \(\delta>0\), an unbiased estimator for the gradient of \(\mathcal{L}_{VI}^{\delta}(\mathbf{\eta})\) is given by_ \[\widetilde{\nabla}_{\mathbf{\eta}}\mathcal{L}_{VI}^{\delta}(\mathbf{\eta})=\frac{1}{M}\sum_{m=1}^{M}\Big{[}\frac{n}{N}\sum_{i\in S}\nabla_{\mathbf{\eta}}\log p(\mathbf{y}_{i}|\mathbf{x}_{i},\beta_{tr}(\mathbf{\varepsilon}^{(m)},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu}^{(m)},\mathbf{\alpha},\delta))-\nabla_{\mathbf{\eta}}\log\frac{q_{\mathbf{\eta}}(\beta_{tr}(\mathbf{\varepsilon}^{(m)},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu}^{(m)},\mathbf{\alpha},\delta))}{p(\beta_{tr}(\mathbf{\varepsilon}^{(m)},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu}^{(m)},\mathbf{\alpha},\delta))}\Big{]}. \tag{4.14}\] Proof.: From (4.13) we have that \[\frac{1}{M}\sum_{m=1}^{M}\nabla_{\mathbf{\eta}}\Big{[}\log p(\mathcal{D}|\beta_{tr}(\mathbf{\varepsilon}^{(m)},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu}^{(m)},\mathbf{\alpha},\delta))-\log\frac{q_{\mathbf{\eta}}(\beta_{tr}(\mathbf{\varepsilon}^{(m)},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu}^{(m)},\mathbf{\alpha},\delta))}{p(\beta_{tr}(\mathbf{\varepsilon}^{(m)},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu}^{(m)},\mathbf{\alpha},\delta))}\Big{]}\] is an unbiased estimate of the gradient. Further, since we assume the observations to be conditionally independent, we have \[\nabla_{\mathbf{\eta}}\log p(\mathcal{D}|\beta_{tr}(\mathbf{\varepsilon},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu},\mathbf{\alpha},\delta))=\sum_{i=1}^{n}\nabla_{\mathbf{\eta}}\log p(\mathbf{y}_{i}|\mathbf{x}_{i};\beta_{tr}(\mathbf{\varepsilon},\mathbf{\kappa},\mathbf{\tau}),\gamma_{tr}(\mathbf{\nu},\mathbf{\alpha},\delta)),\] for which an unbiased estimator can be constructed through a random subset, showing the result.
```
sample \(N\) indices uniformly from \(\{1,...,n\}\), defining \(S\)
for \(m\) in \(\{1,...,M\}\) do
  for \((k,j,l)\in\mathcal{B}\) do
    sample \(\nu_{kj}^{(l)}\sim\text{Unif}[0,1]\) and \(\varepsilon_{kj}^{(l)}\sim N(0,1)\)
  end for
end for
calculate \(\widetilde{\nabla}_{\mathbf{\eta}}\mathcal{L}_{VI}^{\delta}(\mathbf{\eta})\) according to (4.14)
update \(\mathbf{\eta}\leftarrow\mathbf{\eta}+\mathbf{A}\widetilde{\nabla}_{\mathbf{\eta}}\mathcal{L}_{VI}^{\delta}(\mathbf{\eta})\)
```
**Algorithm 1** Doubly stochastic variational inference step

Algorithm 1 describes one iteration of a doubly stochastic variational inference approach where updating is performed on the parameters, for simplicity in the case of the mean-field assumption. The set \(\mathcal{B}\) is the collection of all combinations \((k,j,l)\) in the network. The matrix of learning rates \(\mathbf{A}\) will always be diagonal, allowing for different step sizes for the parameters involved. Following Blundell et al. (2015), constraints on \(\tau_{kj}^{(l)}\) are incorporated by means of the reparametrization \(\tau_{kj}^{(l)}=\log(1+\exp(\rho_{kj}^{(l)}))\), where \(\rho_{kj}^{(l)}\in\mathcal{R}\). Typically, updating is performed over a full _epoch_, in which case the observations are divided into \(n/N\) subsets and updating is performed sequentially over all subsets. In the case of the dependence structure (4.8), \(\mathbf{\alpha}\) is sampled instead, while \(\mathbf{\xi}^{(l)}\) and the components of \(\mathbf{\Sigma}^{(l)}\) go into \(\mathbf{\eta}\). Constraints on \(\alpha_{kj}^{(l)}\) are incorporated by means of the reparametrization \(\alpha_{kj}^{(l)}=(1+\exp(-\omega_{kj}^{(l)}))^{-1}\) with \(\omega_{kj}^{(l)}\in\mathcal{R}\). Note that in the suggested algorithm, partial derivatives with respect to the marginal inclusion probabilities, as well as the mean and standard deviation terms of the weights, can be calculated by the usual backpropagation algorithm on a neural network. The algorithm assumes a known dispersion parameter \(\phi\), but can easily be generalized to include learning about \(\phi\) as well. ### Prediction Once the estimates \(\widehat{\mathbf{\eta}}\) of the parameters \(\mathbf{\eta}\) of the variational approximating distribution are obtained, we go back to the original discrete model for \(\mathbf{\gamma}\) (setting \(\delta=0\)). Then, there are several ways to proceed with predictive inference. We list these below. #### Fully Bayesian model averaging In this case, define \[\hat{p}(\Delta|\mathcal{D})=\frac{1}{R}\sum_{r=1}^{R}p(\Delta|\mathbf{\beta}^{r},\mathbf{\gamma}^{r}), \tag{4.15}\] where \((\mathbf{\beta}^{r},\mathbf{\gamma}^{r})\sim q_{\widehat{\mathbf{\eta}}}(\mathbf{\beta},\mathbf{\gamma})\). This procedure takes uncertainty in both the model structure \(\mathbf{\gamma}\) and the parameters \(\mathbf{\beta}\) into account in a formal Bayesian setting.
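A minimal sketch of this averaging step, assuming a hypothetical helper `sample_network(x)` that draws one pair \((\mathbf{\beta}^{r},\mathbf{\gamma}^{r})\) from the fitted \(q_{\widehat{\mathbf{\eta}}}\) and returns the corresponding class probabilities for a batch `x`:

```python
import torch

def bma_predict(x, sample_network, R=10):
    """Fully Bayesian model averaging (4.15): average the predictive
    distributions over R joint draws of structures and weights."""
    probs = torch.stack([sample_network(x) for _ in range(R)])
    return probs.mean(dim=0)
```

A doubt decision, as used in the experiments of Section 5, can then be implemented by classifying only when `bma_predict(x, sample_network).max(dim=1).values` exceeds the chosen threshold (0.95 there).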
A bottleneck of this approach is that we have to both sample from a huge approximate posterior distribution over parameters and models _and_ keep all of the components of \(\widehat{\mathbf{\eta}}\) stored during the prediction phase, which might be computationally and memory inefficient. #### The posterior mean based model (Wasserman, 2000) In this case we put \(\beta^{(l)}_{kj}=\hat{E}\{\beta^{(l)}_{kj}|\mathcal{D}\}\), where \[E\{\beta^{(l)}_{kj}|\mathcal{D}\}=p(\gamma^{(l)}_{kj}=1|\mathcal{D})E\{\beta^{(l)}_{kj}|\gamma^{(l)}_{kj}=1,\mathcal{D}\}\approx\hat{\alpha}^{(l)}_{kj}\hat{\kappa}^{(l)}_{kj}.\] Here \(\hat{\alpha}^{(l)}_{kj}\) is either the estimate of \(\alpha^{(l)}_{kj}\) obtained through the variational inference procedure or, in the case of a dependence structure on \(\mathbf{\alpha}\), \(E\{\alpha^{(l)}_{kj}|\mathcal{D}\}\). In the latter case, one would formally want to integrate out \(\mathbf{\alpha}\) instead, but this is not quite feasible in practice, and extra sampling is avoided. This approach specifies one dense model \(\hat{\mathbf{\gamma}}\) with no sparsification. At the same time, no sampling is needed. #### The median probability model (Barbieri et al., 2004) This approach is based on the notion of a median probability model, which has been shown to be optimal in terms of predictions in the context of simple linear models. Here, we set \(\gamma^{(l)}_{kj}=\mathrm{I}(\hat{\alpha}^{(l)}_{kj}>0.5)\) while \(\beta^{(l)}_{kj}\sim\gamma^{(l)}_{kj}N(\hat{\kappa}^{(l)}_{kj},\hat{\tau}^{2(l)}_{kj})\). A model averaging approach similar to (4.15) is then applied. Within this approach, we significantly sparsify the network and only sample from the distributions of those weights that have marginal inclusion probabilities above 0.5. #### Median probability model-based inference combined with parameter posterior mean Here, again, we set \(\gamma^{(l)}_{kj}=\mathrm{I}(\hat{\alpha}^{(l)}_{kj}>0.5)\), but now we use \(\beta^{(l)}_{kj}=\gamma^{(l)}_{kj}\hat{\kappa}^{(l)}_{kj}\). Similarly to the posterior mean-based model, no sampling is needed, but in addition we only need to store the variational parameters of \(\widehat{\mathbf{\eta}}\) corresponding to marginal inclusion probabilities above 0.5. Hence, we significantly sparsify the BNN of interest and reduce the computational cost of the predictions drastically. #### Post-training Once it is decided to make inference based on a selected model, one might take several additional iterations of the training algorithm with respect to the parameters of the model, keeping the architecture-related parameters fixed. This might give additional improvements in terms of the quality of inference, as well as make the training steps much easier, since the number of parameters is reduced dramatically. This is so since one no longer has to estimate the marginal inclusion probabilities \(\mathbf{\alpha}\). Moreover, the number of weights \(\beta^{(l)}_{jk}\) corresponding to \(\gamma^{(l)}_{jk}=1\) on which to make inference is typically significantly reduced due to the sparsity induced by using the selected median probability model. It is also possible to keep the \(\alpha^{(l)}_{jk}\)'s fixed but still allow the \(\gamma^{(l)}_{jk}\)'s to be random.
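The two median probability variants above reduce to a simple thresholding of the fitted variational parameters; a minimal sketch (names illustrative):

```python
import torch

def median_probability_weights(alpha_hat, kappa_hat, tau_hat, sample=True):
    """Median probability model: keep only weights with marginal
    inclusion probability above 0.5, then either sample them from
    the slab or plug in their posterior means."""
    gamma = (alpha_hat > 0.5).float()
    if sample:   # median probability model with sampled weights
        beta = kappa_hat + tau_hat * torch.randn_like(kappa_hat)
    else:        # combined with the parameter posterior mean
        beta = kappa_hat
    return gamma * beta
```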
#### Other model selecting criteria and alternative thresholding The median probability model is (at least in theory) not always feasible, in the sense that one needs at least one connected path across all of the layers with all of the weights linking the neurons having marginal inclusion probabilities above 0.5. One way to resolve the issue is to use the most probable model (the model with the largest marginal posterior probability) instead of the median probability model. Then, conditionally on its configuration, one can sample from the distribution of the parameters, select the mean (mode) of the parameters, or post-train the distributions of the parameters. Other model selection criteria, including DIC and WAIC, can be used in the same way as the most probable model. Another heuristic way to tackle the issue is to replace the conditioning on \(\mathrm{I}(\hat{\alpha}^{(l)}_{kj}>0.5)\) with \(\mathrm{I}(\hat{\alpha}^{(l)}_{kj}>\lambda)\), where \(\lambda\) is a tuning parameter. The latter might also improve predictive performance in case too conservative priors on the model configurations are used. At the same time, we are not addressing the methods described in this paragraph in our experiments and rather leave them for further research. ## 5 Applications In-depth studies of the suggested variational approximations in the context of _linear_ regression models have been performed earlier, including multiple synthetic and real data examples with the aims of both recovering meaningful relations and predictions (Carbonetto et al., 2012; Hernandez-Lobato et al., 2015). The results from these studies show that the approximations based on the suggested variational family distributions are reasonably precise and indeed scalable, but can be biased. We will not address toy examples and simulation-based examples in this article, and rather refer the curious reader to the very detailed and comprehensive studies in the references mentioned above, whilst we address some more complex examples here. In particular, we will address the classification of MNIST (LeCun et al., 1998) and fashion-MNIST (FMNIST, Xiao et al., 2017) images as well as the PHONEME data (Hastie et al., 1995). Both the MNIST and FMNIST datasets comprise 70 000 grayscale images (size 28x28) from 10 categories (handwritten digits from 0 to 9, and "Top", "Trouser", "Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", and "Ankle Boot" Zalando's fashion items, respectively), with 7 000 images per category. The training sets consist of 60 000 images, and the test sets have 10 000 images. For the PHONEME dataset, we have 256 covariates and 5 classes in the responses. In this dataset, we have 3 500 observations in the training set and 1 000 in the test set. The PHONEME data are extracted from the TIMIT database (TIMIT Acoustic-Phonetic Continuous Speech Corpus, NTIS, US Dept of Commerce), which is a widely used resource for research in speech recognition. This dataset was formed by selecting five phonemes for classification based on digitized speech from this database. The phonemes are transcribed as follows: "sh" as in "she", "dcl" as in "dark", "iy" as the vowel in "she", "aa" as the vowel in "dark", and "ao" as the first vowel in "water". #### Experimental design For all the datasets, we address a dense neural network with the ReLU activation function and multinomially distributed observations. For the first two examples, we have 10 classes and 784 input explanatory variables (pixels), while for the third one, we have 256 input variables and 5 classes. In all three cases, the network has 2 hidden layers with 400 and 600 neurons, respectively. Priors for the parameters and model indicators were chosen according to (3.4), with the parameters of the priors specified through an empirical Bayes approach.
The inference was performed using the suggested doubly stochastic variational inference approach (Algorithm 1) over 250 epochs with a batch size of 100. \(M\) was set to 1 to reduce computational costs and due to the fact that this choice of \(M\) is argued to be sufficient in combination with the reparametrization trick (Gal, 2016). Up to the first 20 epochs were used for pre-training of the models and parameters, as well as for empirically (i.e., empirical Bayes) estimating the hyperparameters of the priors \((a_{\psi},b_{\psi},a_{\beta},b_{\beta})\) by adding them into the computational graph. After that, the main training cycle began (with fixed hyperparameters of the priors). We used the ADAM stochastic gradient ascent optimization (Kingma and Ba, 2014) with the diagonal matrix \(\mathbf{A}\) in Algorithm 1 and the diagonal elements specified in Tables 1 and S-1 for the pre-training and main training stages. Typically, one would maximize the marginal likelihood in an empirical Bayes method, but since we do not have it available for the addressed models, its lower bound (the ELBO) was used in the pre-training stage. After 250 training epochs, post-training was performed. When post-training the parameters, either with fixed marginal inclusion probabilities or with the median probability model, we ran an additional 50 epochs of the optimization routine with \(\mathbf{A}\) specified in the bottom rows of Tables 1 and S-1. For the fully Bayesian model averaging approach, we used both \(R=1\) and \(R=10\). Even though \(R=1\) can give a poor Monte Carlo estimate of the prediction distribution, it can be of interest due to high sparsification. All the PyTorch implementations used in the experiments are available in our GitHub repository. We report results for _our model_ **LBBNN** applied with Gaussian priors (**GP**) for the slab components of the \(\beta\)'s, combined with variational inference based on mean-field (**MF**), MVN (**MVN**) and low factor MVN (**LFMVN**; the prediction results for the latter are reported in the supplementary material) dependence structures between the latent indicators. We use the combined names **LBBNN-GP-MF**, **LBBNN-GP-MVN**, and **LBBNN-GP-LFMVN**, respectively, to denote the combination of model, prior and variational distribution. In Section 4 of the supplementary materials to the paper, results from our early preprint (Hubin and Storvik, 2019) for the MNIST and FMNIST datasets, where the hyperparameters are fixed (corresponding to the setting also considered by Bai et al., 2020), are reported.
\begin{table} \begin{tabular}{l|l} \hline **Model** & **Meaning** \\ \hline BNN & Bayesian neural network \\ LBBNN & Latent binary Bayesian neural network \\ \hline **Parameters prior** & **Meaning** \\ \hline GP & Independent Gaussian priors for weights \\ MGP & Independent mixture of Gaussians prior for weights \\ HP & Independent horseshoe priors for weights \\ \hline **Inference** & **Meaning** \\ \hline MF & Mean-field variational inference \\ MVN & Multivariate Gaussian structure for the inclusion probabilities \\ LFMVN & Low factor for the covariance of MVN structure for the inclusion probabilities \\ \hline **\(\mathbf{\gamma}\)** & **Meaning** \\ \hline SIM & Inclusion of the weights is drawn from the posterior of inclusion indicators \\ ALL & All weights are used \\ MED & Weights corresponding to the median probability model are used \\ PRN & Weights not pruned by a threshold-based rule are used \\ \hline **\(R\)** & **Meaning** \\ \hline SIM & The included weights are drawn from their posterior \\ MEA & Posterior means of the weights are used \\ \hline **\(R\)** & **Meaning** \\ \hline 10 & 10 samples are drawn \\ 1 & 1 sample is drawn or posterior means are used \\ \hline **Evaluation metric** & **Meaning** \\ \hline All cl Acc & Accuracy computed for all samples in the test set \\ 0.95 threshold Acc & Accuracy computed for those samples in the test set where the maximum \\ & (across classes) model averaged predictive posterior exceeds 0.95 \\ 0.95 threshold & Number of samples in the test set where the maximum \\ Num.cl & (across classes) model averaged predictive posterior exceeds 0.95 \\ Dens. level & Fraction of weights that are used to make predictions \\ Epo. time & Average time elapsed per epoch of training \\ \hline \end{tabular} \end{table} Table 2: Explanation of the abbreviations used for the models, priors, inference schemes, prediction schemes and evaluation metrics in the reported results. \begin{table} \begin{tabular}{l c c c c c c} \hline & \(A_{\beta}\), \(A_{\rho}\) & \(A_{\xi}\) & \(A_{\omega}\) & \(A_{\Sigma}\) & \(A_{a_{\omega}}\),\(A_{b_{\omega}}\) & \(A_{a_{\beta}}\),\(A_{b_{\beta}}\) \\ \hline Pre-training & 0.00010 & 0.10000 & 0.10000 & 0.10000 & 0.00100 & 0.00001 \\ Training & 0.00010 & 0.01000 & 0.00010 & 0.00010 & 0.00000 & 0.00000 \\ Post-training & 0.00010 & 0.00000 & 0.00000 & 0.00000 & 0.00000 & 0.00000 \\ \hline \end{tabular} \end{table} Table 1: Specifications of diagonal elements of \(\mathbf{A}\) matrices for the step sizes of the optimization routines for LBBNN-GP-MF and LBBNN-GP-MVN; see Table 2 for an explanation of the abbreviations. Note that \(A_{\omega}\) is only used in LBBNN-GP-MF, while \(A_{\xi}\) and \(A_{\Sigma}\) are only used in LBBNN-GP-MVN. For the tuning parameters of LBBNN-GP-LFMVN, see Table S-1 in the supplementary materials to the paper. In addition, we also used several relevant _baselines_. In particular, we addressed a standard dense BNN with Gaussian priors and mean-field variational inference (Graves, 2011), denoted as **BNN-GP-MF**, which can be seen as a special case of our original model with all \(\gamma_{kj}^{(l)}\) fixed and equal to 1 and no prior on the variance components of the weights.
In addition, we also used several relevant _baselines_. In particular, we addressed a standard dense BNN with Gaussian priors and mean-field variational inference (Graves, 2011), denoted as **BNN-GP-MF**, which can be seen as a special case of our original model with all \(\gamma_{kj}^{(l)}\) fixed and equal to 1 and no prior on the variance components of the weights. This model is important in measuring how predictive power changes due to introducing sparsity. Furthermore, we report the results for a dense BNN with mixture priors (**BNN-MGP-MF**) with two Gaussian components of the mixtures (Blundell et al., 2015), with probabilities of 0.5 for each and variances equal to 1 and \(e^{-6}\), respectively. Additionally, we addressed two popular sparsity-inducing approaches, in particular a dense network with Concrete dropout (**BNN-GP-CMF**) (Gal et al., 2017) and a dense network with horseshoe priors (**BNN-HP-MF**) (Louizos et al., 2017). Finally, a frequentist fully connected neural network (**FNN**) (with post-hoc weight pruning) was used as a more basic baseline. We only report the results for the FNN in the supplementary materials to make the experimental design cleaner.

All of the baseline methods (including the FNN) also have 2 hidden layers with 400 and 600 neurons, respectively. They were trained for 250 epochs with an Adam optimizer (with a learning rate \(a=0.0001\) for all involved parameters) and a batch size equal to 100. For the BNN with horseshoe priors, we report statistics separately before and after ad-hoc pruning (PRN) of the weights. Post-training (when necessary) was performed for an additional 50 epochs. For the FNN, in all three experiments, we performed weight and neuron pruning (Blalock et al., 2020) to obtain the same sparsity levels as those obtained by the Bayesian approaches, making them directly comparable. Pruning of the FNN was based on removing the corresponding share of weights/neurons having the smallest magnitude (absolute value). No uncertainty was taken into consideration, and neither was structure learning considered for FNNs.

For prediction, several methods were described in Section 4.3. All of them essentially boil down to choices on how to treat the model parameters \(\mathbf{\gamma}\) and the weights \(\mathbf{\beta}\). For \(\mathbf{\gamma}\), we can either simulate (**SIM**) from the (approximate) posterior or use the median probability model (**MED**). An alternative for **BNN-HP-MF** here is the pruning method (**PRN**) applied in Louizos et al. (2017). We also consider the choice of including all weights for some of the baseline methods (**ALL**). For \(\mathbf{\beta}\), we consider either sampling from the (approximate) posterior (**SIM**) or using the posterior mean (**MEA**). Under this notation, the fully Bayesian model averaging from Section 4.3 is denoted as **SIM SIM**, the posterior mean based model as **ALL MEA**, the median probability model as **MED SIM**, and the median probability model combined with the parameter posterior mean as **MED MEA**.

We then evaluated accuracies (**Acc** - the proportion of correctly classified images). Accuracies based on the median probability model (through either \(R=1\) or \(R=10\)) and the posterior mean models were also obtained. Finally, accuracies based on post-training of the parameters with fixed marginal inclusion probabilities and post-training of the median probability model were evaluated. For the cases when model averaging is addressed (\(R=10\)), we additionally report accuracies when classification is only performed if the maximum model-averaged class probability exceeds 95%, as suggested by Posch et al. (2019). Otherwise, a doubt decision is made (Ripley, 2007, sec 2.1). In this case, we report both the accuracy within the classified images and the number of classified images.
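The prediction variants above can be sketched in a few lines. The helper names below (`sample_weights`, the `net(x, gamma, beta)` call signature) are assumptions for illustration, reusing the layer sketch from earlier; the 0.95 threshold implements the doubt option.

```python
import torch

def predict(net, x, q_gamma, R=10, median_model=False, threshold=0.95):
    probs = 0.0
    for _ in range(R):
        if median_model:
            gamma = (q_gamma > 0.5).float()      # MED: median probability model
        else:
            gamma = torch.bernoulli(q_gamma)     # SIM: sample the structure
        beta = net.sample_weights(gamma)         # SIM for the weights (hypothetical)
        probs = probs + net(x, gamma, beta).softmax(dim=-1)
    probs = probs / R                            # model averaging when R > 1
    conf, label = probs.max(dim=-1)
    doubt = conf < threshold                     # abstain instead of classifying
    return label, probs, doubt
```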
Finally, we report the overall density level (the fraction of the \(\gamma_{kj}^{(l)}\)'s equal to one within at least one of the simulations) for the different approaches. To guarantee reproducibility, summaries (medians, minimums, maximums) across 10 independent runs of the described experiment, \(s\in\{1,...,10\}\), were computed for all of these statistics. Estimates of the marginal inclusion probabilities \(\hat{p}(\gamma_{kj}^{(l)}=1|\mathcal{D})\) based on the suggested variational approximations were also computed for all of the weights. In order to compress the presentation of the results, we only present the mean marginal inclusion probability for each layer \(l\), \(\rho(\gamma^{(l)}|\mathcal{D}):=\frac{1}{p^{(l+1)}p^{(l)}}\sum_{kj}\hat{p}(\gamma_{kj}^{(l)}=1|\mathcal{D})\), summarized in Table 6. To make the abbreviations used in the reported results clearer, we provide Table 2 with their short summaries.

**MNIST.** The results reported in Tables 3 and 6 (with some additional results on LBBNN-GP-LFMVN and post-training reported in Tables S-2 and S-8 in the supplementary material) show that within our LBBNN approach: a) model averaging across different BNNs (\(R=10\)) gives significantly higher accuracy than the accuracy of a random individual BNN from the model space (\(R=1\)); b) the median probability model and the posterior mean based model also perform significantly better than a randomly sampled model; their performance is in fact on par with full model averaging; c) according to Table 6 and Figure 1, for the mean-field variational distribution, the majority of the weights have very low marginal inclusion probabilities at layers 1 and 2, while more weights have high marginal inclusion probabilities at layer 3 (although there is a significant reduction at this layer too); this resembles the structure of convolutional neural networks (CNNs), where typically one first has a set of sparse convolutional layers, followed by a few fully connected layers; unlike in CNNs, the structure of sparsification is learned automatically within our approach; d) for the MVN with full-rank structure within the variational approximation, the input layer is the most dense, followed by extreme sparsification in the second layer and moderate sparsification at layer 3; e) the MVN approach with a low-factor parametrization of the covariance matrix (results in the supplementary) only provides very moderate sparsification, not exceeding 50% of the weight parameters; f) variations of all of the performance metrics across simulations are low, showing stable behavior across the repeated experiments; g) inference with a doubt option gives almost perfect accuracy; however, this comes at the price of rejecting to classify some of the items. For the other approaches, it is also the case that h) both using the posterior mean based model and using sample averaging improves accuracy compared to a single sample from the parameter space; i) variability in the estimates of the target parameters is low for the dense BNNs with Gaussian/mixture of Gaussians priors and the BNN with horseshoe priors, and rather high for the Concrete dropout approach.
When it comes to comparing our approach to the baselines, we notice that j) dense approaches outperform sparse approaches in terms of accuracy in general; k) Concrete dropout marginally outperforms the other approaches in terms of median accuracy, but it exhibits large variance, whilst our full BNN and the compressed BNN with horseshoe priors yield stable performance across experiments; l) neither our approach nor the baselines managed to reach state-of-the-art results in terms of hard classification accuracy of predictions (Palvanov and Im Cho, 2018); m) including a 95% threshold for making a classification results in a very low number of classified cases for the horseshoe priors (it is extremely underconfident), the Concrete dropout approach seems to be overconfident when doing inference with the doubt option (resulting in lower accuracy but a larger number of decisions), and the full BNN and the BNNs with Gaussian and mixture of Gaussians priors give fewer classified cases than the Concrete dropout approach but reach significantly higher accuracy; n) this might mean that the thresholds need to be calibrated towards the specific methods; o) our approach, under both the mean-field variational approximation and the full-rank MVN structure of the variational approximation, yields the highest sparsity of weights when using the median probability model; p) post-training (results in the supplementary) does not seem to significantly improve either the predictive quality of the models or the uncertainty handling; q) all BNNs, for all considered sparsity levels on a given configuration of the network depth and widths, significantly outperform the frequentist counterpart (with the corresponding same sparsity levels) in terms of generalization error. Finally, in terms of computational time, r) as expected, FNNs were the fastest in terms of time per epoch, whilst for the Bayesian approaches we see a strong positive correlation between the number of parameters and computational time, where BNN-GP-CMF is the fastest method and LBBNN-GP-MVN is the slowest. All times were obtained whilst training our models on a GeForce RTX 2080 Ti GPU card. Having said that, it is important to notice that the speed difference between the fastest and slowest Bayesian approach is less than a factor of 3, which, given that the time is also influenced by the implementation of the different methods and a potentially different load of the server when running the experiments, may be considered quite a tolerable difference in practice.

| \(\boldsymbol{\gamma}\) | \(\boldsymbol{\beta}\) | Model-Prior-Method | \(R\) | All cl Acc | 0.95 threshold Acc | 0.95 threshold Num.cl | Dens. level | Epo. time |
|---|---|---|---|---|---|---|---|---|
| SIM | SIM | LBBNN-GP-MF | 1 | 0.968 (0.966,0.970) | - | - | 0.090 | 8.363 |
| SIM | SIM | LBBNN-GP-MF | 10 | 0.981 (0.979,0.982) | 0.999 | 8322 | 1.000 | 8.363 |
| ALL | MEA | LBBNN-GP-MF | 1 | 0.981 (0.980,0.983) | - | - | 1.000 | 8.363 |
| MED | SIM | LBBNN-GP-MF | 1 | 0.969 (0.968,0.974) | - | - | 0.079 | 8.363 |
| MED | SIM | LBBNN-GP-MF | 10 | 0.980 (0.979,0.982) | 0.999 | 8444 | 0.079 | 8.363 |
| MED | MEA | LBBNN-GP-MF | 1 | 0.981 (0.980,0.983) | - | - | 0.079 | 8.363 |
| SIM | SIM | LBBNN-GP-MVN | 1 | 0.965 (0.964,0.966) | - | - | 0.180 | 9.651 |
| SIM | SIM | LBBNN-GP-MVN | 10 | 0.978 (0.976,0.979) | 1.000 | 7818 | 1.000 | 9.651 |
| ALL | MEA | LBBNN-GP-MVN | 1 | 0.978 (0.976,0.980) | - | - | 1.000 | 9.651 |
| MED | SIM | LBBNN-GP-MVN | 1 | 0.968 (0.966,0.969) | - | - | 0.163 | 9.651 |
| MED | SIM | LBBNN-GP-MVN | 10 | 0.977 (0.975,0.979) | 1.000 | 7928 | 0.163 | 9.651 |
| MED | MEA | LBBNN-GP-MVN | 1 | 0.974 (0.972,0.976) | - | - | 0.163 | 9.651 |
| ALL | SIM | BNN-GP-MF | 1 | 0.965 (0.965,0.966) | - | - | 1.000 | 5.094 |
| ALL | SIM | BNN-GP-MF | 10 | 0.984 (0.982,0.985) | 0.999 | 8477 | 1.000 | 5.094 |
| ALL | MEA | BNN-GP-MF | 1 | 0.984 (0.982,0.985) | - | - | 1.000 | 5.094 |
| ALL | SIM | BNN-MGP-MF | 1 | 0.965 (0.964,0.967) | - | - | 1.000 | 5.422 |
| ALL | SIM | BNN-MGP-MF | 10 | 0.982 (0.981,0.983) | 0.999 | 8329 | 1.000 | 5.422 |
| ALL | MEA | BNN-MGP-MF | 1 | 0.983 (0.981,0.984) | - | - | 1.000 | 5.422 |
| SIM | SIM | BNN-GP-CMF | 1 | 0.982 (0.894,0.984) | - | - | 0.226 | 3.477 |
| SIM | SIM | BNN-GP-CMF | 10 | 0.984 (0.896,0.986) | 0.995 | 9581 | 1.000 | 3.477 |
| ALL | MEA | BNN-GP-CMF | 1 | 0.984 (0.893,0.986) | - | - | 1.000 | 3.477 |
| SIM | SIM | BNN-HP-MF | 1 | 0.964 (0.962,0.967) | - | - | 1.000 | 4.254 |
| SIM | SIM | BNN-HP-MF | 10 | 0.982 (0.981,0.983) | 1.000 | 3 | 1.000 | 4.254 |
| ALL | MEA | BNN-HP-MF | 1 | 0.966 (0.963,0.968) | - | - | 1.000 | 4.254 |
| PRN | SIM | BNN-HP-MF | 1 | 0.965 (0.962,0.969) | - | - | 0.194 | 4.254 |
| PRN | SIM | BNN-HP-MF | 10 | 0.982 (0.981,0.983) | 1.000 | 2 | 0.194 | 4.254 |
| PRN | MEA | BNN-HP-MF | 1 | 0.965 (0.963,0.968) | - | - | 0.194 | 4.254 |

Table 3: Performance metrics for the MNIST data for the compared approaches. All results are medians across 10 repeated experiments (with min and max included in parentheses). No post-training is used. For further details see Table 2.

**FMNIST.** […] this sense compared to the previous example. For the FNN, the same conclusions as those obtained for the MNIST data set are valid.

**PHONEME.** Finally, the same set of approaches, model specifications (except for having 256 input covariates and 5 classes of the responses), and tuning parameters of the algorithms as in the MNIST and FMNIST examples were used for the classification of the PHONEME data. The results a)-r) for the PHONEME data, based on Tables 5 and 6 and Tables S-4 and S-10 in the supplementary, are also overall consistent with the results from the MNIST and FMNIST experiments; however, the predictive performances of all of the approaches are better than on FMNIST yet poorer than on MNIST. All of the methods where sparsification is possible gave a lower sparsity level for this example. Yet, rather considerable sparsification is still shown to be feasible.
For the FNN, the same conclusions as those obtained for the MNIST and FMNIST data sets are valid, though the deterioration of the performance of the FNN here was less drastic. Also, as we demonstrate in Figures S-5 - S-8 in the supplementary materials, the conclusions are consistent across various width configurations of Bayesian neural networks. Also, sparsification increases with increased width for all the methods, yet the growth in sparsity is not proportional to the growth of width.

| \(\boldsymbol{\gamma}\) | \(\boldsymbol{\beta}\) | Model-Prior-Method | \(R\) | All cl Acc | 0.95 threshold Acc | 0.95 threshold Num.cl | Dens. level | Epo. time |
|---|---|---|---|---|---|---|---|---|
| SIM | SIM | LBBNN-GP-MF | 1 | 0.864 (0.861,0.866) | - | - | 0.120 | 7.969 |
| SIM | SIM | LBBNN-GP-MF | 10 | 0.883 (0.881,0.886) | 0.995 | 4946 | 1.000 | 7.969 |
| ALL | MEA | LBBNN-GP-MF | 1 | 0.882 (0.879,0.887) | - | - | 1.000 | 7.969 |
| MED | SIM | LBBNN-GP-MF | 1 | 0.867 (0.864,0.871) | - | - | 0.108 | 7.969 |
| MED | SIM | LBBNN-GP-MF | 10 | 0.883 (0.880,0.886) | 0.995 | 5025 | 0.108 | 7.969 |
| MED | MEA | LBBNN-GP-MF | 1 | 0.880 (0.877,0.886) | - | - | 0.108 | 7.969 |
| SIM | SIM | LBBNN-GP-MVN | 1 | 0.858 (0.854,0.859) | - | - | 0.156 | 9.504 |
| SIM | SIM | LBBNN-GP-MVN | 10 | 0.879 (0.874,0.880) | 0.995 | 4503 | 1.000 | 9.504 |
| ALL | MEA | LBBNN-GP-MVN | 1 | 0.875 (0.873,0.876) | - | - | 1.000 | 9.504 |
| MED | SIM | LBBNN-GP-MVN | 1 | 0.865 (0.860,0.866) | - | - | 0.129 | 9.504 |
| MED | SIM | LBBNN-GP-MVN | 10 | 0.877 (0.875,0.879) | 0.995 | 4694 | 0.129 | 9.504 |
| MED | MEA | LBBNN-GP-MVN | 1 | 0.871 (0.868,0.875) | - | - | 0.129 | 9.504 |
| ALL | SIM | BNN-GP-MF | 1 | 0.864 (0.863,0.866) | - | - | 1.000 | 5.368 |
| ALL | SIM | BNN-GP-MF | 10 | 0.893 (0.890,0.894) | 0.997 | 5089 | 1.000 | 5.368 |
| ALL | MEA | BNN-GP-MF | 1 | 0.886 (0.882,0.888) | - | - | 1.000 | 5.368 |
| ALL | SIM | BNN-MGP-MF | 1 | 0.867 (0.866,0.868) | - | - | 1.000 | 4.803 |
| ALL | SIM | BNN-MGP-MF | 10 | 0.893 (0.892,0.897) | 0.996 | 5151 | 1.000 | 4.803 |
| ALL | MEA | BNN-MGP-MF | 1 | 0.888 (0.885,0.890) | - | - | 1.000 | 4.803 |
| SIM | SIM | BNN-GP-CMF | 1 | 0.896 (0.820,0.902) | - | - | 0.094 | 3.369 |
| SIM | SIM | BNN-GP-CMF | 10 | 0.897 (0.823,0.901) | 0.942 | 8825 | 1.000 | 3.369 |
| ALL | MEA | BNN-GP-CMF | 1 | 0.896 (0.821,0.901) | - | - | 1.000 | 3.369 |
| SIM | SIM | BNN-HP-MF | 1 | 0.864 (0.863,0.869) | - | - | 1.000 | 4.613 |
| SIM | SIM | BNN-HP-MF | 10 | 0.887 (0.886,0.889) | 1.000 | 181 | 1.000 | 4.613 |
| ALL | MEA | BNN-HP-MF | 1 | 0.867 (0.861,0.868) | - | - | 1.000 | 4.613 |
| PRN | SIM | BNN-HP-MF | 1 | 0.865 (0.860,0.868) | - | - | 0.302 | 4.613 |
| PRN | SIM | BNN-HP-MF | 10 | 0.887 (0.884,0.888) | 1.000 | 179 | 0.302 | 4.613 |
| PRN | MEA | BNN-HP-MF | 1 | 0.865 (0.862,0.869) | - | - | 0.302 | 4.613 |

Table 4: Performance metrics for the FMNIST data for the Bayesian approaches to BNNs suggested in the article. For further details, see Table 2 and the caption of Table 3.
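The density levels ("Dens. level") in Tables 3-5 and the layer-wise averages in Table 6 below are simple reductions of the estimated marginal inclusion probabilities. A minimal sketch, assuming `p_gamma` is a hypothetical list of per-layer matrices of \(\hat{p}(\gamma_{kj}^{(l)}=1|\mathcal{D})\):

```python
import torch

def mean_inclusion_per_layer(p_gamma):
    # rho(gamma^(l) | D): mean marginal inclusion probability per layer (Table 6)
    return [p.mean().item() for p in p_gamma]

def density_level(p_gamma, threshold=0.5):
    # Fraction of weights kept, here under the median probability model; the
    # paper's "Dens. level" depends on the prediction method used.
    kept = sum((p > threshold).sum().item() for p in p_gamma)
    total = sum(p.numel() for p in p_gamma)
    return kept / total
```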
| \(\boldsymbol{\gamma}\) | \(\boldsymbol{\beta}\) | Model-Prior-Method | \(R\) | All cl Acc | 0.95 threshold Acc | 0.95 threshold Num.cl | Dens. level | Epo. time |
|---|---|---|---|---|---|---|---|---|
| SIM | SIM | LBBNN-GP-MF | 1 | 0.913 (0.898,0.929) | - | - | 0.371 | 0.433 |
| SIM | SIM | LBBNN-GP-MF | 10 | 0.927 (0.923,0.933) | 0.992 | 690 | 1.000 | 0.433 |
| ALL | MEA | LBBNN-GP-MF | 1 | 0.925 (0.921,0.933) | - | - | 1.000 | 0.433 |
| MED | SIM | LBBNN-GP-MF | 1 | 0.923 (0.910,0.928) | - | - | 0.307 | 0.433 |
| MED | SIM | LBBNN-GP-MF | 10 | 0.925 (0.912,0.934) | 0.984 | 757 | 0.307 | 0.433 |
| MED | MEA | LBBNN-GP-MF | 1 | 0.925 (0.913,0.932) | - | - | 0.307 | 0.433 |
| SIM | SIM | LBBNN-GP-MVN | 1 | 0.919 (0.911,0.927) | - | - | 0.255 | 0.505 |
| SIM | SIM | LBBNN-GP-MVN | 10 | 0.929 (0.927,0.935) | 0.995 | 649 | 1.000 | 0.505 |
| ALL | MEA | LBBNN-GP-MVN | 1 | 0.926 (0.918,0.931) | - | - | 1.000 | 0.505 |
| MED | SIM | LBBNN-GP-MVN | 1 | 0.925 (0.916,0.929) | - | - | 0.225 | 0.505 |
| MED | SIM | LBBNN-GP-MVN | 10 | 0.929 (0.925,0.933) | 0.995 | 668 | 0.225 | 0.505 |
| MED | MEA | LBBNN-GP-MVN | 1 | 0.924 (0.921,0.928) | - | - | 0.225 | 0.505 |
| ALL | SIM | BNN-GP-MF | 1 | 0.915 (0.907,0.919) | - | - | 1.000 | 0.203 |
| ALL | SIM | BNN-GP-MF | 10 | 0.919 (0.900,0.929) | 0.966 | 834 | 1.000 | 0.203 |
| ALL | MEA | BNN-GP-MF | 1 | 0.917 (0.901,0.922) | - | - | 1.000 | 0.203 |
| ALL | SIM | BNN-MGP-MF | 1 | 0.913 (0.910,0.925) | - | - | 1.000 | 0.208 |
| ALL | SIM | BNN-MGP-MF | 10 | 0.916 (0.912,0.926) | 0.969 | 833 | 1.000 | 0.208 |
| ALL | MEA | BNN-MGP-MF | 1 | 0.921 (0.914,0.926) | - | - | 1.000 | 0.208 |
| SIM | SIM | BNN-GP-CMF | 1 | 0.879 (0.706,0.906) | - | - | 0.509 | 0.103 |
| SIM | SIM | BNN-GP-CMF | 10 | 0.922 (0.918,0.930) | 0.965 | 187 | 1.000 | 0.103 |
| ALL | MEA | BNN-GP-CMF | 1 | 0.873 (0.712,0.904) | - | - | 1.000 | 0.103 |
| SIM | SIM | BNN-HP-MF | 1 | 0.921 (0.915,0.929) | - | - | 1.000 | 0.136 |
| SIM | SIM | BNN-HP-MF | 10 | 0.921 (0.915,0.926) | 0.895 | 19 | 1.000 | 0.136 |
| ALL | MEA | BNN-HP-MF | 1 | 0.921 (0.916,0.926) | - | - | 1.000 | 0.136 |
| PRN | SIM | BNN-HP-MF | 1 | 0.919 (0.909,0.926) | - | - | 0.457 | 0.136 |
| PRN | SIM | BNN-HP-MF | 10 | 0.919 (0.916,0.927) | 0.926 | 28 | 0.457 | 0.136 |
| PRN | MEA | BNN-HP-MF | 1 | 0.920 (0.914,0.926) | - | - | 0.457 | 0.136 |

Table 5: Performance metrics for the PHONEME data for the Bayesian approaches to BNNs suggested in the article. For further details see Table 2 and the caption of Table 3.

| | **MNIST data** | **FMNIST data** | **PHONEME data** |
|---|---|---|---|
| **LBBNN-GP-MF** | | | |
| \(\rho(\gamma^{(1)}=1\vert\mathcal{D})\) | 0.0844 (0.0835,0.0853) | 0.1323 (0.1291,0.1349) | 0.3806 (0.3764,0.3838) |
| \(\rho(\gamma^{(2)}=1\vert\mathcal{D})\) | 0.0959 (0.0942,0.0967) | 0.1005 (0.0981,0.1020) | 0.3670 (0.3641,0.3699) |
| \(\rho(\gamma^{(3)}=1\vert\mathcal{D})\) | 0.2945 (0.2808,0.3056) | 0.2790 (0.2709,0.2921) | 0.4236 (0.4053,0.4367) |
| **LBBNN-GP-MVN** | | | |
| \(\rho(\gamma^{(1)}=1\vert\mathcal{D})\) | 0.2975 (0.2928,0.2993) | 0.2461 (0.2410,0.2515) | 0.3201 (0.3142,0.3273) |
| \(\rho(\gamma^{(2)}=1\vert\mathcal{D})\) | 0.0368 (0.0363,0.0377) | 0.0392 (0.0383,0.0398) | 0.2287 (0.2235,0.2355) |
| \(\rho(\gamma^{(3)}=1\vert\mathcal{D})\) | 0.1394 (0.1311,0.1475) | 0.1462 (0.1368,0.1521) | 0.2763 (0.2632,0.2953) |
| **LBBNN-GP-LFMVN** | | | |
| \(\rho(\gamma^{(1)}=1\vert\mathcal{D})\) | 0.4474 (0.4448,0.4498) | 0.4589 (0.4565,0.4603) | 0.4973 (0.4965,0.4987) |
| \(\rho(\gamma^{(2)}=1\vert\mathcal{D})\) | 0.4525 (0.4501,0.4537) | 0.4516 (0.4493,0.4528) | 0.4972 (0.4952,0.4990) |
| \(\rho(\gamma^{(3)}=1\vert\mathcal{D})\) | 0.4815 (0.4685,0.4871) | 0.4805 (0.4654,0.4868) | 0.4979 (0.4925,0.5048) |

Table 6: Medians (with min and max in parentheses) of the average (per layer) marginal inclusion probability (see the text for the definition) for our model for the MNIST, FMNIST and PHONEME data across 10 repeated experiments.

Figure 1: An illustration of histograms of the marginal inclusion probabilities of the weights for the three layers (from top to bottom) of LBBNN-GP-MF from simulation \(s=10\) for MNIST (left) and FMNIST (right).

**Out-of-domain experiments.** Following the example of measuring in- and out-of-domain uncertainty suggested in Nitarshan (2018), we first look at the ability of the LBBNN-GP-MF approach to give confidence in its predictions by trying to classify a sample of FMNIST images with samples from the posterior predictive distribution based on the joint posterior of models and parameters trained on the MNIST dataset, and compare this to the results for a sample of images from the test set of the MNIST data. The results are reported for the joint posterior (of models and parameters) obtained in experiment run \(s=10\). As can be seen in Figure 2, the samples from LBBNN-GP-MF give highly confident predictions for the MNIST dataset, with almost no variance in the samples from the posterior predictive distribution. At the same time, the out-of-domain uncertainty, related to the samples from the posterior predictive distribution based on FMNIST data, is typically high (with some exceptions), showing low confidence of the samples from the posterior predictive distribution in this case. The reversed example of inference on FMNIST and uncertainty related to MNIST data, illustrated in Figure 3, leads to the same conclusions. More or less identical results were obtained for the LBBNN-GP-MVN and LBBNN-GP-LFMVN approaches, but they are not reported in the paper due to space constraints.

Figure 2: Uncertainty related to the in-domain test data (MNIST, left) and out-of-domain test data (FMNIST, right) based on 100 samples from the posterior predictive distribution. Yellow lines are model-averaged posterior class probabilities (in percent). Green bars mark the correct classes, blue bars other samples (with heights corresponding to an alternative estimate of class probabilities using hard classification within each of the replicates in the prediction procedure); the dashed black lines give the 95% threshold for making decisions with doubt possibilities. The original images are depicted to the left.

Figure 3: Uncertainty related to the in-domain test data (FMNIST, left) and out-of-domain test data (MNIST, right) based on 100 samples from the posterior predictive distribution. See Figure 2 for additional details.

Figure 4 shows the results of more detailed out-of-domain experiments using FMNIST data for the models trained on MNIST data, and vice versa. Following Louizos and Welling (2017), the goal now is to obtain results that are as inconclusive as possible (ideally reaching a uniform distribution across classes), corresponding to a large entropy. The plot shows the empirical cumulative distribution function (CDF) of the entropies over the classified samples, where the ideal is a CDF close to the lower right corner. Concrete dropout is overconfident, with a distribution of test classes far from uniform; the horseshoe prior-based approach (both before and after pruning) is the closest to uniform (but it was also closer to uniform for the in-domain predictions); the two other baselines are in between; and our approaches (both before pruning and after pruning with the median probability model) are on par with them, showing that they handle out-of-domain uncertainty rather well.

Figure 4: Empirical CDF for the entropy of the marginal posterior predictive distributions trained on MNIST and applied to FMNIST (left) and vice versa (right) for simulation \(s=10\). Postfix S indicates the model sparsified by an appropriate method.
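The evaluation behind Figure 4 reduces to two small computations; a sketch, assuming `probs` holds the model-averaged class probabilities with one row per test sample:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    # Entropy of the model-averaged predictive distribution per test sample
    return -np.sum(probs * np.log(probs + eps), axis=1)

def empirical_cdf(values):
    # Returns sorted entropies and their empirical CDF; curves closer to the
    # lower right correspond to high out-of-domain uncertainty (the ideal).
    x = np.sort(values)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y
```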
**More on misclassification uncertainties.** Figure 5 shows the misclassification uncertainties associated with posterior predictive sampling. One can see that for the majority of the cases in which LBBNN-GP-MF makes a misclassification, the class certainty of the predictions is relatively low, indicating that the network is unsure. Moreover, even in these cases, the truth is typically within the 95% credible interval of the predictions, which, following Posch et al. (2019), can be read off from whether fewer than 95 out of 100 samples belong to a wrong class and at least 6 out of 100 samples belong to the right one. Also notice that in many of the cases of misclassification illustrated here, even a human would have serious doubts about making a decision. Here, again, very similar results were obtained for the LBBNN-GP-MVN and LBBNN-GP-LFMVN approaches, but they are not reported in the paper due to space constraints.

Figure 5: Uncertainty based on the samples from the LBBNN-GP-MF model from the joint posterior (from simulation \(s=10\)) for 16 potentially wrongly classified (under model averaging) images for MNIST data (left) and FMNIST data (right). Yellow lines are model-averaged posterior class probabilities based on the full BNN approach (in percent). Here, green bars are the true classes, red bars the incorrectly predicted ones, and blue bars other samples; dashed black lines indicate the 95% threshold for making a decision when a doubt possibility is included. The original images are depicted to the left.

## 6 Discussion

In this paper, we have introduced the concept of Bayesian model (or structural) uncertainty in BNNs and suggested a scalable variational inference technique for approximating the joint posterior of models and the parameters of these models. Approximate posterior predictive distributions, with both models and parameters marginalized out, can be easily obtained. Furthermore, marginal inclusion probabilities give a proper probabilistic interpretation to Bayesian binary dropout and allow us to perform model (or architecture) selection. This comes at the price of having only one additional parameter per weight included. We provide image and sound classification applications of the suggested technique, showing that it both allows us to significantly sparsify neural networks without noticeable loss of predictive power and accurately handles the predictive uncertainty.

Regarding the computational costs of optimization: for the mean-field approximations, we introduce only one additional parameter \(\alpha_{kj}^{l}\) for each weight. With an underlying Gaussian structure on \(\boldsymbol{\alpha}^{(l)}\), additional parameters of the covariance matrix are further introduced. The complexity of each optimization step is proportional to the number of parameters to optimize; thus, the deterioration in terms of computational time (as demonstrated in the experiments) is not at all drastic compared to the fully connected BNN or even the FNN. For prediction, the complexity of each method is proportional to the number of "active" parameters involved, which typically benefits the sparser methods.
Regarding practical recommendations, we suggest, based on our empirical results, using LBBNN-GP-MF if one is interested in a reasonable trade-off between sparsity, predictive accuracy, uncertainty handling, and computational costs. If sparsity is not needed, the standard BNN-GP-MF and BNN-MGP-MF are sufficient.

Currently, fairly simple prior distributions for both models and parameters are used. These prior distributions are assumed independent across the parameters of the neural network, which might not always be reasonable. Alternatively, both parameter and model priors can incorporate jointly dependent structures, which can further improve the sparsification of the configurations of neural networks. When it comes to model priors with local structures and dependencies between the variables (neurons), one can mention the so-called dilution priors (George et al., 2010). These priors take care of similarities between models by down-weighting the probabilities of models with highly correlated variables. There are also numerous approaches to incorporating interdependencies between the model parameters via priors in different settings within simpler models (Smith and LeSage, 2004; Fahrmeir and Lang, 2001; Dobra et al., 2004). Obviously, in the context of inference in the joint parameter-model settings in BNNs, more research should be done on the choice of priors. Specifically, for image analysis it might be of interest to develop convolution-inducing priors, whilst for recurrent models one can think of exponentially decaying parameter priors for controlling short-long memory.

In this work, we restrict ourselves to a subclass of BNNs, defined by the inclusion-exclusion of particular weights within a given architecture. In the future, it can be of particular interest to extend the approach to the choice of the activation functions as well as the maximal depth and width of each layer of the BNN. A more detailed discussion of these possibilities and ways to proceed is given in Hubin (2018). Finally, studies of the accuracy of variational inference within these complex nonlinear models should be performed. Even within linear models, Carbonetto et al. (2012) have shown that the results can be strongly biased. Various approaches for reducing the bias in variational inference have been developed. One can either use more flexible families of variational distributions, for example by introducing auxiliary variables (Ranganath et al., 2016; Salimans et al., 2015) or normalizing flows (Louizos and Welling, 2017), or use the jackknife to remove the bias (Nowozin, 2018). We leave these opportunities for further research.

The approach suggested in this paper can only implicitly perform width selection of the architectures: our approach allows selecting the width under specific activation functions like ReLU, since for a neuron with all weights switched off, i.e.
if all \(\mathbf{\gamma}\)'s of a given neuron are put to 0, the unit is excluded from the layer, thus reducing the width. The paper does not address the depth selection of neural networks, but a similar implicit approach would be possible for the depth as long as architectures with skip-connections are allowed (i.e. all layers are connected to the responses as well as to the next layers of the network). In such a case, it would be possible to have all nodes excluded in a specific layer of depth \(k\), making only the lower-depth layers influence the responses. Then, the depth uncertainty could be inferred. Addressing depth selection is thus an interesting possibility for follow-up research. Also, addressing depth and width selection more explicitly, through assigning targeted priors as in Hubin et al. (2021), could be of interest in the future. But the ideas from Hubin et al. (2021) would impose more challenges for variational Bayes as compared to MCMC. At the same time, as shown in Hubin et al. (2021), such a procedure is much more likely to provide highly interpretable models. Further research on model priors will also be needed in the case of explicit width-depth selection.

Last but not least, we would like to discuss some concurrent work that appeared while our paper was in the submission/review process. Firstly, two theoretical papers on the posterior consistency of Bayesian variational deep learning, in general and in sparse contexts, were published (Bhattacharya and Maiti, 2021; Cherief-Abdellatif, 2020). Also, Bai et al. (2020) justified theoretically (in an asymptotic setting) the choice of prior inclusion probabilities for the model and priors from Hubin (2018) and Hubin and Storvik (2019). They used an almost identical variational inference technique to the one proposed in Hubin and Storvik (2019). The approach from Hubin and Storvik (2019) recently found an application in genetic association studies (Cheng et al., 2022). Finally, Sun et al. (2022) used MCMC for inference on the model proposed in the early version of our work, Hubin and Storvik (2019). These advancements show how rapidly the field develops and once again emphasize how timely the methodological developments on BNNs are.

#### Acknowledgments

The authors would like to acknowledge Sean Murray (Norwegian Computing Center) for comments on the language of the article and Dr. Pierre Lison (Norwegian Computing Center) for thoughtful discussions of the literature, potential applications, and technological tools. We also thank Dr. Petter Mostad (Department of Mathematical Sciences, Chalmers University of Technology and the University of Gothenburg) for valuable comments on Proposition 2. We also acknowledge the constructive comments from the reviewers and editors that we received at all stages of the publication of this article.
2301.09559
SpArX: Sparse Argumentative Explanations for Neural Networks [Technical Report]
Neural networks (NNs) have various applications in AI, but explaining their decisions remains challenging. Existing approaches often focus on explaining how changing individual inputs affects NNs' outputs. However, an explanation that is consistent with the input-output behaviour of an NN is not necessarily faithful to the actual mechanics thereof. In this paper, we exploit relationships between multi-layer perceptrons (MLPs) and quantitative argumentation frameworks (QAFs) to create argumentative explanations for the mechanics of MLPs. Our SpArX method first sparsifies the MLP while maintaining as much of the original structure as possible. It then translates the sparse MLP into an equivalent QAF to shed light on the underlying decision process of the MLP, producing global and/or local explanations. We demonstrate experimentally that SpArX can give more faithful explanations than existing approaches, while simultaneously providing deeper insights into the actual reasoning process of MLPs.
Hamed Ayoobi, Nico Potyka, Francesca Toni
2023-01-23T17:20:25Z
http://arxiv.org/abs/2301.09559v3
# SpArX: Sparse Argumentative Explanations for Neural Networks

###### Abstract

Neural networks (NNs) have various applications in AI, but explaining their decision process remains challenging. Existing approaches often focus on explaining how changing individual inputs affects NNs' outputs. However, an explanation that is consistent with the input-output behaviour of an NN is not necessarily faithful to the actual mechanics thereof. In this paper, we exploit relationships between _multi-layer perceptrons_ (MLPs) and _quantitative argumentation frameworks_ (QAFs) to create argumentative explanations for the mechanics of MLPs. Our _SpArX_ method first sparsifies the MLP while maintaining as much of the original mechanics as possible. It then translates the sparse MLP into an equivalent QAF to shed light on the underlying decision process of the MLP, producing _global and/or local explanations_. We demonstrate experimentally that SpArX can give more faithful explanations than existing approaches, while simultaneously providing deeper insights into the actual reasoning process of MLPs.

## 1 Introduction

The increasing use of black-box models like neural networks (NNs) in autonomous intelligent systems raises concerns about their fairness, reliability and safety. In order to address these concerns, the literature puts forward various explainable AI approaches to make the mechanics of NNs more transparent. This literature includes model-agnostic approaches as in [12, 13] and approaches tailored to the structure of NNs as in [14, 15]. However, these approaches fail to capture the actual mechanics of the NNs they aim to explain, and thus it is hard to evaluate how faithful they are [12, 13, 15]. [20] recently proposed regularizing the training procedure of NNs such that they can be well approximated by decision trees. While this is an interesting direction, evaluating the faithfulness of the decision trees to the NN remains a challenge.

Other recent work unearthed formal relationships between NNs in the form of multi-layer perceptrons (MLPs) and symbolic reasoning approaches like _quantitative argumentation frameworks_ (_QAFs_) [21, 16, 17] and weighted conditional knowledge bases [14]. These formal relationships indicate that such approaches may pave the way towards potentially more faithful explanations than approximate abstractions, such as decision trees. In this paper, we provide explanations for MLPs leveraging their formal relationships with QAFs in [21]. Intuitively, QAFs represent arguments and relations of attack or support between them as a graph, where nodes hold arguments and edges represent the relations. Various QAF formalisms have been studied over the years, e.g. by [1, 1, 13, 14, 15, 16, 17, 18, 19, 20, 21]. As it turns out, every MLP corresponds to a QAF of a particular form (where arguments are equipped with a base score and edges with weights) under a particular semantics (ascribing a "dialectical" strength to arguments) and, conversely, many acyclic QAFs correspond to MLPs [21]. While this relationship suggests that QAFs are well suited to create faithful explanations for MLPs, it is evident that just reinterpreting an MLP as a QAF would not give us a comprehensible explanation in general, because the QAF has the same density as the original MLP. In order to create faithful and comprehensible argumentative explanations, we propose a novel method based on a two-step process.
We first _sparsify_ the MLP, while maintaining as much of its mechanics as possible, in the spirit of clustering. Then, we translate the sparse MLP into a QAF, extending the method of [21]. We call our method _SpArX_ (standing for _Sparse Argumentative eXplanations_ for MLPs). In principle, in order to sparsify in the first step, we could just apply an existing compression method for NNs (e.g. [21]). However, existing methods are not designed for maintaining the mechanics of NNs. We thus make the following contributions:

* We propose a novel _clustering method_ for summarizing neurons based on their output-similarity. We compute parameters for the clustered neurons by aggregating the original parameters so that the output of each clustered neuron is similar to that of the neurons it summarizes.
* We propose two families of _aggregation functions_ for aggregating the parameters of neurons in a cluster: the first gives _global explanations_ (explaining the global behaviour of the MLP) and the second gives _local explanations_ (explaining the behaviour of the MLP at a target point, when the MLP is applied to a specific input).
* We conduct several experiments demonstrating the viability of our SpArX method for MLPs and its competitiveness with respect to other methods in terms of (i) conventional notions of _input-output faithfulness_ of explanations and (ii) novel notions of _structural faithfulness_, while (iii) shedding some light on the trade-off between faithfulness and comprehensibility, understood in terms of a notion of _cognitive complexity_, when generating explanations with SpArX.

Overall, we show that formal relationships between black-box machine learning models (such as NNs) and interpretable symbolic reasoning approaches (such as QAFs) can pave the way toward practical solutions for faithful and comprehensible explanations showing how the models reason.

## 2 Related Work

While MLPs are most commonly used in their fully connected form, there has been increasing interest in learning sparse NNs in recent years. However, the focus is usually not on finding an easily interpretable network structure, but rather on decreasing the risk of overfitting, memory and runtime complexity, and the associated power consumption. Existing approaches include regularization to encourage neurons with weight \(0\) to be deleted [10], pruning of edges [23], compression [17] and low-rank approximation [14]. Another related approach introduces interval NNs [1], which summarize neurons in clusters and consider interval outputs for the clustered neurons to give lower and upper bounds on the outputs for verification purposes. Our method is related to interval NNs in that we also summarize neurons in clusters. However, as opposed to [1], we cluster neurons based on their output. Furthermore, the output associated with a cluster is not an interval, but a numerical value as in a standard MLP. To compute the output, we aggregate the neurons in the cluster using different aggregation functions.

Several approaches exist for obtaining argumentative explanations for a variety of models, e.g. as recently overviewed in [11], including for NNs [1], but these are based on approximations of NNs (e.g. using Layerwise Relevance Propagation [1]), rather than summarizations as in our method, and their faithfulness is difficult to ascertain. Several existing methods, like ours, make use of symbolic reasoning approaches for providing explanations, e.g. as recently overviewed in [16].
The explanations resulting from these methods (e.g. abduction-based explanations [1], prime implicants [15], sufficient reasons [1], and majority reasons [1]) faithfully capture the input-output behaviour of the explained models rather than their mechanics, which is what our method captures for MLPs. Other methods extract logical rules as explanations for machine learning models, including NNs, e.g. as in [10] and [12], but again focus on explanations that are input-output faithful rather than on explaining the underpinning mechanics.

## 3 Preliminaries

Intuitively, a multi-layer perceptron (MLP) is a layered acyclic graph that processes its input by propagating it through the layers. Formally, we describe MLPs as follows.

**Definition 1** (Multi-Layer Perceptron (MLP)).: An _MLP_ \(\mathcal{M}\) is a tuple \((V,E,\mathcal{B},\mathcal{W},\varphi)\), where:

* \((V,E)\) is a directed graph;
* \(V=\uplus_{l=0}^{d+1}V_{l}\) consists of (ordered) layers of neurons; for \(0\leq l\leq d+1\), \(V_{l}=\{v_{l,i}\mid 1\leq i\leq|V_{l}|\}\): we call \(V_{0}\) the _input layer_, \(V_{d+1}\) the _output layer_ and \(V_{l}\), for \(1\leq l\leq d\), the \(l\)-th _hidden layer_; \(d\) is the _depth_ of the MLP;
* \(E\subseteq\bigcup_{l=0}^{d}\left(V_{l}\times V_{l+1}\right)\) is a set of edges between adjacent layers; if \(E=\bigcup_{l=0}^{d}\left(V_{l}\times V_{l+1}\right)\), then the MLP is called _fully connected_;
* \(\mathcal{B}=\{b^{1},\dots,b^{d+1}\}\) is a set of _bias_ vectors, where, for \(1\leq l\leq d+1\), \(b^{l}\in\mathbb{R}^{|V_{l}|}\);
* \(\mathcal{W}=\{W^{0},\dots,W^{d}\}\) is a set of _weight_ matrices, where, for \(0\leq l\leq d\), \(W^{l}\in\mathbb{R}^{|V_{l+1}|\times|V_{l}|}\) such that \(W^{l}_{i,j}=0\) whenever \((v_{l,j},v_{l+1,i})\not\in E\);
* \(\varphi:\mathbb{R}\rightarrow\mathbb{R}\) is an _activation function_.

In order to process an _input_ \(x\in\mathbb{R}^{|V_{0}|}\), the input layer of \(\mathcal{M}\) is initialized with \(x\). The input is then propagated forward through the MLP to generate values at each subsequent layer and ultimately an _output_ in the output layer. Formally, if the values at layer \(l\) are \(x_{l}\in\mathbb{R}^{|V_{l}|}\), then the values \(x_{l+1}\in\mathbb{R}^{|V_{l+1}|}\) at the next layer are defined by \(x_{l+1}=\varphi(W^{l}\,x_{l}+b^{l+1})\), where the activation function \(\varphi\) is applied component-wise. We let \(\mathcal{O}_{x}^{\mathcal{M}}:V\rightarrow\mathbb{R}\) denote the _output function_ of \(\mathcal{M}\), assigning to every neuron its value when the input \(x\) is given. That is, for \(v_{0,i}\in V_{0}\), we let \(\mathcal{O}_{x}^{\mathcal{M}}(v_{0,i})=x_{i}\) and, for \(l>0\), we let the _activation value_ of neuron \(v_{l,i}\) be \(\mathcal{O}_{x}^{\mathcal{M}}(v_{l,i})=\varphi(W^{l-1}\,\mathcal{O}_{x}^{\mathcal{M}}(V_{l-1})+b^{l})_{i}\), where \(\mathcal{O}_{x}^{\mathcal{M}}(V_{l-1})\) denotes the vector that is obtained from \(V_{l-1}\) by applying \(\mathcal{O}_{x}^{\mathcal{M}}\) component-wise.

Every MLP can be seen as a quantitative argumentation framework (QAF) [10]. Intuitively, QAFs are _edge-weighted_ directed graphs, where nodes represent _arguments_ and, similarly to [11], edges with negative weight represent _attack_ and edges with positive weight represent _support_ relations between arguments. Each argument is initialized with a _base score_ that assigns an a priori _strength_ to the argument. The strength of arguments is then updated iteratively based on the strength values of attackers and supporters until the values converge. In acyclic graphs corresponding to MLPs, this iterative process is equivalent to the forward propagation process in the MLPs [10]. Conceptually, strength values are from some _domain_ \(\mathcal{D}\) [1]. As we focus on (real-valued) MLPs, we will assume \(\mathcal{D}\subseteq\mathbb{R}\). The exact domain depends on the activation function, e.g. the logistic function results in \(\mathcal{D}=[0,1]\), the hyperbolic tangent in \(\mathcal{D}=[-1,1]\) and ReLU in \(\mathcal{D}=[0,\infty)\). Formally, we describe QAFs as follows.

**Definition 2** (Quantitative Argumentation Framework (QAF)).: A _QAF with domain \(\mathcal{D}\subseteq\mathbb{R}\)_ is a tuple \((\mathcal{A},E,\beta,w)\) that consists of

* a set of _arguments_ \(\mathcal{A}\) and a set of _edges_ \(E\subseteq\mathcal{A}\times\mathcal{A}\) between arguments,
* a function \(\beta:\mathcal{A}\rightarrow\mathcal{D}\) that assigns _base scores_ from \(\mathcal{D}\) to all arguments, and
* a function \(w:E\rightarrow\mathbb{R}\) that assigns _weights_ to all edges. Edges with negative/positive weights are called _attack/support_ edges, denoted by \(\mathrm{Att/Sup}\), respectively.
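Before turning to the semantics of QAFs, the forward propagation of Definition 1 can be sketched in a few lines. The list layout below is our own assumption: `weights[l]` has shape \((|V_{l+1}|,|V_{l}|)\) and `biases[l]` is the bias vector of layer \(l+1\).

```python
import numpy as np

def forward(x, weights, biases, phi=lambda z: np.maximum(0.0, z)):  # ReLU default
    # Returns O_x^M layer by layer: activations[0] is the input layer,
    # activations[-1] is the output layer.
    activations = [np.asarray(x, dtype=float)]
    for W, b in zip(weights, biases):
        activations.append(phi(W @ activations[-1] + b))
    return activations
```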
The strength of arguments is then updated iteratively based on the strength values of attackers and supporters until the values converge. In acyclic graphs corresponding to MLPs, this iterative process is equivalent to the forward propagation process in the MLPs [10]. Conceptually, strength values are from some _domain_\(\mathcal{D}\)[1]. As we focus on (real-valued) MLPs, we will assume \(\mathcal{D}\subseteq\mathbb{R}\). The exact domain depends on the activation function, e.g. the logistic function results in \(\mathcal{D}=[0,1]\), the hyperbolic tangent in \(\mathcal{D}=[-1,1]\) and ReLU in \(\mathcal{D}=[0,\infty]\). Formally, we describe QAFs as follows. **Definition 2** (Quantitative Argumentation Framework (Qafy)).: A _QAF with domain \(\mathcal{D}\subseteq\mathbb{R}\)_ is a tuple \((\mathcal{A},E,\beta,w)\) that consists of * a set of _arguments_\(\mathcal{A}\) and a set of _edges_\(E\subseteq\mathcal{A}\times\mathcal{A}\) between arguments, * a function \(\beta:\mathcal{A}\rightarrow\mathcal{D}\) that assigns _base scores_ from \(\mathcal{D}\) to all arguments, and * a function \(w:E\rightarrow\mathbb{R}\) that assigns _weights_ to all edges. Edges with negative/positive weights are called _attack/support_ edges, denoted by \(\mathrm{Att/Sup}\), respectively. The strength values of arguments are usually computed iteratively using a two-step update procedure [11]: first, an _aggregation function_\(\alpha\) aggregates the strength values of attackers and supporters; then, an _influence function_\(\iota\) adapts the base score. Examples of aggregation functions include product [1, 1], addition [1, 2] and maximum [11] and the influence function is defined accordingly to guarantee that strength values fall in the domain \(\mathcal{D}\). In this paper, we focus on the aggregation and influence function from [2], to obtain QAFs simulating MLPs with a logistic activation function [2]. The strength values of arguments are computed by the following iterative procedure: for every argument \(a\in\mathcal{A}\), we let \(s_{a}^{(0)}:=\beta(a)\) be the initial strength value; the strength values are then updated by the following two steps repeatedly for all \(a\in\mathcal{A}\) (where the auxiliary variable \(\alpha_{a}^{i}\) carries the aggregate at iteration \(i\geq 0\)): **Aggregation:**: \(\alpha_{a}^{(i+1)}:=\sum_{(b,a)\in E}w((b,a))\cdot s_{b}^{(i)}\). **Influence:**: \(s_{a}^{(i+1)}:=\varphi_{l}\big{(}\ln(\frac{\beta(a)}{1-\beta(a)})+\alpha_{a}^{ (i+1)}\big{)}\), where \(\varphi_{l}(z)=\frac{1}{1+\exp(-z)}\) is the logistic function. The _final strength_ of argument \(a\) is defined via the limit of \(s_{a}^{(i)}\), for \(i\) towards infinity. Notably, the semantics given by this notion of final strength satisfies almost all desiderata for QAF semantics perfectly [2]. ## 4 From General MLPs to QAFs Here we generalize the connection between MLPs and QAFs beyond MLPs with logistic activation functions, as follows. Assume that \(\varphi:\mathbb{R}\rightarrow\mathcal{D}\) is an activation function that is strictly monotonically increasing. Examples include logistic, hyperbolic tangent and parametric ReLU activation functions. Then \(\varphi\) is invertible and \(\varphi^{-1}:\mathcal{D}\rightarrow\mathbb{R}\) is defined. 
We can then define the update function corresponding to an MLP with such an activation function \(\varphi\) by using the same aggregation function as before and using the following influence function:

**Influence:**: \(s_{a}^{(i+1)}:=\varphi\big{(}\varphi^{-1}(\beta(a))+\alpha_{a}^{(i+1)}\big{)}\).

Note that the previous definition of influence in Section 3, from [2], is a special case because \(\ln(\frac{x}{1-x})\) is the inverse function of the logistic function \(\varphi_{l}(x)\). Note also that the popular ReLU activation function \(\varphi_{ReLU}(x)=\max(0,x)\) is not invertible because all non-positive numbers are mapped to \(0\). However, for our purpose of translating MLPs to QAFs, we can define

\[\varphi_{ReLU}^{-1}(x)=\begin{cases}x,&\text{if }x>0;\\ 0,&\text{otherwise}.\end{cases}\]

In order to translate an MLP \(\mathcal{M}\) with activation function \(\varphi\) and input \(x\) into a QAF \(Q_{\mathcal{M},x}\), we interpret every neuron \(v_{l,i}\) as an abstract argument \(A_{l,i}\). Edges in \(\mathcal{M}\) with positive/negative weights are interpreted as supports/attacks, respectively, in \(Q_{\mathcal{M},x}\). The base score of an argument \(A_{0,i}\) associated with input neuron \(v_{0,i}\) is just the corresponding input value \(x_{i}\). The base score of the remaining arguments \(A_{l,i}\) is \(\varphi(b_{i}^{l})\), where \(b_{i}^{l}\) is the bias of the associated neuron \(v_{l,i}\).

**Proposition 1**.: _Let \(\mathcal{M}\) be an MLP with an activation function \(\varphi\) that is invertible or ReLU. Then, for every input \(x\), the QAF \(Q_{\mathcal{M},x}\) satisfies \(\mathcal{O}_{x}^{\mathcal{M}}(v_{l,i})=\sigma(A_{l,i})\), where \(\sigma(A_{l,i})\) denotes the final strength of \(A_{l,i}\) in \(Q_{\mathcal{M},x}\)._
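As an illustration of this translation and the iterative strength computation, here is a minimal sketch; the graph encoding and the function names are our own assumptions, not code from the paper.

```python
import math

def final_strengths(base, edges, phi, phi_inv, iters=100):
    # base: dict argument -> base score; edges: list of (source, target, weight),
    # with negative weights encoding attacks and positive weights supports.
    s = dict(base)                                   # s^(0) = beta
    for _ in range(iters):
        agg = {a: 0.0 for a in base}                 # aggregation step
        for src, dst, w in edges:
            agg[dst] += w * s[src]
        s = {a: phi(phi_inv(base[a]) + agg[a]) for a in base}  # influence step
    return s

# With the logistic activation, phi_inv is the logit, recovering Section 3:
logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
logit = lambda p: math.log(p / (1.0 - p))
```

For the acyclic QAFs obtained from MLPs, the strengths stabilize after as many iterations as the network has layers, matching forward propagation as stated in Proposition 1.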
## 5 SpArX: Explaining MLPs with QAFs

Just translating an MLP into a QAF would not give a comprehensible explanation in general, because the QAF has the same density as the original MLP. Thus, in order to explain an MLP, we first sparsify it, in a customizable way to support different comprehensibility needs, and then translate it into a QAF. The sparsification should maintain as much of the original MLP as possible to give faithful explanations. To achieve this, roughly speaking, we exploit redundancies in the MLP by replacing neurons giving similar outputs with a single neuron that summarizes their joint effect.

Summarizing neurons in this way is a clustering problem. Formally, a clustering problem is defined by a set of inputs from an abstract space \(\mathcal{S}\) and a distance measure \(\delta:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}_{\geq 0}\). The goal is to partition \(\mathcal{S}\) into clusters \(C_{1},\ldots,C_{K}\) (where \(\mathcal{S}=\uplus_{i=1}^{K}C_{i}\)) such that the distance between points within a cluster is 'small' and the distance between points in different clusters is 'large'. Finding an optimal clustering is NP-complete in many cases [10]. Thus, we cannot expect to find an efficient algorithm that computes an optimal clustering, but we can apply standard algorithms, e.g. K-means [13], to find a good (but not necessarily optimal) clustering efficiently. In our setting, \(\mathcal{S}\) is the set \(V_{l}\) of neurons in any given layer \(0<l<d+1\), and the distance between two neurons can be defined as the difference between their outputs for the inputs in a given dataset \(\Delta\) (e.g. the training dataset):

\[\delta(v_{l,i},v_{l,j})=\sqrt{\sum_{x\in\Delta}(\mathcal{O}_{x}^{\mathcal{M}}(v_{l,i})-\mathcal{O}_{x}^{\mathcal{M}}(v_{l,j}))^{2}}. \tag{1}\]

After clustering, we have a partitioning \(\mathcal{P}=\uplus_{l=1}^{d}\mathcal{P}_{l}\) of (the hidden layers of) our MLP \(\mathcal{M}\), where \(\mathcal{P}_{l}=\{C_{1}^{l},\ldots,C_{K_{l}}^{l}\}\) is the clustering of the \(l\)-th layer, that is, \(V_{l}=\uplus_{i=1}^{K_{l}}C_{i}^{l}\). We use the clustering to create a corresponding _clustered MLP_ \(\mu\) whose neurons correspond to clusters in the original \(\mathcal{M}\). We call these neurons _cluster-neurons_ and denote them by \(v_{C}\), where \(C\) is the associated cluster. The graphical structure of \(\mu\) is as follows.

**Definition 3** (Graphical Structure of Clustered MLP).: Given an MLP \(\mathcal{M}\) and a clustering \(\mathcal{P}=\uplus_{l=1}^{d}\mathcal{P}_{l}\) of \(\mathcal{M}\), the _graphical structure of the corresponding clustered MLP_ \(\mu\) is a directed graph \((V^{\mu},E^{\mu})\) such that

* \(V^{\mu}=\uplus_{l=0}^{d+1}V^{\mu}_{l}\) consists of (ordered) layers of cluster-neurons such that: 1. the input layer \(V^{\mu}_{0}\) consists of a singleton cluster-neuron \(v_{\{v_{0,i}\}}\) for every input neuron \(v_{0,i}\in V_{0}\); 2. the \(l\)-th hidden layer of \(\mu\) (for \(0<l<d+1\)) consists of one cluster-neuron \(v_{C}\) for every cluster \(C\in\mathcal{P}_{l}\); 3. the output layer \(V^{\mu}_{d+1}\) consists of a singleton cluster-neuron \(v_{\{v_{d+1,i}\}}\) for every output neuron \(v_{d+1,i}\in V_{d+1}\);
* \(E^{\mu}=\bigcup_{l=0}^{d}\big{(}V^{\mu}_{l}\times V^{\mu}_{l+1}\big{)}\).

**Example 1**.: For illustration, consider the MLP in Figure 1.a, trained to approximate the XOR function from the dataset \(\Delta=\{(0,0),(0,1),(1,0),(1,1)\}\) with target outputs \(0,1,1,0\), respectively. The activation values of the hidden neurons for the four input pairs are \((0,0,0,0),(1.7,0,1.8,0),(0,2.3,0,1.5),(0,0,0,0)\), respectively. Applying \(K\)-means clustering with \(\delta\) as in Eq. 1 and \(K=2\) for the hidden layer results in two clusters \(C_{1},C_{2}\), indicated by rectangles in the figure.
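A sketch of this per-layer sparsification step, combined with the simple averaging that Definition 5 below formalizes (function names and the activation-matrix layout are our own assumptions): representing each neuron by its column of activations makes the distance in Eq. 1 the ordinary Euclidean distance, so off-the-shelf K-means applies directly.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_layer(acts, k):
    # acts: (n_inputs, n_neurons) activation matrix of one hidden layer over
    # the dataset; K-means on its columns clusters neurons by output profile.
    return KMeans(n_clusters=k, n_init=10).fit_predict(acts.T)

def average_aggregation(W, b, labels_in, labels_out, k_in, k_out):
    # W: (n_out, n_in) weights between two layers; b: (n_out,) biases of the
    # outgoing layer; labels_*: cluster index of every original neuron.
    W_mu = np.zeros((k_out, k_in))
    b_mu = np.zeros(k_out)
    for j in range(k_out):
        members = labels_out == j
        b_mu[j] = b[members].mean()                    # average the biases
        for i in range(k_in):
            # Sum over incoming cluster members, average over outgoing members,
            # anticipating the average edge aggregation of Definition 5.
            W_mu[j, i] = W[np.ix_(members, labels_in == i)].sum(axis=1).mean()
    return W_mu, b_mu
```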
We define global and local explanations for MLPs by translating their corresponding clustered MLPs into QAFs. The clustered MLPs for global and local explanations share the same graphical structure but differ in the parameters of the cluster-neurons, that is, (i) the biases of cluster-neurons and (ii) the weights for edges between cluster-neurons. We define these parameters in terms of _aggregation functions_, specifically a _bias aggregation function_ \(\mathrm{Agg}^{b}:\mathcal{P}\rightarrow\mathbb{R}\), mapping clusters to biases, and an _edge aggregation function_ \(\mathrm{Agg}^{e}:\mathcal{P}\times\mathcal{P}\rightarrow\mathbb{R}\cup\{\bot\}\), mapping pairs of clusters to weights if the pairs correspond to edges in \(\mu\), or to \(\bot\) otherwise. Given any concrete such aggregation functions (as defined later), the parameters of \(\mu\) can be defined as follows.

**Definition 4** (Parameters of Clustered MLP).: Given an MLP \(\mathcal{M}=(V,E,\mathcal{B},\mathcal{W},\varphi)\), let \((V^{\mu},E^{\mu})\) be the graphical structure of the corresponding clustered MLP \(\mu\). Then, for bias and edge aggregation functions \(\mathrm{Agg}^{b}\) and \(\mathrm{Agg}^{e}\), respectively, \(\mu\) is \((V^{\mu},E^{\mu},\mathcal{B}^{\mu},\mathcal{W}^{\mu},\varphi)\) with _parameters_ \(\mathcal{B}^{\mu},\mathcal{W}^{\mu}\) as follows:

* for every cluster-neuron \(v_{C}\in V^{\mu}\), the bias (in \(\mathcal{B}^{\mu}\)) of \(v_{C}\) is \(\mathrm{Agg}^{b}(C)\);
* for every edge \((v_{C_{1}},v_{C_{2}})\in E^{\mu}\), the weight (in \(\mathcal{W}^{\mu}\)) of the edge is \(\mathrm{Agg}^{e}((C_{1},C_{2}))\).

### Sparse Argumentative Global Explanations

We consider the following aggregation functions. As we explain in the supplementary material (SM), they minimize the deviation (with respect to the least-squares error) between the bias and weights of the cluster-neuron and those of the neurons contained in the cluster.

**Definition 5** (Global Aggregation Functions).: The _average bias and edge aggregation functions_ are, respectively:

\[\mathrm{Agg}^{b}(C)=\frac{1}{|C|}\sum_{v_{l,i}\in C}b^{l}_{i};\]

\[\mathrm{Agg}^{e}((C_{1},C_{2}))=\sum_{v_{l,i}\in C_{1}}\frac{1}{|C_{2}|}\sum_{v_{l+1,j}\in C_{2}}W^{l}_{j,i}.\]

The average bias aggregation function simply averages the biases of the neurons in the cluster. Intuitively, the weight of an edge between cluster-neurons \(v_{C_{1}}\) and \(v_{C_{2}}\) has to capture the effects of all neurons summarized in \(C_{1}\) on the neurons summarized in \(C_{2}\). The neurons in \(C_{1}\) all affect every neuron in \(C_{2}\), therefore their effects have to be summed up. As \(v_{C_{2}}\) acts as a replacement of all neurons in \(C_{2}\), it has to aggregate their activation. We achieve this aggregation by averaging again. The following example illustrates global explanations drawn from clustered MLPs, with the average aggregation functions, via their understanding as QAFs.

**Example 2**.: Figure 1.b shows the clustered MLP for our XOR example from Figure 1.a. The corresponding QAF can be visualised as in Figure 1.c, where attacks are given in red and supports in green, and the thickness of the edges reflects their weight. To better visualize the role of each cluster-neuron, we can use a word-cloud representation as in Figure 1.d (showing, e.g., that \(x_{0}\) and \(x_{1}\) play a negative and positive role, respectively, towards \(C_{1}\), which supports the output). The word-cloud representation gives insights into the reasoning of the MLP (with the learned rule \((\overline{x_{0}}\wedge x_{1})\vee(x_{0}\wedge\overline{x_{1}})\)).

Figure 1: a) MLP for XOR, with cluster-neurons \(C_{1}\) and \(C_{2}\). b) Clustered MLP. c) Global explanation as a QAF. d) Word-cloud representation of the hidden cluster-neurons.

### Sparse Argumentative Local Explanations

While global explanations attempt to faithfully explain the behaviour of the MLP on all inputs, our local explanations focus on the behaviour in the neighborhood of an input \(x\) from the dataset \(\Delta\), similarly to LIME [Ribeiro _et al._ 2016]. To do so, we generate random neighbors of \(x\) to obtain a _sample dataset_ \(\Delta^{\prime}\), and weigh them with an exponential kernel from LIME [Ribeiro _et al._ 2016], assigning lower weight to a sample \(x^{\prime}\in\Delta^{\prime}\) that is further away from the target \(x\):

\[\pi_{x^{\prime},x}=\exp(-D(x^{\prime},x)^{2}/\sigma^{2})\]

where \(D\) is the Euclidean distance function and \(\sigma\) is the width of the exponential kernel. We aggregate biases as before but replace the edge aggregation function with the following.
**Definition 6** (Local Edge Aggregation Function).: The _local edge aggregation function_ with respect to the _input_ \(x\) is \[\mathrm{Agg}^{e}_{x}((C_{1},C_{2}))=\sum_{v_{l,i}\in C_{1}}\frac{\sum_{x^{\prime}\in\Delta^{\prime}}\pi_{x^{\prime},x}\,\mathcal{O}_{x^{\prime}}^{\mathcal{M}}(v_{l,i})}{\sum_{x^{\prime}\in\Delta^{\prime}}\pi_{x^{\prime},x}\,\mathcal{O}_{x^{\prime}}^{\mu}(v_{C_{1}})}\cdot\frac{1}{|C_{2}|}\sum_{v_{l+1,j}\in C_{2}}W^{l}_{j,i},\] where \(\mathcal{O}_{x^{\prime}}^{\mathcal{M}}(v_{l,i})\) is the activation value of the neuron \(v_{l,i}\) in the original MLP and \(\mathcal{O}_{x^{\prime}}^{\mu}(v_{C_{1}})\) is the activation value of the cluster-neuron \(C_{1}\) in the clustered MLP. Note that, by this definition, the edge weights are computed layer by layer from input to output. ## 6 Desirable Properties of Explanations In order to evaluate SpArX, we consider three measures for assessing faithfulness and comprehensibility of explanations. In this section, we assume as given an MLP \(\mathcal{M}\) of depth \(d\) and a corresponding clustered MLP \(\mu\). To begin with, we consider a _faithfulness_ measure inspired by the notion of fidelity considered for LIME [Ribeiro _et al._ 2016], based on measuring the _input-output_ difference between the original model (in our case, \(\mathcal{M}\)) and the substitute model (in our case, the clustered MLP/QAF). **Definition 7** (Input-Output Unfaithfulness).: The _local input-output unfaithfulness_ of \(\mu\) to \(\mathcal{M}\) with respect to _input_ \(x\) and _dataset_ \(\Delta\) is \[\mathcal{L}^{\mathcal{M}}(\mu)=\sum_{x^{\prime}\in\Delta}\pi_{x^{\prime},x}\sum_{v\in V_{d+1}}(\mathcal{O}_{x^{\prime}}^{\mathcal{M}}(v)-\mathcal{O}_{x^{\prime}}^{\mu}(v))^{2}.\] The _global input-output unfaithfulness_ of \(\mu\) to \(\mathcal{M}\) with respect to dataset \(\Delta\) is \[\mathcal{G}^{\mathcal{M}}(\mu)=\sum_{x^{\prime}\in\Delta}\sum_{v\in V_{d+1}}(\mathcal{O}_{x^{\prime}}^{\mathcal{M}}(v)-\mathcal{O}_{x^{\prime}}^{\mu}(v))^{2}.\] The lower the input-output unfaithfulness of the clustered MLP \(\mu\), the more faithful \(\mu\) is to the original MLP. The input-output unfaithfulness measures deviations in the input-output behaviour of the substitute model, but, since clustered MLPs maintain much of the structure of the original MLPs, we can define a more fine-grained notion of _structural faithfulness_ by comparing the outputs of the individual neurons in the MLP to the output of the cluster-neurons summarizing them in the clustered MLP. **Definition 8** (Structural Unfaithfulness).: Let \(K_{l}\) be the number of clusters at hidden layer \(l\) in \(\mu\) (\(0{<}l{\leq}d\)) and \(K_{d+1}\) be the number of output neurons (in \(\mu\) and \(\mathcal{M}\)). Let \(K_{l,j}\) be the number of neurons in the \(j\)th cluster-neuron \(C_{l,j}\) (\(0{<}l\leq d+1\), with \(K_{d+1,j}{=}1\)). Then, the _local structural unfaithfulness_ of \(\mu\) to \(\mathcal{M}\) with respect to _input_ \(x\) and _dataset_ \(\Delta\) is: \[\mathcal{L}_{s}^{\mathcal{M}}(\mu)=\sum_{x^{\prime}\in\Delta}\pi_{x^{\prime},x}\sum_{l=1}^{d+1}\sum_{j=1}^{K_{l}}\sum_{v_{l,i}\in C_{l,j}}(\mathcal{O}_{x^{\prime}}^{\mathcal{M}}(v_{l,i}){-}\mathcal{O}_{x^{\prime}}^{\mu}(C_{l,j}))^{2}.\] The _global structural unfaithfulness_ \(\mathcal{G}_{s}^{\mathcal{M}}(\mu)\) is defined analogously by removing the similarity terms \(\pi_{x^{\prime},x}\). The lower the structural unfaithfulness of the clustered MLP \(\mu\), the more structurally faithful \(\mu\) is to the original MLP.
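For concreteness, the two unfaithfulness measures can be computed along the following lines (an illustrative sketch with our own function names, not the authors' released code; `pi` is the locality kernel \(\pi_{x^{\prime},x}\), set to `None` for the global variants):

```python
import numpy as np

def io_unfaithfulness(out_M, out_mu, pi=None):
    # Def. 7: squared deviation between the outputs of the original MLP
    # (out_M) and the clustered MLP (out_mu), both of shape
    # (num_samples, num_outputs); weighted by pi for the local variant.
    sq = ((out_M - out_mu) ** 2).sum(axis=1)
    return float(sq.sum() if pi is None else (pi * sq).sum())

def structural_unfaithfulness(acts_M, acts_mu, assign, pi=None):
    # Def. 8: per-layer deviation between each neuron's activation and
    # the activation of the cluster-neuron summarizing it (the output
    # layer uses singleton clusters). assign[l][i] is the cluster index
    # of neuron i at layer l; acts_M[l]: (num_samples, num_neurons_l);
    # acts_mu[l]: (num_samples, num_clusters_l).
    total = 0.0
    for aM, amu, asn in zip(acts_M, acts_mu, assign):
        sq = ((aM - amu[:, asn]) ** 2).sum(axis=1)
        total += sq.sum() if pi is None else (pi * sq).sum()
    return float(total)
```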
Note that our notion of structural faithfulness is different from the notions of structural descriptive accuracy by [Albini _et al._ 2022]: they characterise bespoke explanations, defined therein, of probabilistic classifiers equipped with graphical structures and cannot be used in place of our notion, tailored to local and global explanations. Finally, we consider the _cognitive complexity_ of an explanation based on its size, inspired by the notion of cognitive tractability in [Cyras _et al._ 2019]. In our case, we use the number of cluster-neurons/arguments as a measure. **Definition 9** (Cognitive Complexity).: Let \(K_{l}\) be the number of clusters at hidden layer \(l\) in \(\mu\) (\(0<l\leq d\)). Then, the _cognitive complexity_ of \(\mu\) is defined as \[\Omega(\mu)=\prod_{0<l<d+1}K_{l}.\] Note that there is a trade-off between faithfulness and cognitive complexity in SpArX. By reducing the number of cluster-neurons, we reduce cognitive complexity. However, this also results in a higher variance in the neurons that are summarized in the clusters, so the faithfulness of the explanation can suffer. We will explore this trade-off experimentally next. Finally, note that other properties of explanations by symbolic approaches, notably by [Amgoud and Ben-Naim 2022], are unsuitable for our mechanistic explanations as QAFs. Indeed, these properties characterize the input-output behaviour of classifiers, rather than their mechanics. ## 7 Experiments We conducted four sets of experiments to evaluate SpArX with respect to (i) the trade-off between its sparsification and its ability to maintain faithfulness (Section 7.1 for global and Section 7.2 for local explanations), (ii) SpArX's scalability (Section 7.3), and (iii) the trade-off between faithfulness and cognitive complexity (Section 7.4). We used four datasets for classification: iris1 with 150 instances, 4 continuous features and 3 classes; breast cancer2 with 569 instances, 30 continuous features and 2 classes; COMPAS [Larson _et al._ 2016] with 11,000 instances and 52 categorical features, to classify \(two\_year\_recid\); and forest covertype3 [Blackard and Dean 1999] with 581,012 instances, 54 features (10 continuous, 44 binary), and 7 classes. Footnote 1: [https://archive.ics.uci.edu/ml/datasets/iris](https://archive.ics.uci.edu/ml/datasets/iris) Footnote 2: [https://archive.ics.uci.edu/ml/datasets/cancer](https://archive.ics.uci.edu/ml/datasets/cancer) We used the first three datasets to evaluate the (input-output and structural) unfaithfulness of the global and local explanations generated by SpArX. We then used the last dataset, which is considerably larger and requires deeper MLPs with varying architectures, to evaluate the scalability of SpArX. Finally, we used COMPAS to assess cognitive complexity. For the experiments with the first three datasets we used MLPs with 2 hidden layers and 50 hidden neurons each, whereas for the experiments with the last we used 1-5 hidden layers with 100, 200 or 500 neurons. For all experiments, we used the RELU activation function for the hidden neurons and the softmax activation function for the output neurons. We give classification performances for all MLPs and average run-times for generating local explanations in the SM.
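To ground the experimental setup, the clustered explainer evaluated below can be sketched in a few lines (an illustrative reconstruction with our own names such as `cluster_layer`, not the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_layer(acts, k):
    # acts: (num_samples, num_neurons) activations of one hidden layer
    # over the dataset; K-means on the transposed matrix uses exactly
    # the Euclidean distance of Eq. 1 between neuron activation vectors.
    km = KMeans(n_clusters=k, n_init=10).fit(acts.T)
    return [np.flatnonzero(km.labels_ == c) for c in range(k)]

def aggregate_params(W, b, clusters_in, clusters_out):
    # Average aggregation (Definition 5). W: (out_dim, in_dim) weight
    # matrix of the original MLP between two layers; b: (out_dim,) biases.
    b_mu = np.array([b[C].mean() for C in clusters_out])
    # Weight from cluster C1 to C2: sum over neurons in C1 of the mean
    # incoming weight of the neurons in C2.
    W_mu = np.array([[W[np.ix_(C2, C1)].mean(axis=0).sum()
                      for C1 in clusters_in] for C2 in clusters_out])
    return W_mu, b_mu
```

Choosing `k` per hidden layer is exactly the compression-ratio choice discussed next.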
When experimenting with SpArX, one needs to choose the number of clusters/cluster-neurons at each hidden layer: we do so by specifying a _compression ratio_ (for example, a compression ratio of 0.5 amounts to obtaining half as many cluster-neurons as original neurons). ### Global Faithfulness (Comparison to HAP) Since SpArX essentially compresses an MLP to construct a QAF, one may ask how it compares to existing compression approaches.4 To assess the faithfulness of our global explanations, we compared SpArX's clustering approach to the state-of-the-art compression method Hessian Aware Pruning (HAP) [20], which uses relative Hessian traces to prune insensitive parameters in NNs. We measured both input-output and structural unfaithfulness of SpArX and HAP to the original MLP, using the result of HAP compression in place of \(\mu\) when applying Definitions 7 and 8 for comparison. Footnote 4: Whereas existing NN compression methods typically retrain after compression, we do not, as we want to explain the original NN. Input-Output Faithfulness. Table 1 shows the input-output unfaithfulness of the global explanations (\(\mathcal{G}^{\mathcal{M}}(\mu)\) in Def. 7) obtained from SpArX and HAP using our three chosen datasets and different compression ratios. The unfaithfulness of global explanations in SpArX is lower than HAP's, especially when the compression ratio is high. Note that this does not mean that SpArX is a better compression method, but that the compression method in SpArX is better for our purposes (that is, compressing the MLP while keeping its mechanics). Structural Faithfulness. Table 2 gives the structural global unfaithfulness (\(\mathcal{G}^{\mathcal{M}}_{s}(\mu)\) in Def. 8) for SpArX and HAP, on the three chosen datasets, using different compression ratios. Our method has a much lower structural unfaithfulness than HAP by preserving activation values close to the original model. ### Local Faithfulness (Comparison to LIME) In order to evaluate the local input-output unfaithfulness of SpArX (\(\mathcal{L}^{\mathcal{M}}(\mu)\) in Def. 7), we compare SpArX and LIME [13]5, which approximates a target point locally on an interpretable substitute model.6 Table 3 shows the input-output unfaithfulness of the local explanations for LIME and SpArX. We used the same sampling approach as LIME [13]. We averaged the unfaithfulness measure for all test examples. The results show that the local explanations produced by our approach are more input-output faithful to the original model. Thus, basing local explanations on keeping the MLP mechanics helps also with their input-output faithfulness. Footnote 5: [https://github.com/marcotcr/lime](https://github.com/marcotcr/lime) Footnote 6: We used ridge regression, suitable for tabular data in LIME. We used the substitute model as \(\mu\) when applying Def. 7 to LIME. ### Scalability To evaluate the scalability of SpArX, we measured its input-output faithfulness on MLPs of increasing complexity. We compared it to LIME [13] using forest covertype, a sufficiently large dataset, to test various MLP architectures with different sizes.
We have trained 15 MLPs with varying numbers of hidden layers (\(\#\)Layers) and different numbers of hidden neurons (\(\#\)Neurons) at each hidden layer (see details in the SM). Table 4 compares the input-output unfaithfulness of the local explanations of SpArX using \(80\%\) compression ratio7 with LIME, all averaged over the test set. The result confirms that SpArX explanations are scalable to different MLP architectures with different sizes and large datasets. Footnote 7: For experiments with lower compression ratios see the SM. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Compression Ratio} & \multicolumn{3}{c}{Datasets} \\ \cline{3-5} & & Iris & Cancer & COMPAS \\ \hline HAP & 0.2 & 0.05 & 0.48 & 0.02 \\ SpArX & 0.2 & **0.00** & **0.02** & **0.00** \\ \hline HAP & 0.4 & 0.23 & 0.53 & 0.11 \\ SpArX & 0.4 & **0.00** & **0.05** & **0.00** \\ \hline HAP & 0.6 & 0.23 & 0.58 & 0.20 \\ SpArX & 0.6 & **0.00** & **0.10** & **0.00** \\ \hline HAP & 0.8 & 0.28 & 1.00 & 0.26 \\ SpArX & 0.8 & **0.00** & **0.21** & **0.00** \\ \hline \hline \end{tabular} \end{table} Table 1: Global input-output unfaithfulness of sparse MLPs generated by HAP vs our SpArX method. (Best results in **bold**) \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Datasets} \\ \cline{2-4} & Iris & Cancer & COMPAS \\ \hline LIME & 0.3212 & 0.1623 & 0.0224 \\ SpArX (0.6) & **0.0257** & **0.0055** & **0.0071** \\ SpArX (0.8) & **0.0707** & **0.0156** & **0.0083** \\ \hline \hline \end{tabular} \end{table} Table 3: Local input-output unfaithfulness of LIME vs our SpArX method (with different compression ratios, in brackets). ### Cognitive Complexity The cognitive complexity of SpArX (Def. 9) is dependent on the number of clusters at each layer. A lower number of clusters leads to a more interpretable explanation at the cost of achieving lower (structural) faithfulness. Figure 2(a) shows a _global explanation8_ of an MLP for the COMPAS dataset, with one hidden layer and 20 neurons in the hidden layer, with \(20\%\) compression ratio and pruning edges with low weights. Note that pruning is only done here for visualization. This explanation is hardly interpretable for a user. The classifications of the clustered MLP agree with those of the original MLP on \(98.32\%\) of instances. Figure 2(b) shows the global explanation of the same MLP but with \(85\%\) compression ratio and pruning the edges with low weights. This global explanation is more comprehensible for humans since there are fewer neurons in the clustered MLP. The classifications of the clustered MLP agree with those of the original model on \(94.60\%\) of instances. Unlike shallow input-output explanations, we can see the role of each hidden neuron in the proposed method. The output \(O_{0}\) is _two_year_recid_. There are two sets of hidden cluster-neurons, namely an attacker (\(C_{1}\)) and two supporters (\(C_{2}\) and \(C_{3}\)). \(C_{1}\) attacks the output, which means the criminal will not recommit a crime in a two-year period. Three features are supporting \(C_{1}\) and two features are attacking it. The attacking features also support \(C_{3}\). This means that they strengthen the support by \(C_{3}\) and weaken the attack by \(C_{1}\). Therefore, _priors_count_10 priors or more_ and _is_recid_Yes_ both strongly affect the output. Indeed, looking at the COMPAS dataset, more than \(99\%\) of criminals that have these two features recommitted the crime in a two-year period.
\(C_{2}\) and \(C_{3}\) are supporting the output. \(C_{2}\) is only supported by the _is_violent_recid_Yes_ feature. This means that if a criminal has a violent recidivism record, (s)he will probably recommit a crime after two years. Checking the COMPAS dataset, this interpretation is \(100\%\) correct. These kinds of interpretations are beyond the shallow input-output explanations offered by model-agnostic explanation methods such as LIME. \(C_{3}\) is supported by several features and is attacked by one feature. Looking at this argumentative global explanation, one can understand the effect of each feature as well as of each hidden neuron on the output. Footnote 8: We give an example of a local explanation in the SM. ## 8 Conclusion We introduced SpArX, a novel method for generating sparse argumentative explanations for MLPs. In contrast to shallow input-output explainers like LIME, SpArX focuses on maintaining structural similarity to the original MLP in order to give faithful explanations, while allowing them to be tailored to the cognitive needs of users. Our experimental results show that the explanations by SpArX are more (input-output) faithful to the original model than LIME. We have also compared SpArX with a state-of-the-art NN compression technique called HAP, showing that SpArX is more faithful to the original model. Further, our explanations are more _structurally_ faithful to the original model by providing deeper insights into the mechanics thereof. Future research includes extending SpArX to other types of NNs, e.g. CNNs, as well as furthering it to cluster neurons across hidden layers. It would also be interesting to explore whether SpArX could be extended to exploit formal relationships between NNs and other symbolic approaches, e.g. in [11]. Further, it would be interesting to explore formalizations such as in [10] for characterizing uncertainty as captured by SpArX. \begin{table} \begin{tabular}{c c c c c} \hline \multirow{2}{*}{\(\#\)Layers} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{\(\#\)Neurons} \\ \cline{3-5} & & 100 & 200 & 500 \\ \hline 1 & LIME & 0.2375 & 0.2919 & 0.3123 \\ 1 & SpArX & **0.0000** & **0.0018** & **0.0000** \\ \hline 2 & LIME & 0.2509 & 0.2961 & 0.3638 \\ 2 & SpArX & **0.0019** & **0.0015** & **0.0034** \\ \hline 3 & LIME & 0.3130 & 0.3285 & 0.3127 \\ 3 & SpArX & **0.0028** & **0.0026** & **0.0000** \\ \hline 4 & LIME & 0.3395 & 0.3459 & 0.3243 \\ 4 & SpArX & **0.0001** & **0.0049** & **0.0000** \\ \hline 5 & LIME & 0.3665 & 0.3178 & 0.3288 \\ 5 & SpArX & **0.0030** & **0.0064** & **0.0000** \\ \hline \end{tabular} \end{table} Table 4: Evaluating scalability of SpArX (forest covertype dataset): local input-output unfaithfulness of SpArX (with \(80\%\) compression ratio) and LIME using various MLPs with different numbers of hidden layers (\(\#\)Layers) and neurons (\(\#\)Neurons). Figure 2: Global explanations by SpArX of an MLP with \(20\%\) and \(85\%\) compression ratios for the COMPAS dataset.
2309.01069
Separable Hamiltonian Neural Networks
Hamiltonian neural networks (HNNs) are state-of-the-art models that regress the vector field of a dynamical system under the learning bias of Hamilton's equations. A recent observation is that embedding a bias regarding the additive separability of the Hamiltonian reduces the regression complexity and improves regression performance. We propose separable HNNs that embed additive separability within HNNs using observational, learning, and inductive biases. We show that the proposed models are more effective than the HNN at regressing the Hamiltonian and the vector field. Consequently, the proposed models predict the dynamics and conserve the total energy of the Hamiltonian system more accurately.
Zi-Yu Khoo, Dawen Wu, Jonathan Sze Choong Low, Stéphane Bressan
2023-09-03T03:54:43Z
http://arxiv.org/abs/2309.01069v4
# Separable Hamiltonian Neural Networks ###### Abstract The modelling of dynamical systems from discrete observations is a challenge faced by modern scientific and engineering data systems. Hamiltonian systems are one such fundamental and ubiquitous class of dynamical systems. Hamiltonian neural networks are state-of-the-art models that regress the Hamiltonian of a dynamical system, in an unsupervised manner, from discrete observations of its vector field under the learning bias of Hamilton's equations. Yet Hamiltonian dynamics are often complicated, especially in higher dimensions where the state space of the Hamiltonian system is large relative to the number of samples. A recently discovered remedy to alleviate the complexity between state variables in the state space is to leverage the additive separability of the Hamiltonian system and embed that additive separability into the Hamiltonian neural network. Following the nomenclature of physics-informed machine learning, we propose three _separable Hamiltonian neural networks_. These models embed additive separability within Hamiltonian neural networks. The first model uses additive separability to quadratically scale the amount of data for training Hamiltonian neural networks. The second model embeds additive separability within the loss function of the Hamiltonian neural network. The third model embeds additive separability through the architecture of the Hamiltonian neural network using conjoined multilayer perceptrons. We empirically compare the three models against state-of-the-art Hamiltonian neural networks, and demonstrate that the separable Hamiltonian neural networks, which alleviate complexity between the state variables, are more effective at regressing the Hamiltonian and its vector field. Hamiltonian dynamical systems, physics-informed neural networks, physics properties, system symmetries, additive separability ## I Introduction Modelling dynamical systems is a core challenge for science and engineering. The movement of a pendulum, the wave function of a quantum-mechanical system, the movement of fluids around the wing of a plane, the weather patterns under climate change, and the populations forming an ecosystem are spatio-temporal behaviours of physical phenomena described by dynamical systems. Hamiltonian systems [1] are a class of dynamical systems governed by Hamilton's equations, which indicate conservation of the Hamiltonian value of the system. A recent advancement in the modelling of dynamical systems is Hamiltonian neural networks [2, 3], which are physics-informed neural networks with learning biases given by Hamilton's equations and their corollaries [4]. Hamiltonian neural networks are universal function approximators [5] capable of modelling non-linear multivariate functions [6]. They use a learning bias [4] based on physics information regarding Hamilton's equations [2, 3] to aid the neural network in converging towards solutions that adhere to physics laws [4]. Hamiltonian neural networks regress, in an unsupervised manner, the vector field and the Hamiltonian of a dynamical system from discrete observations of its state space or vector field under the learning bias of Hamilton's equations, and outperform their physics-uninformed counterparts in doing so [7]. However, Hamiltonian dynamics are often complicated and chaotic, especially in higher dimensional systems such as the Toda Lattice and Hénon-Heiles systems. The state space of the Hamiltonian system is large relative to the number of samples.
A redeeming feature of these systems is their additive separability. As highlighted by Gruver et al. in the ICML 2022 spotlight paper, additive separability "_allowed the physics-informed neural network to avoid [...] artificial complexity from its coordinate system_" (the input variables) and improved its performance [8]. This motivates further informing Hamiltonian neural networks of additive separability to alleviate the complexity between state variables of Hamiltonian systems. The main technical contribution of this work is the embedding of additive separability into a Hamiltonian neural network to regress a Hamiltonian and vector field. We propose three Hamiltonian neural networks that independently embed additive separability using three modes of biases. We call this family of Hamiltonian neural networks _separable Hamiltonian neural networks_. The three separable Hamiltonian neural networks follow the nomenclature of Karniadakis et al. [4] and embed additive separability using three modes of biases: observational bias, learning bias, and inductive bias. The first model embeds an observational bias by training on newly generated data that embody separability. The second model embeds a learning bias through the loss function of the Hamiltonian neural network. The third model embeds an inductive bias through the architecture of the Hamiltonian neural network by means of conjoined multilayer perceptrons. We empirically evaluate the performance of the proposed models against a baseline Hamiltonian neural network on a variety of representative additively separable Hamiltonian systems. We compare their performances in regressing an additively separable Hamiltonian and a vector field. In this paper, Section II presents the necessary background on dynamical, Hamiltonian and separable Hamiltonian systems. Section III synthesises the related work and positions the main contribution. Section IV introduces the proposed separable Hamiltonian neural networks. Section V compares the proposed models against the baseline Hamiltonian neural network in regressing additively separable Hamiltonians and vector fields. Section VI concludes the paper. ## II Background ### _Dynamical Systems and Hamiltonian Systems_ Dynamical systems theory [9] studies the temporal dynamics, or time evolution, of dynamical systems. The phase or state space of a dynamical system is an \(\mathbb{R}^{2\times n}\) multidimensional space that represents all possible states of a dynamical system comprising the combination of the system's \(2\times n\) state variables [9], also called degrees of freedom, or parameters. Without loss of generality, we consider autonomous, time-independent systems. The dynamics of an autonomous system are captured by the vector field [9, 10] of dimension \(\mathbb{R}^{2\times n}\), formed by the time derivatives of the state variables. A Hamiltonian system [1] is a dynamical system characterised by a smooth, real-valued scalar function of the state variables [10] called the Hamiltonian function or simply the Hamiltonian, \(H\in\mathbb{R}\). The Hamiltonian system is governed by Hamilton's equations [1], a system of \(2\times n\) differential equations given in Equation 1 that define the vector field \(F(x,y)=\left(\frac{\mathrm{d}x}{\mathrm{d}t},\frac{\mathrm{d}y}{\mathrm{d}t}\right)\) [1, 11]. A Hamiltonian with \(2\times n\) state variables has a dimension, or axis, of \(n\).
Conventionally, the set of variables can be evenly split into two sets called generalised variables, denoted \(\vec{x}\) for position and \(\vec{y}\) for momentum. \[\frac{\mathrm{d}\vec{x}}{\mathrm{d}t}=\frac{\partial H(\vec{x},\vec{y})}{\partial\vec{y}},\quad\frac{\mathrm{d}\vec{y}}{\mathrm{d}t}=-\frac{\partial H(\vec{x},\vec{y})}{\partial\vec{x}}. \tag{1}\] A classic example of a Hamiltonian system is the non-linear pendulum. Figure 1(a) shows the vector field of a non-linear pendulum in its two-dimensional phase space with state variables position (angle when oscillating in a plane) and momentum (mass multiplied by velocity). The vector field is formed by the time derivatives of the state variables. For a non-linear pendulum of unitary mass, the Hamiltonian is the sum of kinetic and potential energy. The heatmap in Figure 1(a) shows the value of the Hamiltonian in the phase space. ### _Separable Hamiltonian Systems_ A Hamiltonian system is separable if the Hamiltonian can be separated into additive terms, each dependent on either \(\vec{x}\) or \(\vec{y}\), where \(\vec{x}\) and \(\vec{y}\) are disjoint subsets of the state variables of a Hamiltonian system [12]. The separable Hamiltonian is defined \(H(\vec{x},\vec{y})=T(\vec{x})+V(\vec{y})\) where \(T\) and \(V\) are arbitrary functions [12]. Furthermore, the mixed partial derivative of the separable Hamiltonian is zero following Equation 2. \[\frac{\partial^{2}H}{\partial x\partial y}=\frac{\partial}{\partial x}\left(\frac{\partial(T(\vec{x})+V(\vec{y}))}{\partial y}\right)=\frac{\partial}{\partial x}\left(\frac{\partial V(\vec{y})}{\partial y}\right)=0. \tag{2}\] ### _Examples of Separable Hamiltonian Systems_ For illustration and comparative empirical evaluation of the models presented, we consider eight Hamiltonian systems shown in Table I, of which five are classical and mechanical and three are abstract. These allow the models to demonstrate their performance in predicting a range of Hamiltonian dynamics comprising different functions, different values of \(n\), and well-behaved or chaotic dynamics, from observations. ## III Related Work ### _Predicting Function Values and Vector Fields_ There are multiple statistical methods to regress correlated vector-valued functions [13, 14, 15, 16]. Hastie et al. [6] addressed the regressing of vector fields using multiple output regression and machine learning. State-of-the-art works regress the vector field of a dynamical system using neural networks that model ordinary [17] and partial [18] differential equations. For the regressing of Hamiltonian dynamical systems, Bertalan et al. [2] and Greydanus et al. [3] independently use physics-informed machine learning methods to regress the value of the Hamiltonian from multiple evenly-spaced samples along multiple Hamiltonian trajectories. Our work emulates theirs, by embedding Hamilton's equations within the loss function of a neural network to regress the Hamiltonian [3] and using automatic differentiation of the regressed Hamiltonian to yield the regressed vector field [3]. However, our work uses instantaneous observations of the Hamiltonian vector field, which are sufficient to train the Hamiltonian neural network. Recent advancements in regressing Hamiltonian vector fields use neural ordinary differential equations [19, 20, 21, 22, 23] and leverage the symplectic property of the Hamiltonian. They use symplectic integration to regress the Hamiltonian vector field.
Some further leverage Hamiltonian separability [19, 21] by using a Leapfrog integrator. Others additionally require the Hamiltonian to be mechanical [22]. Neural ordinary differential equations-based works require trajectories of the Hamiltonian system as input while our model only requires instantaneous observations of the Hamiltonian vector field. ### _Biases in Neural Networks_ Karniadakis et al. focus on three modes of biasing a regression model: observational bias, learning bias, and inductive bias [4]. Observational biases are introduced directly through data that embody the underlying physics, or carefully crafted data augmentation procedures. With sufficient data to cover the input domain of a regression task, machine learning methods have demonstrated remarkable power in achieving accurate interpolation between the dots [4]. Fig. 1: (a) Non-linear pendulum vector field (black arrows) and Hamiltonian (heatmap), (b) Random samples of the non-linear pendulum vector field. Learning biases are soft constraints introduced by appropriate loss functions, constraints and inference algorithms that modulate the training phase of a machine learning model to explicitly favour convergence towards solutions that adhere to the underlying physics [4]. Inductive biases are prior assumptions incorporated by tailored interventions to a machine learning model architecture, so regressions are guaranteed to implicitly and strictly satisfy a set of given physical laws [4]. Hamiltonian neural networks leverage learning biases and use Hamilton's equations as soft constraints in the loss function of the neural network to favour convergence toward the Hamiltonian [2, 3]. Seminal work on embedding additive separability within neural networks utilised block diagonal matrices to regress elements of an additively separable finite element problem [24]. Recently, Zhong et al. [22] leveraged the additive separability of mechanical Hamiltonians as an inductive bias to design neural ordinary differential equations. Gruver et al. [8] empirically examined these neural networks and found that their improved generalization resulted from the bias of a second-order structure [8], which arose from the additive separability of the modelled mechanical Hamiltonian and "_allowed the physics-informed neural network to avoid [...] artificial complexity from its coordinate system_ (the input variables)" and improved its performance [8]. Our work also exploits the additive separability of the modelled Hamiltonian: it incorporates knowledge regarding the additive separability of the Hamiltonian function within the Hamiltonian neural network, in the style of Karniadakis' physics-informed machine learning [4], so that regressions sought of the Hamiltonian function and vector field are guaranteed to implicitly or explicitly satisfy this separability. ## IV Methodology Four Hamiltonian neural networks are compared for the task of regressing the Hamiltonian and vector field of a Hamiltonian system. One, the baseline, is uninformed of the additive separability of the Hamiltonian system. Three proposed models are informed via observational, learning and inductive biases respectively. Subsections IV-A, IV-B, IV-C, and IV-D introduce the four models. Section V empirically compares the models on their abilities to perform the task.
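The training data for all four models are instantaneous samples of the vector field defined by Equation 1. For the running pendulum example, such data can be generated as in the following minimal sketch (the sampling domain here is purely illustrative; the Hamiltonian is the standard unit-mass pendulum energy up to an additive constant):

```python
import numpy as np

# Unit-mass non-linear pendulum: H(x, y) = y**2 / 2 + (1 - cos(x)),
# an additively separable Hamiltonian. Hamilton's equations (Eq. 1)
# give dx/dt = dH/dy = y and dy/dt = -dH/dx = -sin(x).
def pendulum_vector_field(x, y):
    return y, -np.sin(x)

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 512)        # positions
y = rng.uniform(-1.0, 1.0, 512)            # momenta
dxdt, dydt = pendulum_vector_field(x, y)   # samples as in Fig. 1(b)
```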
### _The Baseline Hamiltonian Neural Network_ We adapt Hamiltonian neural networks (HNNs) [2, 3] (leftmost, Figure 2) for the task of regressing the Hamiltonian and vector field of a Hamiltonian system from random samples of the vector field. Hamiltonian neural networks inform a neural network that a system is Hamiltonian by embedding a Hamiltonian learning bias into the neural network. Equation 3 defines the loss function of the Hamiltonian neural network. \(f_{0}\) is an arbitrary pinning term that "pins" the regressed Hamiltonian to one among several solutions that are equal modulo an additive constant, and reduces the search space for convergence. \(f_{1}\) and \(f_{2}\) are Hamilton's equations corresponding to Equation 1. \(f_{0}\), \(f_{1}\) and \(f_{2}\) introduce biases that favour the convergence of the Hamiltonian neural network toward the underlying physics of the regressed Hamiltonian. \(f_{*}\) in Equation 3 defines the loss function of the neural network as a linear combination of equations \(f_{0}\) to \(f_{2}\). \(c_{k}\) is the coefficient of each \(f_{k}\). One can assume that \(c_{k}=1\) although additional knowledge of the system can be used to emphasise any \(f_{k}\) [2]. \[f_{0}=\left(\hat{H}(\vec{x_{0}},\vec{y_{0}})-H_{0}\right)^{2}, f_{1}=\left(\frac{\partial\hat{H}}{\partial\vec{y}}-\frac{\mathrm{d}\vec{x}}{\mathrm{d}t}\right)^{2},\] \[f_{2}=\left(\frac{\partial\hat{H}}{\partial\vec{x}}+\frac{\mathrm{d}\vec{y}}{\mathrm{d}t}\right)^{2}, f_{*}(\vec{x},\vec{y},\frac{\mathrm{d}\vec{x}}{\mathrm{d}t},\frac{\mathrm{d}\vec{y}}{\mathrm{d}t};w)=\sum_{k=0}^{2}c_{k}f_{k}. \tag{3}\] To perform the task of regressing the Hamiltonian and vector field of an \(n\)-dimensional Hamiltonian system, the Hamiltonian neural network uses instantaneous, random samples of the \(2\times n\) state variables and the \(2\times n\) vectors, like those seen in Figure 1b. The state variables are input to the model. The output of the model is the regression surrogate, \(\hat{H}\), which is an estimator of the Hamiltonian \(H\). The training is supervised through the loss function, which uses the \(2\times n\) vectors corresponding to the \(2\times n\) state variables that were input to the model. Via gradient descent, \(f_{*}\) is minimised. The derivative of the surrogate \(\hat{H}\) at all input state variables is the surrogate vector field. It is computed via automatic differentiation [25] of the Hamiltonian neural network (a code sketch of this loss is given below). The separable Hamiltonian neural networks introduced in subsections IV-B, IV-C and IV-D adopt the Hamiltonian learning bias of the Hamiltonian neural network and further embed biases regarding additive separability. They perform the task of regressing the Hamiltonian and vector field in the same way. ### _Embedding a Separability Observational Bias within a Hamiltonian Neural Network_ The baseline Hamiltonian neural network can be informed of additive separability by embedding an observational bias into the Hamiltonian neural network. Given data of the vector field comprising instantaneous state variables and vectors, additive separability is used to quadratically scale the amount of data. Training the Hamiltonian neural network on the new data embeds the observational bias and allows the model to regress a surrogate Hamiltonian that reflects the additive separability of the data. The model, with an embedded observational bias, is a separable Hamiltonian neural network. Given original data comprising samples of tuples \((\vec{x},\vec{y})\), new samples are generated, as described after the following code sketch.
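The sketch below illustrates how the baseline loss of Equation 3 can be assembled with automatic differentiation (illustrative PyTorch-style code with our own names; `model` is assumed to map a batch of states \((\vec{x},\vec{y})\) to \(\hat{H}\), and all coefficients \(c_{k}\) are taken to be 1):

```python
import torch

def hnn_loss(model, x, y, dxdt, dydt, x0, y0, H0):
    # x, y: (N, n) state samples; dxdt, dydt: observed vector field.
    x = x.clone().requires_grad_(True)
    y = y.clone().requires_grad_(True)
    H_hat = model(torch.cat([x, y], dim=-1))
    dH_dx, dH_dy = torch.autograd.grad(H_hat.sum(), (x, y),
                                       create_graph=True)
    f0 = (model(torch.cat([x0, y0], dim=-1)) - H0).pow(2).mean()  # pinning term
    f1 = (dH_dy - dxdt).pow(2).mean()  # Hamilton's equation for dx/dt
    f2 = (dH_dx + dydt).pow(2).mean()  # Hamilton's equation for dy/dt
    return f0 + f1 + f2
```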
Additive separability means the generation comprises new combinations of the \(\vec{x}\) and \(\vec{y}\) from the original data. As the Hamiltonian is additively separable, the first derivatives of the new samples are dependent solely on \(\vec{x}\) or \(\vec{y}\), and can be inferred from the original samples at the respective values of \(\vec{x}\) or \(\vec{y}\). The amount of data available to train the Hamiltonian neural network has increased. Consider original data comprising two samples \((x_{1},y_{1})\) and \((x_{2},y_{2})\) in blue in Figure 3(a). With additive separability, two new samples created are \((x_{1},y_{2})\) and \((x_{2},y_{1})\) in red. Figure 3(b) shows that as more samples from the original data (in blue) are available, quadratically more new samples (in red) can be created. Generally, up to \(N\times(N-1)\) new samples can be created from original data comprising \(N\) samples. The observational bias creates more data to improve coverage of the input domain of the Hamiltonian regression task, and the regressed surrogate Hamiltonian reflects the additive separability of the data. This improves regression performance but increases the time taken for forward and backward propagation of the separable Hamiltonian neural network. In selecting the optimal number of samples, there is a trade-off between regression performance and training time. ### _Embedding a Separability Learning Bias within a Hamiltonian Neural Network_ The baseline Hamiltonian neural network can be informed of additive separability by embedding a learning bias. The resulting separable Hamiltonian neural network with learning bias (centre, Figure 2) favours convergence towards a surrogate Hamiltonian that is additively separable. \(f_{3}\) in Equation 4 is the mixed partial derivative of the surrogate Hamiltonian \(\hat{H}\) corresponding to Equation 2. \(f_{*}^{sep}\) is the loss function of the separable Hamiltonian neural network with learning bias. It is a linear combination of equations \(f_{0}\) to \(f_{3}\) from Equations 3 and 4. It introduces a bias that favours convergence of the Hamiltonian neural network toward a surrogate Hamiltonian that is additively separable. \[f_{3}=\left(\frac{\partial^{2}\hat{H}}{\partial x\partial y}\right)^{2}\forall x\in\vec{x},y\in\vec{y},\quad f_{*}^{sep}=\sum_{k=0}^{3}c_{k}f_{k}. \tag{4}\] A larger \(c_{3}\) allows the separable Hamiltonian neural network to emphasise \(f_{3}\) and additive separability of the surrogate Hamiltonian, but may also decrease the emphasis of \(f_{0}\), \(f_{1}\) and \(f_{2}\), presenting a trade-off in the optimal value of \(c_{3}\). Fig. 3: (a) \(N\times(N-1)=2\) new samples are created from \(N=2\) samples, (b) \(N\times(N-1)=12\) new samples are created from \(N=4\) samples. Fig. 2: Architecture of the baseline Hamiltonian neural network and proposed separable Hamiltonian neural network with observational bias (left), proposed separable Hamiltonian neural network with learning bias (centre), and proposed separable Hamiltonian neural network with inductive bias (right). ### _Embedding a Separability Inductive Bias within a Hamiltonian Neural Network_ A baseline Hamiltonian neural network can be informed of additive separability by embedding an inductive bias. The resulting separable Hamiltonian neural network with inductive bias (rightmost, Figure 2) regresses a surrogate Hamiltonian that implicitly and strictly satisfies additive separability.
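A minimal sketch of such a conjoined architecture, anticipating the detailed description in the next paragraphs (the widths and softplus activation follow the setup of Section V; the class and attribute names are ours):

```python
import torch
import torch.nn as nn

class SeparableHNN(nn.Module):
    # Two conjoined MLPs, one per disjoint variable set, joined only by
    # a sum at the output, so H_hat(x, y) = T_hat(x) + V_hat(y) and the
    # mixed partial derivative of Eq. 2 vanishes by construction.
    def __init__(self, n, width):
        super().__init__()
        def branch():
            return nn.Sequential(nn.Linear(n, width), nn.Softplus(),
                                 nn.Linear(width, width), nn.Softplus(),
                                 nn.Linear(width, 1))
        self.T_net, self.V_net = branch(), branch()

    def forward(self, x, y):
        return self.T_net(x) + self.V_net(y)  # simple summation layer
```

In the parallel variant evaluated in Experiment 3, the two branches are instead realised as a single network whose weight and bias tensors carry a leading dimension of two, so that both branches propagate in one batched pass.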
The proposed separable Hamiltonian neural network with inductive bias is not fully-connected. The model comprises two smaller neural networks with the same number of layers, conjoined only at the output layer. Each smaller conjoined neural network has one output. Their sum (indicated by the dotted lines in Figure 2) is the surrogate Hamiltonian. The architecture of the proposed separable Hamiltonian neural network ensures additive separability as each smaller conjoined neural network has an input of either \(\vec{x}\) or \(\vec{y}\), and is, therefore, a function of either \(\vec{x}\) or \(\vec{y}\). During forward propagation, the sum of the conjoined neural networks ensures the surrogate Hamiltonian is always the sum of two independent functions of \(\vec{x}\) and \(\vec{y}\). The mixed partial derivative of the surrogate Hamiltonian is by design always zero. Additive separability is strictly satisfied. The two smaller conjoined neural networks can be trained consecutively or simultaneously in parallel; training in parallel minimises training time. Furthermore, the summation layer of the separable Hamiltonian neural network can utilise a simple sum or introduce weights and biases. The different implementations for the separable Hamiltonian neural network with inductive bias present a trade-off in finding its optimal implementation. ## V Performance Evaluation The four models are compared on the task of regressing the Hamiltonian and vector field of a Hamiltonian system. To reach the comparison, the proposed separable Hamiltonian neural networks must first be empirically studied and optimised. Thereafter, the baseline Hamiltonian neural network and three optimised models can be compared. In Experiment 1, the optimal number of samples to create from the original data for the separable Hamiltonian neural network with observational bias is empirically studied and identified. In Experiment 2, the optimal value of \(c_{3}\) for the separable Hamiltonian neural network with learning bias is empirically studied and identified. In Experiment 3, the optimal implementation for the separable Hamiltonian neural network with inductive bias is empirically studied and identified. Finally, in Experiment 4, the baseline model and three optimised separable Hamiltonian neural networks are compared on the task of regressing the Hamiltonian and vector field, and their respective training times. In finding the optimal implementations of the three informed variants, we only compare their performance on the task of regressing the vector field. This is because regressing the Hamiltonian involves finding the real-valued sum of the integrated vector field. Errors made by the models in regressing the vector field may be cancelled out when regressing the Hamiltonian. Therefore, a model that regresses the Hamiltonian well may not regress the vector field well, but a model that regresses the Hamiltonian vector field well can also regress the Hamiltonian well. For completeness, and to demonstrate this phenomenon, we regress and present results for both the Hamiltonian and the vector field in Experiment 4. For the regression of the Hamiltonian, the performance of the models is measured by the absolute or L1-error between the surrogate Hamiltonian of a model and the true Hamiltonian from test data. The absolute error is computed following Equation 5.
For the regression of the vector field, the performance of the models is measured by the vector error between the derivative of the surrogate Hamiltonian of a model and the true vector field from test data. The vector error is computed following Equation 6 [26]. \(\hat{v}\) is the regressed vector and \(v\) is the true vector. The test data set comprises \(d=s^{2n}\) vectors, with \(s=10\) evenly spaced states in each dimension of the phase space for each system. \[E_{H}=\frac{1}{d}\sum_{k=1}^{d}||\hat{H}_{k}-H_{k}||_{1}, \tag{5}\] \[E_{V}=\frac{1}{d}\sum_{k=1}^{d}\frac{||\hat{v}_{k}-v_{k}||_{2}}{||v_{k}||_{2}}. \tag{6}\] All experiments are evaluated over the Hamiltonian systems shown in Table I. The general experimental setup for all models in all experiments is as follows. Training data comprising \(512\) samples of the state variables and vector field are generated uniformly at random within the sampling domain for each Hamiltonian system. The sampling domains are shown in columns 2 and 3 of Table II. The samples comprise tuples of the state variables \((\vec{x},\vec{y})\) and their corresponding vectors or time derivatives. The models to be experimented on are designed with two hidden layers, an Adam optimizer and softplus activation. In training, 20% of the training data is set aside as validation data for a dynamic stopping criterion using validation-based early stopping [27] and a batch size of 80 is used. All models have an input layer with width \(2\times n\), two hidden layers with width shown in columns 6 and 8 of Table II, and one output layer with width one. Samples of state variables are input to the models. Samples of the vector field are used in the loss function of the models. All models are trained until convergence. In order to find an optimal bias-variance trade-off, the training will terminate if there is no decrease in the validation loss for 4,000 epochs in a row. The output of the models is the surrogate Hamiltonian. The surrogate vector field is computed via automatic differentiation of the surrogate Hamiltonian with respect to its inputs \(\vec{x}\) and \(\vec{y}\). It is equivalent to the vector field following Equation 1. All models are trained in PyTorch on a GeForce GTX1080 GPU with 32 GB RAM. The complete code in Python and results for the models discussed are available at github.com/zykhoo/SeparableNNs. All experiments are repeated for 20 random seeds. ### _Experiment 1: Optimising the Separable Hamiltonian Neural Network with Observational Bias_ This subsection details the experimental setup to empirically study the trade-off between regression performance and regression time taken for the separable Hamiltonian neural network with observational bias, by determining the optimal number of new samples to create from data with \(N\) samples. #### V-A1 Experimental Setup With original training data of size \(512\), 20% of the data is first set aside as validation data. The remaining 80% or \(N=409\) of training data comprising samples \((\vec{x}_{i},\vec{y}_{i})\) and time derivatives \((\frac{\mathrm{d}\vec{x}_{i}}{\mathrm{d}t},\frac{\mathrm{d}\vec{y}_{i}}{\mathrm{d}t})\forall i\in N\), is doubled by creating new data comprising samples \((\vec{x}_{i},\vec{y}_{i+1})\) with time derivatives \((\frac{\mathrm{d}\vec{x}_{i+1}}{\mathrm{d}t},\frac{\mathrm{d}\vec{y}_{i}}{\mathrm{d}t})\forall i\in N\) (since \(\frac{\mathrm{d}\vec{x}}{\mathrm{d}t}\) depends only on \(\vec{y}\), and \(\frac{\mathrm{d}\vec{y}}{\mathrm{d}t}\) only on \(\vec{x}\)), then appending this new data to the original data.
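A sketch of this crossing procedure, generalised to \(m\) shifts as the next paragraph describes (our own illustrative code; the cyclic index shift via `np.roll` is one convenient realisation, and the derivative pairing follows the separability argument above):

```python
import numpy as np

def cross_samples(x, y, dxdt, dydt, k):
    # Observational bias: for a separable H, dx/dt depends only on y and
    # dy/dt only on x, so the crossed sample (x_i, y_{i+m}) inherits
    # dx_{i+m}/dt and dy_i/dt from the original samples.
    xs, ys, dxs, dys = [x], [y], [dxdt], [dydt]
    for m in range(1, k + 1):
        shift = np.roll(np.arange(len(x)), -m)  # index i + m (cyclic)
        xs.append(x)
        ys.append(y[shift])
        dxs.append(dxdt[shift])
        dys.append(dydt)
    return (np.concatenate(xs), np.concatenate(ys),
            np.concatenate(dxs), np.concatenate(dys))
```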
Generally, the training data can be increased \(m\) times by creating new data comprising samples \((x_{i},y_{i+m})\) with time derivatives \((\frac{\mathrm{d}x_{i+m}}{\mathrm{d}t},\frac{\mathrm{d}\vec{y}_{i}}{\mathrm{d}t})\forall m\in[1,k]\), then appending all \(m\) sets of new samples to the original data. The separable Hamiltonian neural network with observational bias is set up following the leftmost architecture in Figure 2. All details of the setup, training and evaluation follow from the description above in Section V. The separable Hamiltonian neural network is trained and evaluated for models where \(k=\{1,2,3,4,5,10,20,30,40,50,100,200,300,400\}\). #### V-A2 Experimental Results We report the vector error and training time for the separable Hamiltonian neural network with observational bias in Tables III and IV. Generally, from Tables III and IV, as \(k\) increases, the vector error decreases and training time increases. The decrease in vector error is not proportional to the amount of data. The marginal improvement in vector error decreases as \(k\) increases. Furthermore, the time taken for each epoch increases proportionately to \(k\). However, the training time does not, because as \(k\) increases, the number of epochs required for convergence decreases. The time taken for the model to converge when \(k=400\) (where the number of samples is four hundred times that of the original data) is not 200 times more than the time taken for the model to converge when \(k=1\) (where the number of samples is double that of the original data). These suggest that smaller values of \(k\) are sufficient to cover the input domain and emphasise the additive separability of the regression task. Nonetheless, in general, the model with \(k=400\) regresses the vector field best, and the model with \(k=1\) is the fastest. A good trade-off that balances reducing the vector error and training time is the model with \(k=2\). ### _Experiment 2: Optimising the Separable Hamiltonian Neural Network with Learning Bias_ This subsection details the experimental setup to empirically study the trade-off between emphasising additive separability of the Hamiltonian regression task and \(f_{0}\), \(f_{1}\) and \(f_{2}\) in Equation 3. The optimal value of \(c_{3}\) is empirically determined. #### V-B1 Experimental Setup The separable Hamiltonian neural network with learning bias is set up following the middle architecture in Figure 2 with loss function Equation 4. All details of the setup, training and evaluation follow from the description above in Section V. The separable Hamiltonian neural network is trained and evaluated for cases where \(c_{3}=\{0.25,0.50,1.00,2.00,4.00\}\). #### V-B2 Experimental Results We report the vector error calculated following Equation 6 for the separable Hamiltonian neural network with learning bias in Table V. Table V shows that as \(c_{3}\) increases, the vector error decreases. Table VI shows that as \(c_{3}\) increases, the number of epochs and training time required increases. These suggest that as the value of \(c_{3}\) increases, the emphasis on additive separability increases, and the separable Hamiltonian neural network with learning bias places less emphasis on learning Hamilton's equations. As a result, the number of epochs required for the model to learn the surrogate Hamiltonian and converge increases, increasing training time. The model with \(c_{3}=4.00\) regresses the vector field best. The model with \(c_{3}=0.25\) is the fastest.
A good trade-off that balances the importance of additive separability and Hamilton's equations is \(c_{3}=1.00\), as it often outperforms the model with \(c_{3}=2.00\). ### _Experiment 3: Optimising the Separable Hamiltonian Neural Network with Inductive Bias_ This subsection details the experimental setup to empirically study the trade-off between different implementations of the separable Hamiltonian neural network with inductive bias. #### V-C1 Experimental Setup The separable Hamiltonian neural network with inductive bias is set up following the rightmost architecture in Figure 2. The width of each smaller conjoined neural network is shown in the second-to-last column of Table II. The proposed separable Hamiltonian neural network with inductive bias trains the two conjoined neural networks in two ways. Firstly, consecutively, where \(\vec{x}\) is first input to the first conjoined neural network, which outputs \(f(\vec{x})\), then \(\vec{y}\) is input to the second conjoined neural network, which outputs \(g(\vec{y})\). Their sum is the surrogate Hamiltonian. Secondly, simultaneously and in parallel, by designing each layer of the separable Hamiltonian neural network such that it appends the respective layers of the two conjoined neural networks. For the case where the model is trained in parallel, forward propagation through the proposed model is computed as \(x_{n}=\sigma(W_{n}\times x_{n-1}+B_{n})\) where \(\sigma\) is the activation function, \(x_{n}\) is the output of layer \(n\) calculated from the output of layer \(n-1\), \(W_{n}\) is the weight matrix of layer \(n\) of shape \(2\times L_{n}\times L_{n-1}\), where \(L_{n}\) and \(L_{n-1}\) are the widths of layers \(n\) and \(n-1\) respectively, and \(B_{n}\) is the bias matrix of layer \(n\) of shape \(2\times 1\times L_{n}\). The \(2\) in both the weight matrix and bias matrix corresponds to the _two_ disjoint subsets of the state variables of the additively separable Hamiltonian. With this architecture, both smaller conjoined neural networks are trained simultaneously in parallel, with each forward and backward propagation using one graphics processing unit. The separable Hamiltonian neural network with inductive bias is also trained and evaluated for five possible implementations of the summation layer. The zeroth implementation is a simple summation (sHNN-I (0)). The first implementation is a linear layer with fixed and equal weights, and no bias (sHNN-I (1)). The second implementation is a linear layer with fixed and equal weights, and a trainable bias (sHNN-I (2)). The third implementation is a linear layer with trainable weights, and no bias (sHNN-I (3)). The fourth implementation is a linear layer with trainable weights and bias (sHNN-I (4)). For the zeroth and first implementations, the last column in Table II shows the number of parameters of the model. The second, third and fourth implementations have one, two and three additional parameters respectively. All other details of the setup, training and evaluation follow from the description above in Section V. #### V-C2 Experimental Results We report the vector error calculated following Equation 6 for the separable Hamiltonian neural network with inductive bias in Tables VII and IX. From Table VII it is observed that training the conjoined neural networks in parallel and consecutively results in different vector errors and training time.
Intuitively, they should have the same vector errors, but closer analysis reveals they have different floating-point errors which cause their vector errors to diverge after many iterations. Generally, both separable Hamiltonian neural networks perform well in regressing the vector field as they have similar vector errors. However, from Table VIII, it is observed that the separable Hamiltonian neural network with conjoined neural networks trained in parallel is consistently faster than that which is trained consecutively. Therefore, it is preferred that the conjoined neural networks are trained in parallel. From Table IX, it can be observed that the simple summation (sHNN-I (0)) and linear layer with fixed and equal weights (sHNN-I (1)) have the same vector errors because they are identical in implementation. However, from Table X, it is observed that the simple summation (sHNN-I (0)) is consistently faster than the linear layer with fixed and equal weights (sHNN-I (1)). Generally, these two models also have the lowest vector errors among the five possible last-layer implementations. The second, third and fourth implementations have more trainable parameters but do not regress the vector field well. Additional trainable weights or a bias obfuscate the contributions of the conjoined neural network toward the surrogate Hamiltonian and introduce unnecessary complexities when regressing the surrogate Hamiltonian. Therefore, the zeroth implementation, with a simple summation, is the preferred implementation for the summation layer of the separable Hamiltonian neural network with inductive bias. ### _Experiment 4: Comparing the Four Variants on the Task of Regressing the Hamiltonian and Vector Field_ This subsection details the experimental setup to empirically compare the four models on the task of regressing the Hamiltonian and vector field of a Hamiltonian system. The four models are the baseline Hamiltonian neural network and the three proposed separable Hamiltonian neural networks with observational, learning and inductive biases. #### V-D1 Experimental Setup The Hamiltonian neural network is set up following the leftmost architecture in Figure 2. From subsections V-A, V-B and V-C, the optimal implementations of the various separable Hamiltonian neural networks are used. These are the separable Hamiltonian neural network with observational bias where \(k=2\), the separable Hamiltonian neural network with learning bias where \(c_{3}=1.00\) and the separable Hamiltonian neural network with inductive bias with a summation in the last layer. All details of the setup, training and evaluation follow from the description above in Section V. #### V-D2 Experimental Results We report the L1-error in regressing the Hamiltonian following Equation 5 and the vector error calculated following Equation 6 for all models in Table XI and Table XII respectively. We report the time taken to train each model in seconds in Table XIII. From Tables XI and XII, we observe that in general, all proposed separable Hamiltonian neural networks regress the Hamiltonian and vector field with a lower absolute error and vector error than the baseline Hamiltonian neural network. The proposed models leverage physics information regarding separability to penalise or prevent interaction between the state variables and this reduces the complexity of the Hamiltonian regression problem. The regression of the Hamiltonian and vector field is therefore improved.
However, from Table XIII, the baseline model is faster than all proposed models. Generally, from Tables XI, XII and XIII, we observe that among the three proposed models, the separable Hamiltonian neural network with observational bias has the lowest absolute error and vector error, while the separable Hamiltonian neural network with inductive bias is the fastest. The observational bias generates more samples of the data, and this emphasises additive separability and covers the input domain of the Hamiltonian system to ease the interpolation task of the model. Conversely, the models with learning and inductive bias only rely on emphasising additive separability to regress the Hamiltonian and vector field. The additional effect of the observational bias in covering the input domain of the Hamiltonian system allows the model to regress the vector field and Hamiltonian better. The model with inductive bias generally outperforms the model with learning bias as it restricts regressions of the Hamiltonian and vector field to strictly satisfy separability, therefore forcing the model to simplify a complex regression problem into two smaller ones. It is also observed that the relative performance of the models changes between Tables XI and XII. Regressing the Hamiltonian involves finding the sum of the integrated vector field, and errors made by the models in regressing the vector field may be cancelled out when regressing the Hamiltonian. From Table XIII, the proposed models generally require fewer epochs to converge as the knowledge of additive separability reduces the complexity of the Hamiltonian regression problem. However, the baseline model is the fastest to train. This is because the time taken for each epoch for the proposed models is longer. Compared to the baseline model, the proposed model with inductive bias is slower due to its conjoined architecture with higher-dimensional weight and bias matrices that slightly increase the forward and backward propagation time for each epoch. The proposed model with learning bias is even slower due to the additional time taken to compute the mixed partial derivative. The proposed model with observational bias is the slowest as it has several times more samples that linearly scale the training time per epoch given the same batch size. Among the proposed models, the model with inductive bias generally requires the fewest epochs to converge and less time per epoch. It is the fastest proposed model. The separable Hamiltonian neural network with inductive bias is the optimal separable Hamiltonian neural network as it outperforms the baseline in regressing the Hamiltonian and vector field and has the smallest trade-off in training time. ## VI Conclusion Four models are compared for the task of regressing the Hamiltonian and vector field of a Hamiltonian system from discrete observations of the vector field. One, the baseline Hamiltonian neural network, is uninformed of the additive separability of the Hamiltonian system. Three proposed separable Hamiltonian neural networks are informed via observational, learning and inductive biases respectively. All proposed separable Hamiltonian neural network models leverage additive separability to avoid artificial complexity between state variables. They are more effective than the baseline in regressing Hamiltonian vector fields and can converge within fewer epochs, but are generally slower in training.
The best model is the separable Hamiltonian neural network with inductive bias as it outperforms the baseline in the regression tasks and has the smallest trade-off in training time. We are now studying separable Hamiltonian neural networks that are simultaneously embedded with multiple biases. Preliminary results show that models embedded with both observational and inductive biases can regress Hamiltonian vector fields best. We are also working on using an inductive bias to recover the kinetic and potential energies of a Hamiltonian system for better interpretability, and dynamically testing for and embedding separability as an inductive bias by rewiring the Hamiltonian neural network on the fly.
2305.19295
Low Precision Quantization-aware Training in Spiking Neural Networks with Differentiable Quantization Function
Deep neural networks have been proven to be highly effective tools in various domains, yet their computational and memory costs restrict them from being widely deployed on portable devices. The recent rapid increase of edge computing devices has led to an active search for techniques to address the above-mentioned limitations of machine learning frameworks. The quantization of artificial neural networks (ANNs), which converts the full-precision synaptic weights into low-bit versions, emerged as one of the solutions. At the same time, spiking neural networks (SNNs) have become an attractive alternative to conventional ANNs due to their temporal information processing capability, energy efficiency, and high biological plausibility. Despite being driven by the same motivation, the simultaneous utilization of both concepts has yet to be thoroughly studied. Therefore, this work aims to bridge the gap between recent progress in quantized neural networks and SNNs. It presents an extensive study on the performance of the quantization function, represented as a linear combination of sigmoid functions, exploited in low-bit weight quantization in SNNs. The presented quantization function demonstrates the state-of-the-art performance on four popular benchmarks, CIFAR10-DVS, DVS128 Gesture, N-Caltech101, and N-MNIST, for binary networks (64.05\%, 95.45\%, 68.71\%, and 99.43\% respectively) with small accuracy drops and up to 31$\times$ memory savings, which outperforms existing methods.
Ayan Shymyrbay, Mohammed E. Fouda, Ahmed Eltawil
2023-05-30T09:42:05Z
http://arxiv.org/abs/2305.19295v1
Low Precision Quantization-aware Training in Spiking Neural Networks with Differentiable Quantization Function

###### Abstract

Deep neural networks have been proven to be highly effective tools in various domains, yet their computational and memory costs restrict them from being widely deployed on portable devices. The recent rapid increase of edge computing devices has led to an active search for techniques to address the above-mentioned limitations of machine learning frameworks. The quantization of artificial neural networks (ANNs), which converts the full-precision synaptic weights into low-bit versions, emerged as one of the solutions. At the same time, spiking neural networks (SNNs) have become an attractive alternative to conventional ANNs due to their temporal information processing capability, energy efficiency, and high biological plausibility. Despite being driven by the same motivation, the simultaneous utilization of both concepts has yet to be thoroughly studied. Therefore, this work aims to bridge the gap between recent progress in quantized neural networks and SNNs. It presents an extensive study on the performance of the quantization function, represented as a linear combination of sigmoid functions, exploited in low-bit weight quantization in SNNs. The presented quantization function demonstrates the state-of-the-art performance on four popular benchmarks, CIFAR10-DVS, DVS128 Gesture, N-Caltech101, and N-MNIST, for binary networks (64.05%, 95.45%, 68.71%, and 99.43% respectively) with small accuracy drops and up to 31\(\times\) memory savings, which outperforms existing methods.

spiking neural networks, memory compression, quantization, binarization, edge computing

## I Introduction

Spiking neural networks (SNNs) have recently become a popular research field for machine learning enthusiasts and neuromorphic hardware engineers due to their temporal data processing, biological plausibility, and energy efficiency [1, 2, 3]. These properties make them ideal for reducing hardware implementation costs on resource-limited devices for real-world artificial intelligence (AI) applications [4]. Furthermore, recent advancements in event-based audio and vision sensors have opened up many opportunities for various domain-based applications [4, 5]. Choosing the right neural network (NN) type can significantly impact resource utilization, but there are additional ways to alleviate the disadvantages of deploying large models on edge devices. While increasing the model's size leads to better performance, it also results in higher memory utilization and inference time. There are three widely used NN compression methods to reduce the memory and computation costs of the model without significant performance degradation: quantization, pruning, and knowledge distillation [6, 7, 8]. Quantization is a process of reducing the number of bits used to represent synaptic weights. Storing and operating on reduced bit precision weights allows for significantly improved memory savings and power efficiency [6]. Quantization methods are divided into two types depending on whether the quantization process occurs during or after network training. These types are quantization-aware training (QAT) and post-training quantization (PTQ). QAT usually results in lower accuracy loss than PTQ. However, training a quantized model from scratch increases the quantization time [9].
In PTQ, one can choose any pre-trained model and perform quantization much faster; however, getting good performance in low-bit precision is difficult [9]. At the same time, QAT can reduce the bit precision down to 1 bit. There are different scenarios where each of these methods can be preferable. QAT is preferable when the quantized model is to be deployed for a long time, and hardware efficiency and accuracy are the main goals. PTQ is a better option when there is insufficient data to train the model and when fast and simple quantization is needed. The state of the art has used various techniques to quantize SNN parameters in either a PTQ or QAT manner. While weight quantization has been applied to different artificial neural networks (ANNs), its benefits for SNNs are yet to be thoroughly studied. Only a limited number of works have reported applying weight quantization in SNNs. Amir et al. [10] use a deterministic rounding function to quantize a convolutional SNN to be deployed on a TrueNorth neuromorphic chip. Eshraghian and Lu [11] explore how adjusting the firing threshold of an SNN can help with the deterministic binarization of the weights. Schaefer and Joshi [12] train SNN models with integer weights and other parameters having variable precision using a deterministic rounding function. Lui and Neftci [13] show how a layer-wise Hessian trace analysis can quantify how a change in weights influences the loss. The authors claim that this metric can help with the intelligent allocation of layer-specific bit precisions while training SNNs. Rathi et al. [14] perform quantization and pruning simultaneously by exploiting the natural advantages of the spike timing-dependent plasticity (STDP) learning rule. The authors use the weight distribution to create weight groups and quantize them by averaging. Putra and Shafique [15] propose a framework for quantizing SNN parameters in PTQ and QAT using truncation, round-to-nearest, and stochastic rounding techniques. Hu et al. [16] present an STDP-based weight quantization technique that uses a round function. Kheradpisheh et al. [17] use full-precision parameters in the backward pass and signs of the synaptic weights in the forward pass to binarize synaptic weights. Another binarization method is proposed by Kim et al. [18], in which binary weights are learned in an event-based manner with the help of a constraint function and a Lagrange multiplier. The authors also use event-driven random backpropagation instead of STDP. Most existing quantization methods are based on non-differentiable quantization functions, making it infeasible to compute gradients to train NNs by the most widely used training algorithm - the gradient descent algorithm [19]. The quantization functions are approximated for gradient calculation at the backward propagation to make the quantization methods compatible with the gradient descent algorithm. This approximation function is called the straight-through estimator (STE) [19, 20]. However, since the approximation function cannot fully describe the quantization function, it produces gradients that differ from the \(true\) gradients. SNNs also operate on discrete, non-differentiable spiking signals. Thus, the quantization of SNN models involves two non-differentiable functions: spikes and quantization. The non-differentiable spike function is usually handled through the surrogate gradient method (SGM), which approximates the spikes with a differentiable surrogate analog in the backward pass to train the network [21].
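To make these two approximations concrete, both can be written as custom autograd functions whose backward passes deliberately differ from the true (zero almost everywhere) derivatives. The following is a generic PyTorch sketch, not code from any of the cited works:

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Round-to-nearest in the forward pass; straight-through
    estimator (identity gradient) in the backward pass."""
    @staticmethod
    def forward(ctx, w):
        return torch.round(w)

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pretend d(round)/dw = 1 so gradients flow unchanged
        # through the otherwise piecewise-constant rounding step.
        return grad_output

class SpikeSG(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth sigmoid
    derivative as the surrogate gradient (SGM) in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).to(v.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        s = torch.sigmoid(v)
        return grad_output * s * (1.0 - s)  # d/dv of sigmoid(v)

w = torch.randn(4, requires_grad=True)
RoundSTE.apply(w).sum().backward()  # w.grad is all ones under the STE
```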
As both non-differentiable functions are approximated by SGM and STE for the training, quantized SNN models may encounter significant gradient mismatch and fail to achieve competitive performance compared to a full-precision counterpart. In this work, we propose adopting the QAT-based ANN differentiable quantization function by Yang et al. [22] for SNN quantization to reduce the number of gradient approximations involved in training and quantizing SNN models. This quantization function is based on a linear combination of sigmoid functions. Due to the benefits of using SNNs and quantization together, this paper aims to report findings on the performance of the mentioned quantization function for SNN quantization. The main contributions of this paper are:

1. We present the framework for quantizing SNN models using a differentiable quantization function based on a linear combination of sigmoid functions. The proposed QAT method demonstrates the effectiveness of using a differentiable quantization function to reduce the number of approximations in the training and quantization of SNN models.
2. We evaluate the quantization method on popular benchmark datasets such as CIFAR10-DVS, DVS128 Gesture, N-Caltech101, and N-MNIST. We find that the presented quantization method outperforms the state-of-the-art methods in terms of accuracy and memory savings.

The paper is organized as follows: Section II provides the mathematical modeling of spiking neurons and describes the quantization method, represented as a linear combination of sigmoid functions, as well as formulates it in the scope of SNN quantization; Section III provides the experimental setup; Section IV presents and discusses the obtained results; and Section V summarizes the findings on using the proposed differentiable quantization function for SNN quantization.

## II Proposed Method

### _Spiking Neural Network_

In a real neuron, impulse transmission is determined by differential equations corresponding to the biophysical processes of potential formation on the neuron membrane. A leaky integrate-and-fire (LIF) neuron is one of the most widely used mathematical models employed to accurately mimic the biological behavior of the neuron [2, 23, 24]. This model assumes that the input, \(X(t)\), comes as a voltage increment, which causes the hidden state (membrane potential), \(H(t)\), to be updated, which in turn causes the output, \(S(t)\), to emit spikes. \(V(t)\) represents the membrane potential after the spike trigger. Its behavior can be described using the following equations: \[H(t)=f(V(t-1),X(t)) \tag{1}\] \[S(t)=g(H(t)-V_{threshold})=\Theta(H(t)-V_{threshold}) \tag{2}\] \[V(t)=H(t)(1-S(t))+V_{reset}S(t) \tag{3}\] where \(X(t)\) is the input to the neuron at timestep \(t\), \(V_{threshold}\) is the threshold for spike firing, and \(V_{reset}\) is the potential to which the neuron returns after the spike. \(f(V(t-1),X(t))\) denotes the state update equation of the neuron.
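As a toy illustration (our own code, not the authors'), one discrete-time step of Eqs. (1)-(3) can be sketched as follows; the update rule passed in anticipates the LIF equation, Eq. (4), given next, with \(V_{reset}=0\):

```python
import numpy as np

def heaviside(x):
    return (x >= 0).astype(float)

def neuron_step(V_prev, X_t, f, V_threshold=1.0, V_reset=0.0):
    """One discrete-time step of Eqs. (1)-(3): charge, fire, reset.
    `f` is the neuron-specific state update equation."""
    H_t = f(V_prev, X_t)                     # Eq. (1): charge the membrane
    S_t = heaviside(H_t - V_threshold)       # Eq. (2): spike above threshold
    V_t = H_t * (1.0 - S_t) + V_reset * S_t  # Eq. (3): hard reset where spiking
    return S_t, V_t

# LIF update with membrane time constant tau (see Eq. (4) below)
tau, V_reset = 2.0, 0.0
lif = lambda V, X: V + (X - (V - V_reset)) / tau
S, V = neuron_step(np.zeros(3), np.array([0.5, 2.5, 5.0]), lif)
print(S, V)  # -> [0. 1. 1.] [0.25 0.   0.  ]
```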
For the LIF neuron, the following update function is used, where \(\tau\) is a membrane time constant: \[H(t)=f(V(t-1),X(t))=V(t-1)+\frac{1}{\tau}\left(X(t)-\left(V(t-1)-V_{reset}\right)\right) \tag{4}\] For the spike generation function \(g(t)\), the Heaviside step function \(\Theta(x)\) is used, which has the following definition: \[\Theta(x)=\begin{cases}1,&\text{for }x\geq 0\\ 0,&\text{for }x<0\end{cases} \tag{5}\]

### _Quantization_

The quantization function presented in [22] is represented by a linear combination of sigmoid functions to eliminate gradient approximations while training and quantizing ANNs. This work uses this method to reduce the number of approximations for gradient computations while training and quantizing SNN models by the SGM rule. At inference, the quantization function is formulated as a combination of several unit-step functions with respective scaling and biasing: \[W_{Q}=\sum_{i=1}^{n}s_{i}\Theta(\beta W-b_{i})-o \tag{6}\] In Eq. (6), \(W\) is the full-precision weight that needs to be quantized, \(W_{Q}\) is the discrete output value, and \(\Theta\) is the unit step function. The possible discrete values that \(W_{Q}\) can take are pre-defined. \(\beta\) is an input scale factor that maps the range of \(W\) to the range of \(W_{Q}\). The number of unit step functions is defined by the necessary number of quantization levels, \((n+1)\). \(s_{i}\) represents the difference between adjacent quantization levels, and \(b_{i}\) defines their border. The term \(o\) is used to place the center of the quantization function at \(0\), \(o=\frac{1}{2}\sum_{i=1}^{n}s_{i}\). Since the step function is not differentiable, training feedforward networks directly with the backpropagation method is not feasible. For training, the step functions are therefore replaced with sigmoid functions, while the step functions are kept for inference. The following sigmoid function has a term \(T\) (called 'temperature') that controls the gap between two quantization levels: \[\sigma(Tx)=\frac{1}{1+e^{-Tx}} \tag{7}\] In Eq. (7), increasing \(T\) makes the border between quantization levels sharper, making this sigmoid function closer to a step function while remaining differentiable. However, choosing a large value for \(T\) at the initial epochs of training yields poor training since most of the gradients will be zero. Hence, networks should be trained with small values of \(T\) at the beginning, which is later increased by a small amount during each epoch.

### _Forward pass_

During forward propagation, weights in each layer of the network are mapped to discrete integers according to the quantization function. In the inference stage, a step function, Eq. (8), is used, and in the training stage, it is replaced by a sigmoid function, Eq. (9). \[W_{Q}=\alpha\left(\sum_{i=1}^{n}s_{i}\Theta(\beta W-b_{i})-o\right) \tag{8}\] \[W_{Q}=\alpha\left(\sum_{i=1}^{n}s_{i}\sigma(\beta W-b_{i})-o\right) \tag{9}\] There are two kinds of parameters in Eq. (9): (1) learned during training and (2) pre-defined. The learned parameters are \(\alpha\) and \(\beta\), the scale factors of the output and input, respectively. The pre-defined parameters are the set of discrete integers representing the quantization levels, as well as \(b_{i}\), \(T\), \(s_{i}\), and \(o\). Each layer has its own learned parameters and can have different pre-defined parameters.
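For illustration, the forward-pass quantizer of Eqs. (6)-(9) can be sketched in a few lines. This is our own code, not the authors' implementation; following Eq. (7), the temperature \(T\) is applied to the sigmoid argument:

```python
import torch

def quantize(W, levels, alpha, beta, T, training=True):
    """Sketch of Eqs. (6)-(9): a linear combination of n sigmoid
    (training) or unit-step (inference) functions.  `levels` is the
    sorted list of target integers, e.g. [-1, 1] for binary weights."""
    levels = torch.as_tensor(levels, dtype=W.dtype)
    s = levels[1:] - levels[:-1]           # gaps between adjacent levels
    b = 0.5 * (levels[1:] + levels[:-1])   # borders at the midpoints
    o = 0.5 * s.sum()                      # centers the function at zero
    x = beta * W.unsqueeze(-1) - b         # shape (..., n)
    steps = torch.sigmoid(T * x) if training else (x >= 0).to(W.dtype)
    return alpha * ((s * steps).sum(-1) - o)

# Binary case with two levels {-1, +1}; a larger temperature T makes
# the sigmoid border sharper, approaching a true step function.
W = torch.randn(5)
print(quantize(W, [-1.0, 1.0], alpha=1.0, beta=1.0, T=40.0, training=False))
```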
### _Backward pass_

The gradients of the loss \(L\) should be backpropagated during the training stage, considering the quantization function. The mean squared error (MSE) loss is utilized as a loss function, which can be defined as: \[L=\frac{1}{T}\sum_{t=0}^{T-1}\frac{1}{N}\sum_{n=0}^{N-1}(S(t,n)-y(t,n))^{2} \tag{10}\] where \(T\) is the number of timesteps, \(N\) is the number of classes, \(S(t,n)\) is an output spike and \(y(t,n)\) is a true label at timestep \(t\) for class \(n\). The weighted inputs from layer \(l-1\) can be defined as \(X^{l}(t)=W_{Q}^{l-1}I^{l}(t)\), where \(W_{Q}^{l-1}\) is a quantized weight matrix and \(I^{l}(t)\) is an input spike matrix. Using Eqs. (1)-(2), we can formulate the gradients of the loss as follows: \[\frac{\partial L}{\partial W_{Q}^{l-1}}=\sum_{t=0}^{T-1}\frac{\partial L}{\partial H^{l}(t)}\cdot\frac{\partial H^{l}(t)}{\partial W_{Q}^{l-1}}=\sum_{t=0}^{T-1}\frac{\partial L}{\partial S^{l}(t)}\cdot\frac{\partial S^{l}(t)}{\partial H^{l}(t)}\cdot\frac{\partial H^{l}(t)}{\partial X^{l}(t)}\cdot I^{l}(t) \tag{11}\] where \(\frac{\partial S^{l}(t)}{\partial H^{l}(t)}=\Theta^{\prime}(H^{l}(t)-V_{threshold}^{l})\) and \(\frac{\partial H^{l}(t)}{\partial X^{l}(t)}=\frac{1}{\tau}\). Substituting these terms into Eq. (11) yields: \[\frac{\partial L}{\partial W_{Q}^{l-1}}=\sum_{t=0}^{T-1}\frac{\partial L}{\partial S^{l}(t)}\cdot\Theta^{\prime}(H^{l}(t)-V_{threshold}^{l})\cdot\frac{1}{\tau}\cdot I^{l}(t) \tag{12}\] According to the SGM, \(\Theta\) is replaced by a differentiable function for gradient calculation during the backward pass. Three network parameters are learned during the training stage: the synaptic weights, the input scale factor, and the output scale factor. The gradients in a layer \(d\) w.r.t. these parameters and the quantization function can be computed as shown below: \[\frac{\partial L}{\partial W^{d}}=\frac{\partial L}{\partial W_{Q}^{d}}\cdot\sum_{i=1}^{n}\frac{T\beta}{\alpha s_{i}}g_{d}^{i}(\alpha s_{i}-g_{d}^{i}) \tag{13}\] \[\frac{\partial L}{\partial\alpha}=\sum_{d=1}^{D}\frac{\partial L}{\partial W_{Q}^{d}}\cdot\frac{W_{Q}^{d}}{\alpha} \tag{14}\] \[\frac{\partial L}{\partial\beta}=\sum_{d=1}^{D}\frac{\partial L}{\partial W_{Q}^{d}}\cdot\sum_{i=1}^{n}\frac{TW_{Q}^{d}}{\alpha s_{i}}g_{d}^{i}(\alpha s_{i}-g_{d}^{i}) \tag{15}\] where \(g_{d}^{i}=\sigma(T(\beta W_{Q}^{d}-b_{i}))\).

## III Experimental Setup

We built a platform for SNN quantization using PyTorch. We utilized the SpikingJelly [25] package for working with GPU-accelerated spiking neuron models. In this platform, one can set different quantization levels, choose the temperature rate, and control the simulation hyperparameters.

### _Datasets_

The performance of the SNN models is evaluated on four widely used datasets: CIFAR10-DVS, DVS128 Gesture, N-Caltech101, and N-MNIST. The event-to-frame integrating method [2] is used to pre-process these datasets, which are initially represented in the address event representation (AER) format. This method splits the event data into slices and integrates each into a single frame. The event data is sliced into ten frames for all datasets, representing \(T=10\) timesteps.
* CIFAR10-DVS dataset consists of 10,000 samples corresponding to 10 classes, with 1,000 samples in each class [26]. Since there are no direct train and test splits, this dataset is divided randomly into train and test sets in an 80/20 ratio.
* DVS128 Gesture dataset consists of 11 hand gestures collected from 29 subjects under three different illuminations [10]. The dataset comes with train and test splits in the ratio of 80/20 (1,176/288 samples).
* N-Caltech101 dataset is a spiking version of Caltech101, consisting of 101 classes [27]. In this work, the dataset is split randomly into train and test sets in the ratio of 80/20 (7,000/1,709 samples).
* N-MNIST is a spiking version of the MNIST dataset [27]. Like the original dataset, it consists of 60,000 training samples and 10,000 testing samples corresponding to 10 classes.

### _Network_

We evaluate the performance of the quantization method described earlier by building the SNN model formulated by [2]. This SNN model consists of a spiking encoder network and a classifier network, as shown in Fig. 1.

Fig. 1: SNN model with a spiking encoder network and a classifier network.

A spiking encoder network is built from a convolutional layer, spiking neurons, and a pooling layer, whereas a classifier network is built from a fully connected dense layer and spiking neurons. Synaptic connections, represented by convolutional Conv2d and fully connected layers, are stateless, while spiking neuron layers have connections in the temporal domain, as depicted in Fig. 1. In the pooling layers, max-pooling (MP) is utilized instead of the commonly used average-pooling (AP). The output of the max-pooling layer is binary spikes, in contrast to the floating-point numbers of average pooling, which, together with quantization, provides the possibility of accelerated computation. Two kinds of experiments are presented to analyze the performance of the presented quantization method. In the first one, a similar SNN model structure is trained on the four mentioned datasets. Five convolutional layers, each followed by a spiking neuron layer and a max-pooling layer, are used for the spiking encoder network. A classifier network is built from two fully connected layers with spiking neurons, followed by a voting layer, which is a simple average-pooling layer with a window size of 10. Piecewise leaky ReLU is used as a surrogate function. The network structures for each dataset can be found in Table I. In the second experiment, the performance of the quantization method is evaluated using SNN models presented in other literature on SNN quantization, namely [11] (DVS128 Gesture) and [13] (N-MNIST). These works are chosen for comparison because they achieve the highest quantized model accuracies. This selection minimizes the influence of the network structure and emphasizes the performance of the quantization method. The corresponding SNN model configurations can be found in Table II. In this work and in [11], the SNN models are trained using SGM, while [13] uses DECOLLE [33].

### _Training_

The SNN models are trained on an Nvidia V100 GPU for 500 epochs. An Adam optimizer with a learning rate of 0.001 is used during training. As a learning rate scheduler, we utilize cosine annealing with the maximum number of iterations \(T_{max}\) set to 64. The membrane time constant, \(\tau\), is set to two for all spiking neurons. To evaluate the performance of the quantized SNN model, we choose four different bit precisions (fixed-point format) to compare with the full precision: 8-bit, 4-bit, 2-bit (ternary), and 1-bit (binary). For the 8-bit precision, we use the quantization levels in the range {-127, 127}, where the gap between two levels is 1, e.g., {-127, -126,..., -1, 0, 1,..., 126, 127}.
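The level sets for all four precisions considered here can be generated with a small helper (ours, for illustration only):

```python
def quantization_levels(bits):
    """Centered, uniformly spaced integer levels: {-127,...,127} for
    8-bit, {-7,...,7} for 4-bit, {-1, 0, 1} for ternary (2-bit) and
    {-1, 1} for binary (1-bit)."""
    if bits == 1:
        return [-1, 1]
    m = 2 ** (bits - 1) - 1
    return list(range(-m, m + 1))

for bits in (8, 4, 2, 1):
    levels = quantization_levels(bits)
    print(bits, len(levels), levels[:3], levels[-3:])
```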
Similarly, for the 4-bit precision, the quantization levels lie in the range {-7, 7} (e.g., {-7, -6,..., -1, 0, 1,..., 6, 7}); for the ternary quantization there are three levels {-1, 0, 1}; and for the binary case there are two levels {-1, 1}. As can be seen, the quantization levels are chosen to be centered at 0 for all bit precisions. The borders between two adjacent quantization levels are placed precisely in the middle. It is worth noting that these quantization levels and borders can also be placed non-uniformly. The temperature is initially set to T = 1, incrementing by 2 in each subsequent epoch. Quantized SNN models are evaluated using accuracy and memory savings metrics and compared with their full-precision counterparts. Memory savings are defined by the compression ratio of the quantized models with respect to the full-precision models. The comparison with other state-of-the-art SNN models includes full and reduced bit precision works available in the literature.

## IV Results and Discussions

### _Comparison with prior works_

Table III illustrates how the studied method compares with other full-precision and quantized SNN implementations. It shows a summary of the learning rules and quantization methodology used in recent research works and highlights the bit precision, memory size, and final test accuracy.

#### IV-A1 DVS128 Gesture

For the DVS128 Gesture dataset, three quantization methodologies are available in the literature. Two of them reduce the bit precision to 2 bits, while the third binarizes the model. The full-precision model in this work shows higher accuracy than the models reported in the literature. Also, our binarized model outperforms the 2-bit [10, 12] and 1-bit [11] cases shown in the table. Applying the proposed quantization method to the SNN model from [11] allows for evaluating its performance based on accuracy. Table IV presents the accuracy results corresponding to 1-bit and 32-bit precisions. In this work, the accuracies of both the full-precision and 1-bit quantized SNN models are higher than the ones reported in [11], by \(1.44\%\) and \(1.5\%\), respectively.

#### IV-A2 N-MNIST

As seen in Table III, our full-precision and binarized SNN models show competitive performance compared to other full-precision and 4-bit quantized models. Only [28] with its 32-bit SNN model achieves slightly higher accuracy than our binarized model. The accuracy comparison between the proposed method and [13] is shown in Table V, where the SNN network structure is identical. Our binarized and 4-bit precision models outperform the 4-bit one from [13] by \(18.33\%\) and \(18.45\%\), respectively. At the same time, our full-precision model shows an accuracy that is \(1.53\%\) higher than the one reported in [13]. It is worth mentioning that although the network structure is identical, the SNN training methods and selected optimization techniques are different in our work, which can explain the significant difference in the obtained results.

#### IV-A3 CIFAR10-DVS

No quantized SNN model can be found in the literature for the CIFAR10-DVS dataset. Hence, the performance of the proposed quantized models is compared with other full-precision models. Both the 32-bit and 1-bit models in this work significantly outperform other works in terms of accuracy. The binarized model, while requiring an almost 32 times smaller model size, still performs better than the full-precision models with large model sizes reported in other publications.
#### IV-A4 N-Caltech101

Since quantized models are not available in the case of the N-Caltech101 dataset, the models reported in this work are compared with other full-precision models in the literature. While the accuracy of our full-precision model is the highest, our binarized model has slightly lower accuracy than the full-precision model in [31]. However, there is a substantial difference in memory size between our binarized model and the other full-precision models, while the accuracy remains comparable.

Fig. 2: Test accuracies of SNN models for each dataset.

Fig. 3: Accuracy drop and compression ratio for quantized SNN models.

### _Accuracy - bit-precision trade-off_

Fig. 2 illustrates the final test accuracy obtained for a particular combination of bit-precision value and dataset. The blue bar corresponds to the non-quantized, full-precision model and serves as the baseline against which the quantized models are compared. Overall, the proposed quantization method provides similar performance for different classification tasks. Quantization has more impact on SNN performance for more complex tasks, such as CIFAR10-DVS and N-Caltech101. Quantization of the SNN model for the CIFAR10-DVS dataset yields the highest degradation in accuracy compared to the other datasets. While the accuracy drop (1.78%) for the 8-bit model is not dramatic, the performance degradation for lower-bit quantized models is more evident, with accuracy drops of 7.98%, 7.58%, and 8.03% for 4, 2, and 1-bit precisions respectively. A possible reason for such degradation is the complex nature of the dataset. The accuracy drop for the DVS128 Gesture dataset is much lower than for the CIFAR10-DVS dataset. The 8-bit quantized model achieves the lowest accuracy degradation (0.27%). For the 4-bit quantization, the model experiences a 0.81% drop in accuracy compared to the full-precision model. For the 2-bit and 1-bit quantization, the accuracy drop is slightly higher than 1% (1.12% and 1.18%, respectively). Hence, for this dataset, the model can be quantized down to 1 bit with an accuracy degradation of around 1%. For the N-Caltech101 dataset, on the other hand, the binarized SNN model has a more significant accuracy drop compared to the other bit-precision models. While the 8-bit, 4-bit, and 2-bit quantized SNN models have 1.07%, 1.39%, and 1.8% accuracy drops, the 1-bit model has a 3.47% accuracy degradation compared to its full-precision counterpart. For the N-MNIST dataset, the accuracy degradation is the lowest among the presented datasets. The accuracy drop is negligible and fluctuates between different bit precisions; a drop in the \(<0.2\%\) range is achieved compared to the full-precision model.

### _Accuracy drop - memory savings trade-off_

The model size is calculated by multiplying the total number of parameters of the SNN model by the corresponding bit precision. The compression ratio (CR) is defined as the ratio of the full-precision model size to the quantized bit-precision model size. The CR shows by how much the model can be reduced when quantized to a particular bit precision compared to the full-precision baseline.
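A minimal sketch of these two metrics (ours, for illustration; the ratios reported below fall slightly under these ideal values, e.g. close to 31\(\times\) rather than 32\(\times\) for binary models):

```python
def model_size_bits(n_params, bits):
    """Model size: total number of parameters times the bit precision."""
    return n_params * bits

def compression_ratio(bits, full_bits=32):
    """CR: full-precision size over quantized size, for equal n_params."""
    return full_bits / bits

for bits in (8, 4, 2, 1):
    print(f"{bits}-bit: CR = {compression_ratio(bits):.0f}x")  # 4x, 8x, 16x, 32x
```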
Fig. 3 shows the performance degradation incurred to achieve a particular model size reduction. For the CIFAR10-DVS dataset, 8-bit quantization can reduce the model size by four times while keeping the accuracy degradation within 2% of the baseline. For the DVS128 Gesture dataset, the most significant compression ratio, close to 31\(\times\), comes with an accuracy drop slightly larger than 1%. For the N-Caltech101 dataset, the SNN model can be reduced by 16\(\times\) with a test accuracy degradation of 2%. Finally, for the N-MNIST dataset, the SNN model shows the best accuracy-memory savings trade-off, having only a 0.2% accuracy drop with a near 32-fold model size reduction.

### _Effect of temperature rate on convergence_

As discussed in Section II, the temperature parameter \(T\) controls the difference between the sigmoid functions used during training, Eq. (9), and the step functions used in inference, Eq. (8). To investigate the effect of different temperature rates during training on convergence, we compare train and test accuracy for several values of \(T\) at binary precision, as it is the most challenging. As similar results are obtained for the different datasets, we only include the results for one dataset. Fig. 4 shows the convergence of the binarized SNN model under three \(T\) rates for the DVS128 Gesture dataset. Train and test accuracies are monitored for 200 epochs with \(T=(5/10/20)\times\) epoch number. These increments are represented graphically by the green lines: the steeper the line, the higher the temperature increase rate. As can be seen, the convergence of the binarized SNN model is stable with respect to the rate of \(T\); thus, any reasonable value of the rate has a negligible effect on accuracy.

Fig. 4: Train and test accuracies of the binarized SNN model for different rates of temperature increase on the DVS128 Gesture dataset.

## V Conclusion

In this work, we proposed an efficient method for the quantization of SNNs while focusing on the memory savings - accuracy drop trade-off. Using the differentiable quantization function, we can reduce the number of approximations for gradient computation when the SGM learning rule is employed. The obtained results demonstrate that the proposed differentiable quantization function outperforms prior works based on rounding and threshold-adjusting techniques. The following results can be achieved using the linear combination of sigmoid functions as the quantization function: binarized SNN models trained on CIFAR10-DVS, DVS128 Gesture, N-Caltech101, and N-MNIST show accuracy drops of 8.03%, 1.18%, 3.47%, and 0.17%, respectively (as compared with their full-precision counterparts), while providing up to 31\(\times\) memory savings.

## VI Acknowledgment

This work was supported by The King Abdullah University of Science and Technology under the award ORA-2021-CRG10-4704.
2305.11805
PANNA 2.0: Efficient neural network interatomic potentials and new architectures
We present the latest release of PANNA 2.0 (Properties from Artificial Neural Network Architectures), a code for the generation of neural network interatomic potentials based on local atomic descriptors and multilayer perceptrons. Built on a new back end, this new release of PANNA features improved tools for customizing and monitoring network training, better GPU support including a fast descriptor calculator, new plugins for external codes and a new architecture for the inclusion of long-range electrostatic interactions through a variational charge equilibration scheme. We present an overview of the main features of the new code, and several benchmarks comparing the accuracy of PANNA models to the state of the art, on commonly used benchmarks as well as richer datasets.
Franco Pellegrini, Ruggero Lot, Yusuf Shaidu, Emine Küçükbenli
2023-05-19T16:41:59Z
http://arxiv.org/abs/2305.11805v1
# PANNA 2.0: Efficient neural network interatomic potentials and new architectures

###### Abstract

We present the latest release of PANNA 2.0 (Properties from Artificial Neural Network Architectures), a code for the generation of neural network interatomic potentials based on local atomic descriptors and multilayer perceptrons. Built on a new back end, this new release of PANNA features improved tools for customizing and monitoring network training, better GPU support including a fast descriptor calculator, new plugins for external codes and a new architecture for the inclusion of long-range electrostatic interactions through a variational charge equilibration scheme. We present an overview of the main features of the new code, and several benchmarks comparing the accuracy of PANNA models to the state of the art, on commonly used benchmarks as well as richer datasets.

## I Introduction

In recent years, machine learning (ML) based approaches have been successfully applied to numerous problems, spanning from image and natural language processing to many areas of physics. Within the field of atomistic simulations, several approaches have been presented to leverage ML for the accurate prediction of molecular and material properties. In particular, one of the main goals has been the fast computation of energies and forces, leading to the creation of ML-based interatomic potentials (MLIPs), able to achieve the accuracy of _ab initio_ methods on selected systems, for a fraction of the cost. While we refer the reader to the many available reviews [1; 2; 3] for an exhaustive presentation of ML approaches in materials science, we will briefly present here the main flavors of MLIPs present in the literature, to provide a context for the implementations within PANNA. Most MLIPs are based on two approximations: i) the possibility to write the total energy of a system as a sum of atomic contributions, ii) spatial locality. This makes it possible to roughly break the problem down into two parts: defining--or learning--a description of a local atomic environment, and learning a function to map the descriptor to the local energy. The requirement for invariance with respect to translations, rotations, and permutations of the atoms is enforced exactly by either invariant descriptors or equivariant network architectures. The earlier methods in the field to describe the local environment relied on fixed descriptors: e.g. Behler-Parrinello [4; 5] (BP) type descriptors sample the two- and three-body distribution function with local sampling functions, while the Smooth Overlap of Atomic Positions [6] (SOAP) relies on spherical harmonics to obtain a rotationally invariant description of a power of the smoothed atomic density. In these cases, ML was limited to the mapping of descriptors to atomic quantities, which relied, for example, on multilayer perceptrons [4; 5] (MLPs), or kernel methods, as in the case of the Gaussian approximation potential [7] (GAP). These and similar approaches have been shown to achieve chemical accuracy in a host of different systems [8; 9; 10; 11], typically given a ground truth of a few thousand configurations to train on. A limit to the generalization capacity for a given number of training points, however, is related to the architectural bias of the approach, depending both on the ML model and the descriptors.
Indeed, more advanced descriptors like the Atomic Cluster Expansion [12] (ACE) were shown to obtain lower generalization errors with the same amount of data, even when the fitting was done through a simple linear model [13]. In search of a better architectural bias, more advanced message passing [14] (MP), interaction layers [15], continuous filter convolution [16], or graph neural networks (GNN) were introduced: some using vectors, angles and other geometric information to define the node functions [17; 18; 19] and some promoting the states of the networks themselves to equivariant entities based on vectors [20] or a basis of spherical irreducible representations [21; 22; 23]. While the distinction between (learned) descriptor and ML model becomes blurred in these cases [24], it has been clearly shown that the bias imposed by these architectures can lead to better generalization accuracy with the same amount of data. However, several layers of message passing can lead to very large effective receptive fields for each atom, and even when this can be avoided, each layer typically involves the use of several MLPs, leading to a larger overall computational cost. While the scaling with respect to _ab initio_ is still favorable, this increased cost renders the earlier MLP approaches still valuable, especially for applications where sufficient data can be generated. In this work, we present a new implementation of PANNA [25] (Properties from Artificial Neural Network Architectures), version 2.0, a package for the creation and deployment of MLIPs based on local descriptors and multilayer perceptrons. Several packages have been proposed to train this type of network, e.g. DeepMD [26], AENet [27], AMP [28], TorchANI [29], and SIMPLE-NN [30]. These packages are written in different languages (FORTRAN, Python), over different back ends (TensorFlow, PyTorch), and while some of them are based on input files, others provide an API to be called from user-written code. They are all based on atomic MLPs, but they support different descriptors, from the original BP [4] to modified versions [5; 31], and they support different training features, network customization, learning schedules, ensemble approaches and so on. With this latest version of PANNA we hope to enrich this landscape, where variety allows more options to be explored and more needs to be met. With respect to the previous version, the PANNA suite has been entirely rewritten to be compatible with the newest versions (2.x) of the TensorFlow [32] framework. While supporting all the features of the earlier version, the code has been optimized to run on GPU, and it supports new features, such as the computation of descriptors during training, and a new architecture to handle long range electrostatic interactions. PANNA is written in Python, and it can be simply run by supplying appropriate input configuration files. It includes several tools to customize and monitor the training, both through a graphical interface and from the command line, as well as tools to import and export data from and to different external codes. Finally, PANNA models can be exported to run molecular dynamics (MD) directly in popular packages such as LAMMPS [33] and ASE [34], or to even more codes through an interface with OpenKIM [35]. The PANNA code is released under an MIT license, and it can be downloaded at Ref. [36]. Thorough documentation, including a list of all input file keywords and several tutorials on how to run different example cases, is available at Ref. [37].
In the following, we will present the main features of the code and the underlying theory in Sec. II, and we will report benchmarks on accuracy on different systems, speed and data scaling in Sec. III. ## II The implementation The core of PANNA 2.0 is based on the creation of fixed-size atomic descriptors as inputs to MLPs for the computation of atomic energies, summing to the total energy of a system. Distinct architectures can be defined for each atomic species, and weights are shared between all atoms of the same species. The training procedure consists in optimizing the MLP parameters to match the energy, and forces as its derivatives, on known configurations. This optimization is performed by minimizing a _loss function_ of the error, through stochastic gradient descent on small sets of examples known as _mini-batches_. In the next sections, we will highlight the options available in PANNA 2.0 for each step of this training procedure, and we will discuss specifically a new architecture that models long-range electrostatic interactions. ### General structure A typical MLIP training pipeline starts with the reference energies and forces being computed with density functional theory (DFT) or some other reference approach. In PANNA, we offer tools to convert the output of codes such as Quantum ESPRESSO [38], VASP [39], USPEX [40] and LAMMPS [33] to a simple human readable format. This format is completely documented, so that a user can easily create a new converter from a different code. In the next step of the pipeline, features or descriptors need to be computed for each atom. PANNA offers two ways of computing the descriptors: they can either be precomputed for the whole dataset, or they can be computed during training on-the-fly. The first option is computationally advantageous as examples are typically reused multiple times throughout the training, and since the descriptor is fixed it is possible to create it once and for all. This can however pose the problem of storing a large amount of data (especially when the derivatives of the descriptor are needed to compute the forces), and reading the data multiple times from storage (if they do not fit in the working memory). For this reason, we also offer the second option which, while more computationally expensive, makes it feasible to train on very large datasets (see Sec. III.2). This option is also convenient for performing quick training cycles with various descriptor types and shapes for testing purposes without having to read/write large files. PANNA natively includes routines to compute the standard BP descriptor [4] and a modified version (mBP) as detailed in the previous version of PANNA [25]. For precomputed descriptors, the binary format used for storage is carefully documented such that descriptors computed with external routines can be adapted to the PANNA pipeline. PANNA currently implements MLP type networks. The general equation for the architecture is as follows: \[a_{i}^{l}=\sigma\left(\sum_{j=1}^{n_{l-1}}w_{ij}^{l}a_{j}^{l-1}+b_{i}^{l} \right), \tag{1}\] where \(a_{i}\) is a node of layer \(l\), \(w\) and \(b\) are weights and biases--the parameters of the network--\(\sigma\) is a nonlinear function, and \(n_{l}\) is the number of nodes in layer \(l\) (input is considered as layer 0). In PANNA, users can easily define the desired architecture, on a per-species basis, by specifying the size of the layers. The last layer is typically of size one for a single output for energy, but see Sec. II.2 for a different case. 
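To fix ideas, a minimal framework-agnostic sketch of this architecture follows. The shapes, names and random parameters are illustrative only and do not reflect PANNA's internal API:

```python
import numpy as np

def atomic_energy(g, weights, biases, sigma=np.tanh):
    """Eq. (1) applied layer by layer: descriptor g -> scalar atomic
    energy, with a linear single-node output layer."""
    a = g
    for w, b in zip(weights[:-1], biases[:-1]):
        a = sigma(w @ a + b)
    return (weights[-1] @ a + biases[-1]).item()

def total_energy(descriptors, species, nets):
    """Atomic contributions summed to the total energy; parameters are
    shared among all atoms of the same species."""
    return sum(atomic_energy(g, *nets[s]) for g, s in zip(descriptors, species))

# Illustrative shapes: descriptor of size 152, hidden layers of 64 and 32.
rng = np.random.default_rng(0)
shapes = [(64, 152), (32, 64), (1, 32)]
nets = {"C": ([0.1 * rng.standard_normal(s) for s in shapes],
              [np.zeros(s[0]) for s in shapes])}
descriptors = rng.standard_normal((3, 152))
print(total_energy(descriptors, ["C", "C", "C"], nets))
```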
The activation function \(\sigma\) can be chosen for each layer as Gaussian, rectified linear unit (ReLU), or hyperbolic tangent, besides the linear function, which is typically used for the output. Since PANNA is built on TensorFlow, the supported activation functions can be easily extended, if desired, to the vast list supported by this framework. The remaining elements of a training pipeline are the loss and learning schedule. The loss function in PANNA is made up of contributions coming from the energy and force errors, with an adjustable relative factor. For both, users can choose between a quadratic and exponential function of the difference between the computed and expected values, and whether to consider per configuration or per atom quantities. A further regularization term can be added to the loss function as the sum of the absolute value (L1) or square (L2) of all weights, with a chosen prefactor. The optimization of weights and biases, i.e. training, is finally performed on mini-batches of chosen size, modifying \(w\) and \(b\) according to the gradient of the loss through the Adam [41] algorithm, with a learning rate that can be chosen as constant or exponentially decaying. As in the case of activation functions, the TensorFlow back end provides PANNA with multiple options for optimizers. Additionally, freezing the weights of selected layers for selected species is allowed to facilitate fine tuning or transfer learning studies. During training, several tools within PANNA can be used to monitor the progress of the optimization. From the command line, users can decide to monitor the loss components or a figure of merit such as the root mean square error (RMSE) or mean absolute error (MAE). TensorFlow provides a graphical interface, TensorBoard, a browser-based visualization tool. TensorBoard allows PANNA users to visualize loss components as well as other figures of merit, along with the evolution of the distribution of the weights and biases throughout the training. Moreover, once per epoch (or at a chosen frequency) the model can also be automatically evaluated on a validation set on-the-fly, to keep track of the generalization capacity or to decide on early stopping. After the model is trained, it can be stored as a checkpoint, and PANNA's inference tool can be used to assess its performance on a test set. The model can also be exported to a format usable in external MD codes, e.g. in LAMMPS [33] thanks to the plugin included in PANNA (now improved with OpenMP parallelization), or with many other MD packages supported via OpenKIM [35]. Alternatively, the internal checkpoint format can be imported in ASE [34] through the calculator included in PANNA. The performance of this new plugin is tested in Sec. III.2. Extension of PANNA potentials for modern MD packages such as the differentiable JAX-MD [42] will be supported in the next version.

### Long range interactions

PANNA 2.0 supports a new method to address long-range electrostatic interactions. Most MLIP schemes rely on a locality approximation. While this is often safe to do in neutral systems, and might work for shielded charges, it is bound to fail when electrostatics plays a role in a range larger than the effective cutoff radius or in charged systems. In recent years, to address this challenge, methods that couple a local network predicting atomic electronegativity with a system-wide charge equilibration scheme have been proposed [43; 44; 45; 46]. Ref. [43] only deals with the electrostatic part, and Ref. [44] proposes to employ two different networks (one dependent on the other). The implementation within PANNA is based on Ref. [46] and relies on a single network to predict the coefficients for a Taylor expansion of the energy in local charges, up to the second order. This allows PANNA to compute the total energy, including electrostatics, by evaluating a single network and solving a linear charge equilibration system. We leave a more in-depth theoretical exposition of this approach to be presented elsewhere [46] and focus on the implementation changes it brings. In this approach there are three different outputs for each atom: the coefficients of the Taylor expansion corresponding to the local energy, electronegativity and atomic hardness (or their corrections with respect to a reference). These outputs are fed to a newly introduced network layer where the charge equilibration optimization takes place. The final outcome is the predicted local charges and the total energy. While the ground-truth local charge information can be used during training, it is not strictly necessary. This allows the use of many publicly available datasets where no local charge decomposition information has been stored. Interestingly, we find that the absence of reference local atomic charges (an approximate partition in many cases) can even improve the ability of the network to predict total energies and forces (see Sec. III.3).
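As a toy illustration of such an equilibration step -- our own sketch, which omits the periodic electrostatics and the local energy term handled by the actual PANNA layer -- the constrained minimization reduces to a single linear solve:

```python
import numpy as np

def charge_equilibration(chi, eta, A, Q_tot):
    """Minimize E(q) = chi.q + 0.5*q.(diag(eta)+A).q subject to
    sum(q) = Q_tot via a Lagrange multiplier.  chi are the per-atom
    electronegativities and eta the hardnesses (network outputs);
    A is the Coulomb interaction matrix."""
    n = len(chi)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A + np.diag(eta)
    M[:n, n] = 1.0  # Lagrange multiplier column
    M[n, :n] = 1.0  # total-charge constraint row
    rhs = np.concatenate([-chi, [Q_tot]])
    sol = np.linalg.solve(M, rhs)
    return sol[:n]  # equilibrated charges; sol[n] is the multiplier

# Toy three-atom cluster with total charge +1 (all values made up).
chi = np.array([-2.0, 3.0, -2.0])
eta = np.array([4.0, 5.0, 4.0])
A = 0.1 * (np.ones((3, 3)) - np.eye(3))
q = charge_equilibration(chi, eta, A, Q_tot=1.0)
print(q, q.sum())  # the charges sum to +1 by construction
```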
## III Benchmarks

### rMD17

The rMD17 benchmark set consists of configurations of 10 small organic molecules, with energies and forces computed in DFT with a tight convergence threshold [47]. In recent years it has been commonly used to benchmark the data efficiency of MLIPs, specifically restricting the training set to a budget of 1000 randomly selected configurations per molecule. While this has shown the high data efficiency of equivariant GNNs, the only typically reported MLP with BP type descriptors (ANI) seemed to fall considerably behind. We demonstrate here the performance of PANNA with an equivalent BP type network for comparison. The computational details are as follows: an mBP type descriptor [25] with a maximum cutoff of 5 Å was used with 24 radial bins. For the angular part, 8 bins for the average radius and 8 for the angles, i.e. a total of 64 angular bins, were used. Considering 2, 3 or 4 species, this resulted in descriptors of size 240, 456, and 736, respectively. We then trained networks with 2 hidden layers of size 256 and 128, training for \(10^{6}\) steps with learning rate \(10^{-4}\), and then reducing it to \(10^{-6}\) over a further \(10^{6}\) steps. We employed a quadratic loss with a force cost of 1 and a small \(10^{-5}\) L1 regularization. The validation MAE in energy and force components is reported in Table 1, alongside the results of ANI and the state of the art selected from the literature, considering all architectures including kernel methods. While the supremacy of equivariant GNNs remains unreachable for this kind of network (all SOTA values are found to be from various equivariant GNNs), the errors from PANNA are considerably lower than those of ANI for almost all molecules. This is particularly interesting as the most performant ANI networks reported thus far in these benchmarks were obtained by retraining a pretrained network ("ANI (pre)" in Table 1) for increased accuracy [13].
Our results show that even for the same method and data, the differences in training and implementation can have a major impact on the final model quality, strengthening the argument that well-written and well-documented MLIP generation packages are needed to reach a consistent quality of applications in the literature.

### Dataset scaling: Carbon

To further assess the generalization capacity of our model as a function of the size of the dataset, we consider a more challenging problem: a dataset with more than 60000 configurations of various allotropes of Carbon, created in a recent study using an evolutionary algorithm and the previous version of PANNA [9]. The dataset consists mostly of configurations with 16 or 24 Carbon atoms, and a few larger (200 atoms) configurations; it includes configurations under high pressure, snapshots of high temperature MD and highly defected configurations (see Ref. [9] for a complete description and construction procedure). We split the dataset into 50000 randomly chosen configurations for training and we save the rest for validation. In order to generate well sampled training datasets of different sizes, we employ a farthest point clustering algorithm: we consider the cosine fingerprint distance as defined in Ref. [9]; we then start from a set containing a single configuration (the lowest in energy) and progressively add the configuration that is farthest from the current set. In this way we generate datasets ranging from 100 to the whole 50000 configurations. We sample 1000 configurations from the validation set with the same approach. For PANNA, we employ an mBP descriptor with a 5 Å cutoff, 24 radial bins for the two-body term and 8 radial and 16 angular bins for the three-body term, for a total size of 152.

\begin{table} \begin{tabular}{l l l l l|l} \hline \hline & \multicolumn{2}{c}{PANNA} & \multicolumn{1}{c}{ANI (pre)} & \multicolumn{1}{c|}{ANI (rand)} & \multicolumn{1}{c}{SOTA} \\ \hline \hline **Aspirin** & E & 10.6 & 16.6 & 25.4 & 2.2 [22] \\ & F & 32.9 & 40.6 & 75.0 & 6.6 \\ \hline **Azobenzene** & E & 5.8 & 15.9 & 19.0 & 1.2 [23] \\ & F & 18.4 & 35.4 & 52.1 & 2.6 \\ \hline **Benzene** & E & 1.0 & 3.3 & 3.4 & 0.3 [23] \\ & F & 5.4 & 10.0 & 17.4 & 0.2 \\ \hline **Ethanol** & E & 2.9 & 2.5 & 7.7 & 0.4 [22; 23] \\ & F & 16.5 & 13.4 & 45.6 & 2.1 \\ \hline **Malonaldehyde** & E & 4.0 & 4.6 & 9.4 & 0.6 [23] \\ & F & 24.3 & 24.5 & 52.4 & 3.6 \\ \hline **Naphthalene** & E & 3.0 & 11.3 & 16.0 & 0.2 [23] \\ & F & 13.2 & 29.2 & 52.2 & 0.9 \\ \hline **Paracetamol** & E & 6.3 & 11.5 & 18.2 & 1.3 [22] \\ & F & 22.0 & 30.4 & 63.3 & 4.8 \\ \hline **Salicylic acid** & E & 4.1 & 9.2 & 13.5 & 0.9 [23] \\ & F & 19.4 & 29.7 & 53.0 & 2.9 \\ \hline **Toluene** & E & 3.9 & 7.7 & 12.6 & 0.5 [22] \\ & F & 15.9 & 24.3 & 52.9 & 1.5 \\ \hline **Uracil** & E & 2.4 & 5.1 & 8.4 & 0.6 [23] \\ & F & 13.7 & 21.4 & 44.1 & 1.8 \\ \hline \hline \end{tabular} \end{table} Table 1: Mean absolute error in energy (meV on the whole molecule) and forces (meV/Å per component) of different models trained on 1000 configurations from each molecule in the rMD17 dataset [47]. The ANI results are taken from Ref. [13], where ANI was either trained from scratch, column “ANI (rand)”, or starting from a pretrained model, column “ANI (pre)”. In the last column we report the state of the art (SOTA), i.e. the best result found for any model, giving priority to the force error, and the respective reference.

Figure 1: Scaling of the mean absolute error in energy per atom (top) and forces per component (bottom) for different models, as a function of the size of the training dataset. See the main text for further details on the training, including the definition of the three architectures PANNA small, mid and big.
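A minimal sketch of the farthest point selection described above (ours, for illustration, using a generic cosine distance between fingerprint vectors; the precise fingerprint definition is that of Ref. [9]):

```python
import numpy as np

def farthest_point_subset(F, n, start=0):
    """Greedy farthest-point selection: F holds one fingerprint per
    configuration; `start` is the seed (e.g. the lowest-energy one)."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    chosen = [start]
    d = 1.0 - Fn @ Fn[start]                   # cosine distance to the seed
    for _ in range(n - 1):
        nxt = int(np.argmax(d))                # farthest from the chosen set
        chosen.append(nxt)
        d = np.minimum(d, 1.0 - Fn @ Fn[nxt])  # distance to nearest chosen point
    return chosen

F = np.random.default_rng(1).random((1000, 64))
print(farthest_point_subset(F, 5))
```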
We train 3 different networks: a small one with two layers of sizes 64 and 32, a middle one with two layers of 256 and 128 nodes, and a large one with three layers of sizes 1024, 512 and 256. Networks are trained on batches of 10 examples with a starting learning rate of \(10^{-4}\) for a number of steps ranging from 100000 for the smallest dataset to 6 million for the largest one, after which a further quench to a learning rate of \(10^{-6}\) for 1 million steps is performed for all datasets larger than 1000 points. As reference state-of-the-art models we consider NequIP [21] and MACE [22]. We train both on all datasets relying on default parameters as needed: for NequIP we consider two models, with \(\ell=1\) and \(\ell=2\), both with 4 interaction blocks, 32 features and a radial network with 8 basis functions, and 2 hidden layers of size 64. For MACE we use the standard setup with 128 even scalars and 128 odd \(\ell=1\) irreps. We use the same cutoff of 5 Å for all models, although the effective receptive field will be larger depending on the number of layers of the GNNs. For all networks we train until loss convergence. An important remark needs to be made about the training dynamics: there is an apparent trade-off between the energy and force components of the loss, especially close to convergence and for very large datasets. To tackle this, the MACE training schedule implements stochastic weight averaging (SWA) [48] and increases the energy weight in the loss for the last 20% of the training. We find that in the case of NequIP and PANNA, where a standard non-averaged optimizer such as Adam [41] with fixed energy and force weighting is used, the energy loss decreases only minimally even when its relative weight in the loss is increased, even in long training scenarios, suggesting that SWA and energy re-weighting during training can be valuable. Overall we observe the force error to be more stable for all networks (given also the larger number of force data), and because the trade-off is hard to quantify for each model, we focus more on the force error during the analysis. Without significant hyperparameter tuning we do not expect these to be the best possible networks (including the PANNA ones), yet they are informative of a typical user experience. Fig. 1 shows the MAE in energy per atom and in forces per component. Overall, equivariant models, especially with high \(\ell\) orders, perform better. As mentioned, the failure to improve the energy error for the larger datasets is visible for PANNA and NequIP. Among the PANNA models, we can see that all models obtain similar results for small datasets; as the dataset becomes larger, the smaller model seems to reach its capacity and its performance drops. Considering that MLP architectures such as BP networks are much more computationally affordable, due to the simple underlying tensor algebra compared to the irrep algebra of equivariant GNNs, it would be desirable to find a strategy to overcome their data inefficiency. Here we show a potential workaround with a data augmentation experiment. Starting with the largest training dataset, for each example we create 10 copies by perturbing the atomic positions randomly with a small Gaussian noise of standard deviation 0.075 Å.
Lastly, we consider the computational performance of these potentials when used for inference: we take one of the configurations from the dataset with 16 atoms and perform 1000 steps of Langevin dynamics at a temperature of 300 K with ASE[34] on an A100 GPU, discarding the first few steps, which typically require extra setup time not representative of the speed of the codes. To judge the natural scalability of the different algorithms we refrain from using any specialized optimization techniques, such as the CUDA-based featurization library implemented in TorchANI[29] for BP networks, as similar ones for the tensor products of the equivariant GNNs are not yet widely available. Table 2 reports the time per step per atom of each code, as an average of 5 repetitions. We note that due to the small system size, these values should be taken as upper bounds, since GPU utilization is far from optimal for these system sizes. Hardware-specific strategies such as multi-instance or multi-process (MIG or MPS) GPU features can potentially bring significant improvements to GPU use. These and further optimization opportunities of BP networks will be employed in future versions of PANNA. Nevertheless, without specific strategies, a raw comparison of the algorithms on similar grounds shows that PANNA is consistently faster, as expected from a simpler architecture. It is noteworthy that larger PANNA models are not slower, as the computational bottleneck is in the calculation of the descriptor; hence, the CUDA acceleration libraries mentioned earlier, or descriptors with a lighter computational load such as ACE[12], can bring further speedup. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline & PANNA & PANNA & PANNA & NequIP & NequIP & MACE \\ & small & middle & big & \(\ell=1\) & \(\ell=2\) & \\ \hline Time [ms] & 0.78 & 0.79 & 0.79 & 3.32 & 4.88 & 3.09 \\ \hline \hline \end{tabular} \end{table} Table 2: Time per step per atom to run a Langevin MD on a small Carbon cell with different codes, invoked through ASE on GPU.
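The timing protocol behind Table 2 can be reproduced with a short ASE script; a sketch, assuming `atoms` is an `Atoms` object and `calc` is a calculator from any of the codes (the warm-up discard and averaging mirror the text; the friction value is an illustrative choice of ours):

```python
import time
from ase import units
from ase.md.langevin import Langevin

def time_md(atoms, calc, n_steps=1000, n_warmup=10, temperature_K=300):
    """Return the wall time per MD step per atom, excluding warm-up."""
    atoms.calc = calc
    dyn = Langevin(atoms, timestep=1.0 * units.fs,
                   temperature_K=temperature_K, friction=0.02)
    dyn.run(n_warmup)              # discard setup/compilation overhead
    t0 = time.perf_counter()
    dyn.run(n_steps)
    elapsed = time.perf_counter() - t0
    return elapsed / n_steps / len(atoms)
```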
### Long range: NaCl clusters In this section, we demonstrate the long range electrostatic approach implemented in PANNA on charged sodium chloride clusters. The training set is obtained from Ref. [44] and comprises configurations of Na\({}_{9}\)Cl\({}_{8}^{+}\)--shown in Fig. 2a--and Na\({}_{8}\)Cl\({}_{8}^{+}\), obtained by removing the Na atom in the rightmost corner of Fig. 2a. Each cluster has a total charge of +1. Here we compare the accuracy of the long range model within PANNA with that reported in Ref. [44]. The MLIP is constructed with mBP atomic environment descriptors of size 45, two hidden layers each with 15 nodes, and an output layer with 3 nodes. As explained in Sec. II.2, in PANNA reference charges can either be used as an extra target in the loss function or omitted: we present here one model with (\(\gamma_{q}>0\)) and one without (\(\gamma_{q}=0\)) this extra loss. We also compare with the PANNA model in the absence of long range electrostatics (SR for short range), with the same architecture but only predicting energy and forces. In Table 3, the RMSE in charges, energy and forces for the PANNA models is compared with the results obtained in Ref. [44] with and without long range interactions, denoted as 4G- and 2G-HDNNP, respectively. The PANNA model with long range electrostatics reaches the lowest RMSE in energy and forces irrespective of the use of atomic charges as a target. It is reassuring, for verification purposes, that as a baseline the PANNA model without electrostatics attains an RMSE in energy and forces similar to 2G-HDNNP [44], which also omits this contribution. \begin{table} \begin{tabular}{l l c c c} \hline \hline Model & & Charge & Energy & Force \\ & & [me] & [meV/atom] & [meV/Å] \\ \hline SR & train & - & 1.6 & 49 \\ & test & - & 1.6 & 50 \\ \hline LR, \(\gamma_{q}>0\) & train & 11.5 & 0.4 & 17 \\ & test & 11.8 & 0.4 & 18 \\ LR, \(\gamma_{q}=0\) & train & 237.3 & 0.3 & 17 \\ & test & 238.9 & 0.3 & 18 \\ \hline 2G & train & - & 1.7 & 58 \\ & test & - & 1.7 & 57 \\ 4G & train & 15.9 & 0.5 & 32 \\ & test & 15.8 & 0.5 & 33 \\ \hline \hline \end{tabular} \end{table} Table 3: Training and validation RMSE of different quantities for the sodium chloride cluster with a total charge of +1, for long range (LR) and short range (SR) models, and for models from Ref. [44] (2G, 4G). We examine the performance on the potential energy surface of these systems in further detail by computing the energy and force acting on a Na atom, indicated by 2 in Fig. 2a, when moved along the arrow depicted in the same figure. Fig. 2(b) shows the force on the Na atom projected along the direction shown by the arrow. The DFT results are obtained from Ref. [44]. As expected, for the PANNA model without electrostatics we obtain trends similar to those reported in Ref. [44] for the 2G-HDNNP, where the equilibrium distances for Na\({}_{8}\)Cl\({}_{8}^{+}\) and Na\({}_{9}\)Cl\({}_{8}^{+}\) are the same. Instead, the model with long range electrostatics accurately reproduces the DFT forces for both Na\({}_{9}\)Cl\({}_{8}^{+}\) and Na\({}_{8}\)Cl\({}_{8}^{+}\), with and without regressing against reference charges. We note that the force as a function of distance is smooth, suggesting stable dynamics and the possibility of obtaining energy differences by integrating the forces if needed. Figure 2: Comparison of energy and forces between MLIPs and DFT. (a) Atomic structure of Na\({}_{9}\)Cl\({}_{8}^{+}\). (b) Projected force on Na atom 1 in the direction of the arrow shown, as a function of the distance between Na atom 1 and 2. ## IV Conclusion We have given a brief overview of PANNA 2.0, the latest version. Besides the support for the new version of the Tensorflow back end--a needed upgrade to run on newer hardware where previous versions are becoming increasingly harder to obtain--this new version features several improvements aimed at simplifying the training procedure for the end user. Removing the need to precompute descriptors simplifies the exploration of new parameters, or training on very large datasets; new figures of merit and validation on-the-fly make it easier to monitor the optimization in real time. Importantly, PANNA 2.0 introduces support for long-range electrostatics, which opens the possibility to tackle charged systems that were not accessible before. Moreover, we have shown in a series of benchmarks that while the PANNA models are not as data efficient as the newest equivariant GNN architectures, they can be more accurate than previously reported for similar models, and they do show an accuracy-scaling power law dependence on the size of the dataset that is comparable to some equivariant models. We have also proposed the "knowledge distillation" scheme to employ the more data efficient networks to extend the training set for the less data efficient ones. Paired with fast MD plugins, these results point towards a possibility where simple architectures like PANNA can become the workhorse of large scale simulations, trading minimal accuracy for a faster computation.
We will keep improving PANNA with state-of-the-art optimization techniques, such as CUDA-based featurization libraries, and with support for new descriptors and improved architectures, to move towards making this possibility a reality in materials modeling. ## V Data Availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
2304.01081
FMGNN: Fused Manifold Graph Neural Network
Graph representation learning has been widely studied and demonstrated effectiveness in various graph tasks. Most existing works embed graph data in the Euclidean space, while recent works extend the embedding models to hyperbolic or spherical spaces to achieve better performance on graphs with complex structures, such as hierarchical or ring structures. Fusing the embedding from different manifolds can further take advantage of the embedding capabilities over different graph structures. However, existing embedding fusion methods mostly focus on concatenating or summing up the output embeddings, without considering interacting and aligning the embeddings of the same vertices on different manifolds, which can lead to distortion and imprecision in the final fusion results. Besides, it is also challenging to fuse the embeddings of the same vertices from different coordinate systems. In the face of these challenges, we propose the Fused Manifold Graph Neural Network (FMGNN), a novel GNN architecture that embeds graphs into different Riemannian manifolds with interaction and alignment among these manifolds during training and fuses the vertex embeddings through the distances on different manifolds between vertices and selected landmarks, geometric coresets. Our experiments demonstrate that FMGNN yields superior performance over strong baselines on the benchmarks of node classification and link prediction tasks.
Cheng Deng, Fan Xu, Jiaxing Ding, Luoyi Fu, Weinan Zhang, Xinbing Wang
2023-04-03T15:38:53Z
http://arxiv.org/abs/2304.01081v1
# FMGNN: Fused Manifold Graph Neural Network ###### Abstract. Graph representation learning has been widely studied and demonstrated effectiveness in various graph tasks. Most existing works embed graph data in Euclidean space, while recent works extend the embedding models to hyperbolic or spherical spaces to achieve better performance on graphs with complex structures, such as hierarchical or ring structures. Fusing the embedding from different manifolds can take advantage of the embedding capabilities over different graph structures. However, existing embedding fusion methods mainly focus on concatenating or summing up the output embeddings without considering interacting and aligning the embeddings of the same vertices on different manifolds, which can lead to distortion and imprecision in the final fusion results. Besides, it is also challenging to fuse the embeddings of the same vertices from different coordinate systems. In the face of these challenges, we propose the **F**used **M**anifold **G**raph **N**eural **N**etwork (**FMGNN**). This novel GNN architecture embeds graphs into different Riemannian manifolds with interaction and alignment among these manifolds during training and fuses the vertex embeddings through the distances on different manifolds between vertices and selected landmarks, geometric coresets. Our experiments demonstrate that FMGNN yields superior performance over strong baselines on the benchmarks of node classification and link prediction tasks. Graph Representation Learning, Geometric Deep Learning, Manifold Fusion, Geometric Coreset
## 1. Introduction This kind of interaction and aggregation can be operated by the geometry interaction over manifolds. Nevertheless, without proper interaction and alignment, the embeddings of the same vertices obtained from different spaces can suffer random shift and distortion due to randomness during model training, even though the mutual relative relations between vertices in each respective space are preserved at best. Thus, concatenating or mixing such embeddings results in imprecision and fluctuation, which we will demonstrate in this work. Moreover, it would be difficult to reasonably fuse the embeddings and design further operations, such as regression and softmax [17], since such operations on the coordinates of different spaces are different. To this end, in this work, we propose the Fused Manifold Graph Neural Network (**FMGNN**), which properly aligns embeddings on different manifolds with interactions via tangent mapping before neighborhood aggregation and fuses information on each manifold by distances to landmark geometry coresets. With **FMGNN**, embeddings from different manifolds are appropriately aligned during training and efficiently fused in the final results. The overall workflow of FMGNN is illustrated in Figure 1. While elaborating the details in Section 5, for ease of understanding, here we briefly unfold the core components of FMGNN. We first utilize the exponential and logarithmic mappings to bridge the spherical, Euclidean, and hyperbolic manifold embeddings and propose the tangent mapping for the interaction among the vertex embeddings on different manifolds.
In this way, the embeddings of the same vertices are properly aligned and the neighborhood information on different manifolds is well communicated. When fusing the final embeddings obtained from the different manifolds, we turn each vertex embedding into features measured by the distances to the centroid coreset. Hence, FMGNN still guarantees that nearby vertices have similar distance features, a figuration of coordinates. Our contributions are summarized as follows: * We propose a novel graph representation learning framework, the fused manifold graph neural network (FMGNN), which, to the best of our knowledge, is the first to study graph embedding on all of the Euclidean, hyperbolic, and spherical manifolds, with a novel fusion method based on tangent mapping to align embeddings from different spaces and integrate neighborhood information throughout each embedding layer during training. * We put forward a novel geometric coreset-based centroid mapping method, which can find the coreset of nodes from randomly sampled nodes in manifolds, and unify the graph embeddings of different manifolds into pairwise distances between the vertices and the geometric coresets on the corresponding manifolds. * Extensive experiment results show that FMGNN achieves SOTA performance on several benchmarks, especially on graphs with sparsity and high hyperbolicity. Our model absorbs the advantages of hyperbolic models on tree structures and maintains the strength of Euclidean space on some scale-free topologies. Figure 1: Overview of the Fused Manifold Graph Neural Network (FMGNN). In FMGNN, a graph will be embedded via GCNs on three Riemannian manifolds, with manifold fusion via tangent mapping before neighborhood aggregation, information aggregation on each manifold by distances to landmark geometry coresets, and adjustment to the final representation with a self-attention mechanism. The rest of the paper is organized as follows: In Section 2, we list the related works of FMGNN. In Section 3, the background, metrics, and operational rules of Riemannian manifolds are introduced. In particular, we present the observations on GNN training over different manifolds in Section 4. Then, we share the methodology of FMGNN in Section 5. Finally, the details of the experiments on FMGNN are discussed in Section 6. ## 2. Related Work In this section, we briefly review the related works about GNNs on both Euclidean and non-Euclidean manifolds, along with the geometric coreset. **GNN on the Euclidean manifold.** GNNs have received rising attention in the field of machine learning since they were first put forward [12, 26], especially after the GCN [16] was proposed, which combines the information of neighbor nodes with the message passing mechanism. Further, the GAT [31] introduces the attention mechanism into GCN, and the SGC [35] converts the nonlinear GCN into a simple linear model. For large-scale graphs, where unified message passing has a high computation cost, GraphSAGE [14] is proposed based on node sampling and hierarchical sampling. In complex network analysis, where the topology of edges has different effects on information dissemination, CurvGN [38] incorporates Ricci curvature, which measures the topology difference, into message passing as an important parameter of the attention mechanism of GCN, while M\({}_{2}\)GNN [32] sets curvature as a learnable parameter to train knowledge graph embeddings. **GNN on the non-Euclidean manifold.** GNN on the non-Euclidean manifold has attracted more and more attention recently.
Texts are embedded into the hyperbolic or spherical manifold for better performance with word2vec [10, 20, 36], and the multiple relations in knowledge graphs are also successfully embedded in hyperbolic space [7]. The HNN [11] trains neural networks in hyperbolic space on parametric classification and regression tasks. HGCN [8] and HGNN [17] embed graphs into hyperbolic space and apply convolution computation via hyperbolic vector mapping and transition. Recently, Geometric Interaction Learning (GIL) [40] put forward a schema to fuse features at the last layer of the GCN on different manifolds, while \(\kappa\)-GCN [4] investigates constant curvature GNNs, finding that different datasets suit different manifolds. In addition, H2H-GCN [9] proposes a direct convolution mechanism in a hyperbolic manifold without the transformation via the tangent space. Moreover, a theoretical analysis on the suitable curvature space for a real-world graph has been conducted [33], and GIE [6] deploys a mechanism similar to GIL [40] in the knowledge graph embedding scenario. **Geometric Coreset.** In our work, we make use of the geometric coreset [3, 27], which is widely used to approximate geometric properties with a constant number of points in the fields of computational geometry, statistics, and computer vision. In geometric deep learning, the geometric coreset is used to draw the outline of geometric elements efficiently with a constant number of points [30], and analyzing the coresets or sub-level sets of the distance to a compact manifold is a common method in topological data analysis to understand its topology [5]. Moreover, the paradigm of coresets has emerged as an appropriate means for efficiently approximating various extent measures of a point set [2]. This inspires us to adopt coreset-related means to measure the features of the nodes of a graph so that the nodes can be represented with a relatively low-dimensional embedding. As mentioned and discussed in HGNN [17], regression and classification methods do not suit non-Euclidean manifolds well. HGNN uses a set of coordinate-trainable nodes as a reference frame to transform the node coordinates into features, and these features represent the node-wise distances to the reference nodes. FMGNN, proposed in this paper, tries to avoid degraded and unstable performance by deploying tangent mapping before neighborhood aggregation to align the embeddings (coordinates) from the Euclidean and non-Euclidean manifolds, and by taking the coreset as a heuristic "landmark" to extract and fuse information on each manifold by distances. ## 3. Preliminaries In this section, we recall the metrics and operational rules on Riemannian manifolds (the spherical \(\mathbb{S}\), Euclidean \(\mathbb{E}\), and hyperbolic \(\mathbb{H}\) spaces). For a smooth mapping \(F:\mathcal{M}^{d}\rightarrow\mathcal{N}^{d}\) over surfaces, at each point \(x\in\mathcal{M}^{d}\), the tangent map \(F^{*}\) is a linear transformation from the flat tangent space \(\mathcal{T}_{x}\mathcal{M}^{d}\) to the tangent space \(\mathcal{T}_{F(x)}\mathcal{N}^{d}\) [23]. Figure 2. The illustration of exponential (mapping from tangent spaces to manifolds), logarithmic (mapping from manifolds to tangent spaces), and tangent mappings (\(F^{*}\)) on non-Euclidean manifolds (e.g. mapping from hyperbolic space to spherical space, \(F:\mathcal{M}^{d}\rightarrow\mathcal{N}^{d}\)). ### Graph Neural Network Graph neural networks can be viewed as an appropriate means to perform message passing between nodes [26]. FMGNN uses the graph convolutional network proposed in [16], where node embeddings are updated by aggregating adjacent node information.
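Since the exponential and logarithmic mappings illustrated in Figure 2 are used throughout the message passing rules below, concrete closed forms may help. A minimal sketch at the origin of a Poincaré-ball hyperbolic model and a projected spherical model with unit curvature; these are standard textbook forms, and the paper's exact model choice is not shown in this excerpt, so they are illustrative rather than FMGNN's definitive maps:

```python
import numpy as np

EPS = 1e-9

def exp0_hyperbolic(v):
    # Poincaré ball, curvature -1: exp_0(v) = tanh(|v|) * v / |v|
    n = np.linalg.norm(v) + EPS
    return np.tanh(n) * v / n

def log0_hyperbolic(y):
    # Inverse map: log_0(y) = artanh(|y|) * y / |y|
    n = np.linalg.norm(y) + EPS
    return np.arctanh(np.clip(n, 0.0, 1.0 - EPS)) * y / n

def exp0_spherical(v):
    # Projected sphere, curvature +1: exp_0(v) = tan(|v|) * v / |v|
    n = np.linalg.norm(v) + EPS
    return np.tan(n) * v / n

def log0_spherical(y):
    # Inverse map: log_0(y) = arctan(|y|) * y / |y|
    n = np.linalg.norm(y) + EPS
    return np.arctan(n) * y / n
```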
Based on the general GCN message passing rule, and referring to the constant curvature graph convolutional neural network, HGNN, and HGCN, via the exponential and logarithmic mappings we can write the general GCN message passing rules at layer \(\ell\) for node \(v_{i}\) on manifold \(\hat{\mathcal{M}}\) (\(\mathbb{E},\mathbb{H}\) and \(\mathbb{S}\)) as follows. The feature transform function is \[\mathrm{h}_{i}^{\ell,\hat{\mathcal{M}}}=W^{\ell}\exp_{x^{\prime}}^{\hat{\mathcal{M}}}(\mathrm{v}_{i}^{\ell-1,\hat{\mathcal{M}}})+\lambda^{\ell}. \tag{2}\] The neighborhood aggregation function is \[\mathrm{v}_{i}^{\ell,\hat{\mathcal{M}}}=\sigma(\log_{x^{\prime}}^{\hat{\mathcal{M}}}(\mathrm{h}_{i}^{\ell,\hat{\mathcal{M}}})+\sum_{v_{j}\in\mathcal{N}(v_{i})}w(v_{i},v_{j})\log_{x^{\prime}}^{\hat{\mathcal{M}}}(\mathrm{h}_{j}^{\ell,\hat{\mathcal{M}}})), \tag{3}\] where \(\mathrm{h}\) denotes the input and output of a middle layer of the network, \(i\) indexes vertex \(v_{i}\), \(\ell\) denotes the \(\ell\)-th layer, and \(\hat{\mathcal{M}}\) the manifold. Besides, \(w(v_{i},v_{j})\) is an aggregation weight that can be computed following the dot product and other operations of the respective Riemannian manifold, \(\lambda^{\ell}\) is a bias parameter for layer \(\ell\), and \(\sigma\) denotes a non-linear activation function.
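A sketch of Eqs. (2)-(3) for one layer on a single manifold, reusing the origin maps from the sketch above (`exp0`/`log0` stand for the chosen manifold's maps; the uniform aggregation weights and the tanh activation are simplifying assumptions of ours):

```python
import numpy as np

def manifold_gcn_layer(V_prev, W, bias, neighbors, exp0, log0):
    """One message-passing layer on a chosen manifold, Eqs. (2)-(3):
    feature transform on the manifold, then aggregation in tangent space."""
    n = V_prev.shape[0]
    # Eq. (2): linear transform after mapping each embedding onto the manifold
    H = np.stack([W @ exp0(V_prev[i]) + bias for i in range(n)])
    # Eq. (3): aggregate tangent-space messages from the neighborhood
    V_new = np.empty_like(H)
    for i in range(n):
        msg = log0(H[i]).copy()
        w_ij = 1.0 / max(len(neighbors[i]), 1)   # simple uniform weight
        for j in neighbors[i]:
            msg += w_ij * log0(H[j])
        V_new[i] = np.tanh(msg)                  # sigma = tanh
    return V_new
```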
## 4. Observations of Training GNN on Manifolds In this section, we present our observations that the inherent randomness in GNN training can result in embedding centroid shifts and task performance fluctuations. When training a GNN, randomness is introduced by initialization and stochastic gradient descent. Due to such randomness, the final output embeddings of GNNs can be overall shifted or distorted, though the relative mutual relations are still preserved. Such a natural phenomenon can become a crucial issue when combining graph embeddings from different spaces. An example can be found in Figure 3. We investigate the centroid offsets and performance fluctuations of the final embeddings obtained by repeatedly running the same training process with different randomly generated seeds for the GCNs on the three manifolds, GIL, Hybrid-GCN, and our proposed FMGNN. We design Hybrid-GCN to demonstrate the performance of directly combining the embeddings from the three different manifolds without interaction: it extends the HGCN framework to directly superimpose the vertex embeddings from the three manifolds, inspired by the constant curvature graph neural network (Beng et al., 2019). FMGNN is our proposed model, listed for comparison, whose details we describe in the following sections. We train GCN, HGCN (GCN on the hyperbolic manifold), SGCN (GCN on the spherical manifold), GIL, Hybrid-GCN, and FMGNN on the datasets Cora, PubMed, and CiteSeer 10 times each and compare the performance. ### Centroid Offsets For centroid offsets, we analyze the random coordinate shifts and distortions of the embeddings. Given a graph \(G(V,E)\), for all the models, the graph embeddings \(X=\{x_{i}|v_{i}\in V\}\) are obtained after each training, and we use the average offsets among the vertex embedding centroids \(c=\frac{\sum_{v_{i}\in V}x_{i}}{|V|}\) of different training runs of the same model to measure the shifts. Thus, the centroid offset is defined as \[offset=\frac{2}{T(T-1)}\sum_{i\neq j}d(c_{i},c_{j}), \tag{4}\] where \(c_{i}\) is the centroid of the \(i\)th run, \(T=10\) is the number of training runs, and \(d(\cdot)\) is the distance function of the respective space. Figure 3. An example of combining embeddings from different spaces with random shifts. We fix the embedding on the Euclidean space, obtain the hyperbolic embedding from two trials with different random seeds, and compare the results when combining the embeddings. Take the region marked in orange as an example: nodes 2 and 3, which are connected in the original graph, have a greater distance than nodes 2 and 7, which have no direct connection. Such a combination can result in errors in downstream tasks. Considering that the training results can have different scales, to make a fair comparison and reflect topology differences, we measure the scales of the graph embeddings by calculating the average distance between vertices and their centroid: \[scale=\frac{1}{|V|}\sum_{v_{i}\in V}d(x_{i},c). \tag{5}\] Thereafter, the centroid shift can be normalized by the scale as \[Norm.\ Offset=\frac{offset}{scale}. \tag{6}\] The results can be found in Table 2. We can see that GCN, HGCN, and SGCN have the same order of magnitude of normalized centroid offsets. If we combine the embedding results of different spaces as Hybrid-GCN does, or interact between manifolds without proper alignment, the normalized shifts accumulate from each space and become even greater. Such offsets can result in variance and fluctuation in the performance, as we show in the following subsection. Our proposed model has the smallest offsets. \begin{table} \begin{tabular}{c|c c c c} \hline \hline **Benchmark** & **Model** & **Scale** & **Offset** & **Norm. Offset** \\ \hline \multirow{6}{*}{**Cora**} & GCN & 0.58 & 83.16 & 144.48 \\ & HGCN & 0.48 & 31.20 & 65.49 \\ & SGCN & 0.51 & 32.21 & 63.44 \\ & Hybrid-GCN & 1.30 & 393.05 & 301.60 \\ & GIL & 1.32 & 215.22 & 162.27 \\ & **FMGNN** & **1.60** & **11.14** & **6.97** \\ \hline \multirow{6}{*}{**PubMed**} & GCN & 0.79 & 112.38 & 141.59 \\ & HGCN & 0.69 & 51.37 & 74.71 \\ & SGCN & 0.66 & 47.93 & 72.99 \\ & Hybrid-GCN & 1.49 & 531.12 & 356.45 \\ & GIL & 0.34 & 161.18 & 475.32 \\ & **FMGNN** & **1.89** & **14.93** & **7.91** \\ \hline \multirow{6}{*}{**CiteSeer**} & GCN & 0.43 & 74.78 & 173.54 \\ & HGCN & 0.25 & 22.80 & 90.56 \\ & SGCN & 0.28 & 25.96 & 92.96 \\ & Hybrid-GCN & 2.58 & 414.93 & 160.82 \\ & GIL & 0.25 & 207.75 & 848.54 \\ & **FMGNN** & **3.20** & **11.75** & **3.67** \\ \hline \hline \end{tabular} \end{table} Table 2. Comparison of models’ centroid offsets.
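The three quantities of Eqs. (4)-(6) are straightforward to compute from the per-run embeddings; a sketch, assuming Euclidean distance for brevity (each manifold would use its own distance function in its place):

```python
import numpy as np
from itertools import combinations

def normalized_centroid_offset(embeddings):
    """embeddings: list of T arrays of shape (n_vertices, dim),
    one per training run. Returns (scale, offset, normalized offset)."""
    centroids = [X.mean(axis=0) for X in embeddings]
    T = len(centroids)
    # Eq. (4): average pairwise centroid distance over the T runs
    offset = sum(np.linalg.norm(ci - cj)
                 for ci, cj in combinations(centroids, 2)) * 2 / (T * (T - 1))
    # Eq. (5): average vertex-to-centroid distance, averaged over runs
    scale = np.mean([np.linalg.norm(X - X.mean(axis=0), axis=1).mean()
                     for X in embeddings])
    # Eq. (6): normalize the offset by the embedding scale
    return scale, offset, offset / scale
```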
### Performance Fluctuation We proceed to describe our observations on the performance fluctuation of models combining graph embeddings from different manifolds, namely Hybrid-GCN and GIL, to show that mixing such different manifold embeddings without proper alignment can also result in performance variance, as indicated by the large centroid offsets. For GIL and Hybrid-GCN, we change the embedding training on only one manifold at a time and keep the embeddings of the other manifolds unchanged. We then observe the performance fluctuation introduced by the random shift of manifold embedding training with changed randomly selected seeds. The performance, measured by the accuracy of node classification over the three datasets, is demonstrated in Figure 4. For the results on each dataset, the three rightmost boxplots are the performance variances of GCN, HGCN, and SGCN, which are used as benchmarks for the variances of GCN on the three different manifolds, while the results for Hybrid-GCN are shown on the leftmost and those for GIL are in the middle. We can see that for both Hybrid-GCN and GIL, the performance variances are greater than the variances of GCN on any single manifold. This indicates that when simply combining embeddings from different manifolds without considering proper alignment, the random shift and distortion of the graph embedding on each manifold can be enlarged in the combination, resulting in greater performance fluctuation. Therefore, if we choose to combine embeddings on different manifolds to approximate the best-fit space for each vertex and enhance the performance, it is crucial to consider the random shifts of the embeddings on the different manifolds in the embedding fusion and to provide interaction and alignment between embeddings during training on all the manifolds. Figure 4. The performance fluctuations of the models on the node classification task, where Hybrid-\(X\) denotes that we change the random seed of embedding training on the \(X\) manifold of Hybrid-GCN, and the same goes for GIL-\(X\). Now we present our proposed work FMGNN, which fuses different manifold embeddings with alignment during training and extracts the mutual relation information on different manifolds by distances to the geometric coresets. ## 5. The Fused Manifold Graph Neural Network Based on the observations above, FMGNN incorporates information from the different manifolds using a feature fusion mechanism and converts the coordinates on the three manifolds into low-dimensional representations through geometric coreset based centroid mapping. We present our method in detail as follows. ### Fused Manifold Aggregation FMGNN fuses the embedding from each manifold by interaction and alignment of the embeddings. First, according to Eq. (2), we execute the feature transform function to obtain the node representation on each manifold, denoted as \(\mathrm{h}_{i}^{\ell,\mathbb{E}}\), \(\mathrm{h}_{i}^{\ell,\mathbb{H}}\) and \(\mathrm{h}_{i}^{\ell,\mathbb{S}}\). According to the tangent mapping, if there is a smooth mapping between general manifolds, there is a linear mapping between the tangent spaces of the points before and after the mapping. Therefore, the representation vector of a point on one manifold can be mapped to another one via a selected linear mapping on the tangent spaces, and FMGNN takes advantage of this property.
The linear mapping in FMGNN is trained from six trainable parameters denoted as \(\lambda_{\mathbb{E}\rightarrow\mathbb{H}}\), \(\lambda_{\mathbb{E}\rightarrow\mathbb{S}}\), \(\lambda_{\mathbb{H}\rightarrow\mathbb{E}}\), \(\lambda_{\mathbb{H}\rightarrow\mathbb{S}}\), \(\lambda_{\mathbb{S}\rightarrow\mathbb{E}}\) and \(\lambda_{\mathbb{S}\rightarrow\mathbb{H}}\), each representing the weight of the mapping between two manifolds. Taking the hyperbolic manifold as an example: first, the vectors on the Euclidean manifold (\(\mathrm{h}_{i}^{\ell,\mathbb{E}}\)) are linearly mapped (\(\lambda_{\mathbb{E}\rightarrow\mathbb{H}}\)) to the tangent space of the hyperbolic manifold and further projected onto the surface of the hyperbolic manifold through the exponential mapping, while the vectors on the spherical manifold (\(\mathrm{h}_{i}^{\ell,\mathbb{S}}\)) require logarithmic functions to project the vectors on the surface to the tangent space of the sphere, pass through the linear mapping weight (\(\lambda_{\mathbb{S}\rightarrow\mathbb{H}}\)) to the tangent space of the hyperbolic manifold, and are further projected onto the surface of the hyperbolic manifold via the exponential mapping. Finally, we use plain-vanilla _Mobius_ addition Eq. (1) to adjust the node embeddings and conduct neighborhood aggregation via the Riemannian GCN aggregation function Eq. (3). Hence the feature fusion functions in a FMGNN layer are as follows: \[\mathrm{h}_{i}^{\ell,\mathbb{E}}=\mathrm{h}_{i}^{\ell-1,\mathbb{E}}+\lambda_{\mathbb{H}\rightarrow\mathbb{E}}\log^{\mathbb{H}}\mathrm{h}_{i}^{\ell-1,\mathbb{H}}+\lambda_{\mathbb{S}\rightarrow\mathbb{E}}\log^{\mathbb{S}}\mathrm{h}_{i}^{\ell-1,\mathbb{S}}, \tag{7}\] \[\mathrm{h}_{i}^{\ell,\mathbb{H}}=\mathrm{h}_{i}^{\ell-1,\mathbb{H}}\oplus\exp^{\mathbb{H}}(\lambda_{\mathbb{E}\rightarrow\mathbb{H}}\mathrm{h}_{i}^{\ell-1,\mathbb{E}})\oplus\exp^{\mathbb{H}}(\lambda_{\mathbb{S}\rightarrow\mathbb{H}}\log^{\mathbb{S}}\mathrm{h}_{i}^{\ell-1,\mathbb{S}}), \tag{8}\] \[\mathrm{h}_{i}^{\ell,\mathbb{S}}=\mathrm{h}_{i}^{\ell-1,\mathbb{S}}\oplus\exp^{\mathbb{S}}(\lambda_{\mathbb{E}\rightarrow\mathbb{S}}\mathrm{h}_{i}^{\ell-1,\mathbb{E}})\oplus\exp^{\mathbb{S}}(\lambda_{\mathbb{H}\rightarrow\mathbb{S}}\log^{\mathbb{H}}\mathrm{h}_{i}^{\ell-1,\mathbb{H}}). \tag{9}\] Such operations can be viewed as aligning the embeddings of the same vertex toward their weighted center on each manifold through interaction. Then, on all the manifolds, we conduct feature transformation and neighborhood aggregation via (2) and (3). In this way, FMGNN executes GCN on the three different manifolds sharing the same weight matrix, fuses the information from the other manifolds, and finally obtains three groups of node representations, denoted as \(A^{\mathbb{E}}=\{\alpha_{i}^{r},0<i\leq|\mathcal{V}|\}\) for the Euclidean manifold, \(A^{\mathbb{H}}=\{\beta_{i}^{r},0<i\leq|\mathcal{V}|\}\) for the hyperbolic manifold, and \(A^{\mathbb{S}}=\{\gamma_{i}^{r},0<i\leq|\mathcal{V}|\}\) for the spherical manifold, where \(\alpha\), \(\beta\) and \(\gamma\) denote the final embeddings generated on the three manifolds.
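A sketch of the per-layer fusion step of Eqs. (7)-(9), reusing the origin exp/log helpers from the preliminaries sketch and scalar mapping weights; replacing Möbius addition with tangent-space addition at the origin is an approximation of ours for brevity:

```python
def fuse_manifolds(hE, hH, hS, lam):
    """One fusion step: each manifold's embedding absorbs the other two
    through tangent-space linear maps. `lam` is a dict with the six
    scalar weights, e.g. lam['E->H']."""
    tH, tS = log0_hyperbolic(hH), log0_spherical(hS)
    # Eq. (7): Euclidean update uses plain vector addition
    new_E = hE + lam['H->E'] * tH + lam['S->E'] * tS
    # Eq. (8): hyperbolic update, addition done in the tangent space at 0
    new_H = exp0_hyperbolic(log0_hyperbolic(hH)
                            + lam['E->H'] * hE + lam['S->H'] * tS)
    # Eq. (9): spherical update, symmetric to Eq. (8)
    new_S = exp0_spherical(log0_spherical(hS)
                           + lam['E->S'] * hE + lam['H->S'] * tH)
    return new_E, new_H, new_S
```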
### Geometric Coreset Based Centroid Mapping After obtaining the embeddings from the three manifolds, a challenge is that the parameter learning methods for regression and prediction are only applicable in the Euclidean manifold, and the arithmetic rules in the three manifolds are different. In this work, we use distances to extract the mutual relation information between vertices on the three manifolds. We select a vertex set, called a geometric coreset, as "landmarks" on each manifold; for each vertex we calculate the distances to all the vertices in the geometric coreset and use these distances as the new representation vector of the vertex on the manifold. Such new coordinates are the mutual relations extracted from each manifold and can be applied to commonly used algebraic operations, such as addition and multiplication. We first define the geometric coreset in FMGNN as follows: **Definition 5.1**.: **Geometric Coresets in Manifolds**: Given a node set in a manifold \(\mathcal{Z}=\{\zeta_{1},...,\zeta_{n}\},\zeta_{i}\in\mathcal{M}\) and a sensitive quality function \(Q(\mathcal{Z})\), an \(m\)-node geometric coreset \(\mathcal{C}=\{c_{1},c_{2},...,c_{m}\},c_{i}\in\mathcal{M}\) is a subset of \(\mathcal{Z}\) whose \(Cost(\mathcal{C})\) can be used to approximate \(Q(\mathcal{Z})\) for any set \(\mathcal{Z}\) of \(n\) points. The process of finding the geometric coreset is as follows. 1. First, we find a coreset \(\mathcal{C}=\{c_{1},c_{2},...,c_{|C|}\},c_{i}\in\mathcal{M}\), noting that the coresets are generated by parameter-free methods and are independent of the node embeddings (\(\mathcal{E}=\{e_{i}\},i\in|\mathcal{V}|\)). 2. Then, after projecting the coreset onto the manifolds by the exponential mapping, the pairwise distance between \(c_{i}\) and \(e_{j}\) can be calculated based on the distance measurement of each manifold (1), \(\alpha_{i,j}=d(c_{i},e_{j})\). 3. Finally, we collect all the distances \((\alpha_{0,j},...,\alpha_{|C|,j})\in\mathbb{R}^{|C|}\) to represent the position of \(e_{j}\) mapped to the coreset, where \(e_{j}\) is the node representation that can be learnt and updated in the GCN processes deployed on the different Riemannian manifolds. The dimension of the vertex representations is decided only by the number of vertices in the geometric coreset, which is constant in theory (Kolmogorov, 1955) and small in empirical experiments. Therefore, the embeddings of the manifolds are further reduced to lower dimensions by the coreset. In the geometric coreset based centroid mapping, on the different manifolds, we take advantage of the KMeans coreset [15]. First, we generate a large number of nodes, \(\mathcal{Z}=\{\zeta_{1},...,\zeta_{n}\},\zeta_{i}\in\mathcal{E}\), represented by vectors with the same dimension as the embedding size in the fused manifold aggregation process. Next, referring to [3], we perform KMeans clustering: randomly initialize \(k\) cluster centroid points, calculate the distance from each point in \(\mathcal{Z}\) to these centroid points, assign each data point to the cluster with the closest cluster centroid point, and recalculate the centroid point of each cluster; we repeat these steps until the algorithm converges. Finally, we have \(k\) centroid points: \[\mathcal{P}=\{\rho_{1},...,\rho_{k}\},\rho_{i}\in\mathcal{M}. \tag{10}\] After we obtain the geometric coreset and the corresponding vertex embeddings on the manifolds, we calculate the distance between each vertex in \(\mathcal{P}\) and each vertex in \(\mathcal{G}\) using the embeddings from the output of the fused manifold aggregation: \[E^{\mathcal{M}}=d_{\mathcal{M}}(A^{\mathcal{M}},\mathcal{C}_{\mathcal{M}}), \tag{11}\] where \(\mathcal{M}\) denotes manifold \(\mathbb{E}\), \(\mathbb{H}\) or \(\mathbb{S}\), and \(E^{\mathcal{M}}\) is the centroid mapping's output from each manifold; each entry of \(E^{\mathcal{M}}\) is a \(|C|\)-dimensional vector. The embedding obtained by geometric coreset based centroid mapping is a new feature of a node after training and passing related nodes' messages in the corresponding manifold. In this way, the embeddings obtained by the graph convolution on each manifold are transformed into new representations with the dimension decided by the geometric coresets, the mutual relations on the different manifolds are extracted through the distances between the vertices and these geometric coreset landmarks, and such extracted information can be applied to common algebraic operations.
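A sketch of the centroid-mapping step, Eqs. (10)-(11): KMeans landmarks followed by per-manifold distance features. Using scikit-learn's KMeans, and clustering the embeddings themselves rather than separately generated random nodes, are simplifications of ours; `manifold_dist` stands for each manifold's own distance function:

```python
import numpy as np
from sklearn.cluster import KMeans

def coreset_features(A, n_landmarks, manifold_dist, seed=0):
    """A: (n_vertices, dim) embeddings on one manifold.
    Returns (n_vertices, n_landmarks) distance features, Eq. (11)."""
    # Eq. (10): k centroids found by (Euclidean) KMeans clustering
    km = KMeans(n_clusters=n_landmarks, n_init=10, random_state=seed).fit(A)
    landmarks = km.cluster_centers_
    feats = np.empty((A.shape[0], n_landmarks))
    for i, x in enumerate(A):
        for j, c in enumerate(landmarks):
            # distance measured with the manifold's own metric
            feats[i, j] = manifold_dist(x, c)
    return feats
```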
### Attention based Graph Embedding Fusion Furthermore, we fuse these three embedding vectors via an attention mechanism. In this module, FMGNN first uses the representation vector obtained through the geometric coreset of each point as the _query_ vector and the embeddings on all three manifolds as _keys_ to calculate the similarity and obtain the weights, then uses the \(softmax\) function to normalize these weights, and finally combines the weights with the corresponding embeddings on each manifold in a weighted addition to generate the final graph embedding. The process is shown in Figure 5 and the equation of the attention is as follows: \[E^{mm}=Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V, \tag{12}\] where \(d_{k}\) is the dimension of the embedding, equal to \(|\mathcal{C}|\), the size of the coreset, and \(K\) is equal to \(Q\). Figure 5. The self-attention mechanism in FMGNN. Taking the hyperbolic manifold as an example, we first _query_ the similarity with the embeddings on the other manifolds, including the hyperbolic manifold itself, and then adopt _softmax_ to obtain the final weights. Finally, with this output, FMGNN is trained on node classification and link prediction tasks. For node classification, we map \(E^{mm}=\{e_{i}^{mm},0<i<|\mathcal{V}|\}\) through \(softmax\) in order to find the most probable one of the \(\tau\) classes. The function is \[p(e_{j}^{mm})=softmax(\mathrm{w}(e_{1,j}^{mm},...,e_{|\mathcal{C}|,j}^{mm})),\ 0<j<|\mathcal{V}|, \tag{13}\] where \(\mathrm{w}\in\mathbb{R}^{\tau\times|\mathcal{C}|}\). For link prediction, we refer to the Fermi-Dirac method, a generalization of the sigmoid, to compute the probability of potential edges: \[p((i,j)\in\mathcal{E}\mid E_{i}^{mm},E_{j}^{mm})=[e^{(d(E_{i}^{mm},E_{j}^{mm})^{2}-r)/t}+1]^{-1}, \tag{14}\] where \(r\) and \(t\) are hyperparameters. We train on both node classification and link prediction tasks by minimizing the cross-entropy loss with negative sampling. To sum up, FMGNN fuses the embeddings on the different manifolds with interaction and alignment, aggregates information in the same space after the manifold fusion, and converts the embeddings on each manifold through geometric coreset based centroid mapping to extract the mutual information.
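A sketch of the final fusion and the two task heads, Eqs. (12)-(14). Stacking the three distance-feature matrices as attention values and summing the per-manifold outputs is our reading of Figure 5, and the default \(r\), \(t\) are illustrative, so exact details may differ from FMGNN's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(E_e, E_h, E_s):
    """E_*: (n, k) distance features per manifold. Eq. (12) with Q = K."""
    V = np.stack([E_e, E_h, E_s], axis=1)                 # (n, 3, k)
    scores = np.einsum('nik,njk->nij', V, V) / np.sqrt(V.shape[-1])
    W = softmax(scores, axis=-1)                          # (n, 3, 3)
    fused = np.einsum('nij,njk->nik', W, V)               # (n, 3, k)
    return fused.sum(axis=1)                              # fused (n, k)

def edge_probability(ei, ej, r=2.0, t=1.0):
    """Eq. (14): Fermi-Dirac decoder on the fused embeddings."""
    d2 = np.sum((ei - ej) ** 2)
    return 1.0 / (np.exp((d2 - r) / t) + 1.0)
```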
## 6. Experiments In this section, we present and analyze our experiment results in detail. First, we describe the real-life datasets and the baselines for comparison. After that, we compare these baselines with **FMGNN** on node classification (NC) and link prediction (LP) tasks. Moreover, we discuss the centroid offset of FMGNN. Finally, we perform ablation experiments to show the necessity of the geometric coresets and the contribution of the different manifolds. ### Experiments Setup **Benchmarks.** We use various open-sourced real-life graph datasets described in Table 3. 1. **Citation networks**. CORA, PubMed, and CiteSeer (Huang et al., 2017) are standard node classification and link prediction benchmarks, where the nodes are divided by research (sub)fields. 2. **Disease propagation tree** (Kumar et al., 2017). Disease is taken from HGCN; its node labels are infected or not infected. 3. **Co-author networks**. Coauthor Physics is a co-authorship graph based on the Microsoft Academic Graph from the KDD Cup 2016 challenge. Its node labels are each author's most active fields of study. 4. **Flight network**. Airport, originally from [39], is a transductive dataset updated by HGCN, whose nodes represent airports and edges represent airline routes. The labels of the nodes are the countries to which the airports belong. 5. **Goods co-consume network**. AmazonPhoto [19, 28] is a segment of the Amazon co-purchase graph, where the product categories give the node labels. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & **#Nodes** & **#Edges** & **#Classes** & **#Features** & **#Diameter** & **#Triangles** \\ \hline Disease & 1,044 & 1,043 & 2 & 1,000 & 10 & 0 \\ Airport & 3,188 & 18,631 & 4 & 4 & 12 & 98,669 \\ PubMed & 19,717 & 44,327 & 3 & 500 & 18 & 12,520 \\ CORA & 2,708 & 5,429 & 7 & 1,433 & 19 & 1,630 \\ CiteSeer & 3,327 & 4,732 & 6 & 3,703 & 28 & 1,167 \\ AmazonPhoto & 13,381 & 119,043 & 8 & 745 & 11 & 717,400 \\ Coauthor Phy. & 34,493 & 245,778 & 5 & 8,415 & 17 & 468,550 \\ \hline \hline \end{tabular} \end{table} Table 3. Statistics of the selected benchmarks. **Baselines.** We compare the performance of FMGNN with baselines including shallow and neural network-based methods, GNNs in the Euclidean space, and GNNs in the hyperbolic space. For neural network-based methods, we choose MLP and HNN [11]. For GNNs in the Euclidean space, GCN [16], GAT [31], GraphSAGE [14] and SGC [35] are taken into comparison. For GNNs in hyperbolic space, HGCN [8], HGNN [17], and H2H-GCN [9] are the most commonly adopted baselines. Besides, we also take GIL [40] into comparison as a baseline equipped with embedding interaction only on the final outputs of the Euclidean and hyperbolic manifolds. **Settings.** Referring to HGCN, in the LP task we use a random 85%/5%/10% split into training, validation, and test sets. In the NC task, we divide Disease with 30%/10%/60% and Airport with 70%/15%/15%; for CORA, PubMed, and CiteSeer we follow the division ratio in [16], and for AmazonPhoto and Coauthor we follow the division in the GNN benchmark [28]. Following HGCN and H2H-GCN, we evaluate link prediction by measuring the area under the ROC curve on the test set and evaluate node classification by measuring the F1 score, apart from CORA, CiteSeer, and PubMed, where the performance is measured by accuracy. ### Results Table 4 summarizes the performance of FMGNN and the baseline methods on the seven benchmarks, where the best results are shown in bold and the second best are underlined. We ran each experiment five times and report the average results and standard deviations. From Table 3 and Table 4 we can see that: 1. FMGNN outperforms the baselines or achieves similar results in classifying different types of nodes and predicting links between nodes. 2. Compared to GIL, FMGNN has better performance on node classification tasks on six benchmarks and has more significant advantages on large-scale datasets such as AmazonPhoto and the Coauthor network on Physics, due to the feature fusion and alignment before aggregation in each GCN layer. 3. Compared to H2H-GCN, although our model achieves lower accuracy on the high-hyperbolicity benchmarks, the overall performance is close.
On datasets such as CORA, PubMed, and CiteSeer, the performance is significantly better than H2H-GCN. In particular, on a graph like CiteSeer, which has a large diameter and fewer triangle motifs, FMGNN performs better, showing its strength on sparse graphs. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c} \hline \hline **Benchmarks** & **DISEASE** & \multicolumn{2}{c}{**AIRPORT**} & \multicolumn{2}{c}{**PUBMED**} & \multicolumn{2}{c}{**CORA**} & \multicolumn{2}{c}{**CITESEER**} & \multicolumn{2}{c}{**AmazonPhoto**} & \multicolumn{2}{c}{**Coauthor-Physics**} \\ \hline **Hyperbolicity \(\delta\)** & **0** & **1** & **2.5** & **3.0** & **4.5** & **2.5** & **2.5** & **3.0** & **4.5** & **2.5** & **2.5** \\ \hline **Task** & **LP** & **NC** & **LP** & **NC** & **LP** & **NC** & **LP** & **NC** & **LP** & **NC** & **LP** & **NC** & **LP** & **NC** \\ \hline **MLP** & 63.6 \(\pm\) 0.6 & 26.8 \(\pm\) 2.5 & 89.8 \(\pm\) 0.5 & 68.6 \(\pm\) 0.6 & 84.1 \(\pm\) 0.9 & 72.4 \(\pm\) 0.2 & 83.1 \(\pm\) 0.5 & 51.5 \(\pm\) 1.0 & 86.3 \(\pm\) 0.0 & 59.7 \(\pm\) 0.0 & 69.1 \(\pm\) 0.7 & 72.3 \(\pm\) 1.8 & 68.8 \(\pm\) 1.2 & 78.1 \(\pm\) 3.1 \\ **HNN** & 75.1 \(\pm\) 0.3 & 41.0 \(\pm\) 1.8 & 98.2 \(\pm\) 0.2 & 90.5 \(\pm\) 0.5 & 94.9 \(\pm\) 0.1 & 69.8 \(\pm\) 0.4 & 80.9 \(\pm\) 0.1 & 54.6 \(\pm\) 0.3 & 87.8 \(\pm\) 0.7 & 72.7 \(\pm\) 0.0 & 79.4 \(\pm\) 8.7 \(\pm\) 0.1 & 81.2 \(\pm\) 0.8 & 87.7 \(\pm\) 2.2 \\ **GCN** & 64.7 \(\pm\) 0.5 & 63.7 \(\pm\) 0.5 & 63.7 \(\pm\) 0.4 & 89.3 \(\pm\) 0.4 & 81.4 \(\pm\) 0.6 & 91.2 \(\pm\) 0.5 & 79.0 \(\pm\) 0.2 & 90.4 \(\pm\) 0.2 & 81.5 \(\pm\) 0.3 & 91.1 \(\pm\) 0.0 & 70.3 \(\pm\) 0.0 & 88.6 \(\pm\) 0.6 & 91.2 \(\pm\) 1.2 & 89.3 \(\pm\) 3.1 & 92.8 \(\pm\) 0.5 \\ **GAT** & 69.8 \(\pm\) 0.3 & 70.4 \(\pm\) 0.4 & 90.5 \(\pm\) 0.3 & 81.5 \(\pm\) 0.2 & 81.2 \(\pm\) 0.7 & 79.0 \(\pm\) 0.3 & 93.7 \(\pm\) 0.1 & 83.0 \(\pm\) 0.7 & 91.2 \(\pm\) 0.1 & 72.5 \(\pm\) 0.0 & 90.0 \(\pm\) 5.2 & 85.7 \(\pm\) 0.3 & 92.0 \(\pm\) 4.1 & 92.5 \(\pm\) 0.5 \\ **SAGE** & 65.9 \(\pm\) 0.3 & 69.1 \(\pm\) 0.6 & 94.0 \(\pm\) 0.5 & 82.1 \(\pm\) 0.5 & 86.2 \(\pm\) 1.7 & 74.2 \(\pm\) 2.2 & 85.5 \(\pm\) 0.6 & 77.9 \(\pm\) 2.4 & 87.9 \(\pm\) 0.3 & 65.1 \(\pm\) 0.1 & 89.9 \(\pm\) 0.9 & 90.4 \(\pm\) 1.3 & 92.1 \(\pm\) 0.1 & 90.3 \(\pm\) 0.8 \\ **SGC** & 61.2 \(\pm\) 0.2 & 69.5 \(\pm\) 0.2 & 89.8 \(\pm\) 0.3 & 80.6 \(\pm\) 0.1 & 94.1 \(\pm\) 0.7 & 78.9 \(\pm\) 0.0 & 91.5 \(\pm\) 0.1 & 81.0 \(\pm\) 0.1 & 91.3 \(\pm\) 0.1 & 71.4 \(\pm\) 0.5 & 88.3 \(\pm\) 0.2 & 83.7 \(\pm\) 0.1 & 91.4 \(\pm\) 0.3 & 92.0 \(\pm\) 0.2 \\ \hline **HGCN** & 90.8 \(\pm\) 0.3 & 74.5 \(\pm\) 0.9 & 96.4 \(\pm\) 0.1 & **91.6** & 92.6 \(\pm\) 0.0 & 80.3 \(\pm\) 0.3 & 92.9 \(\pm\) 0.1 & 79.9 \(\pm\) 0.2 & 93.9 \(\pm\) 0.0 & 77.6 \(\pm\) 0.0 & **95.6 \(\pm\) 0.2** & 90.0 \(\pm\) 0.5 & 95.7 \(\pm\) 0.6 & 92.5 \(\pm\) 0.2 \\ **HGNN** & 90.6 \(\pm\) 0.2 & 85.7 \(\pm\) 0.8 & 95.8 \(\pm\) 0.1 & 85.1 \(\pm\) 0.3 & 94.1 \(\pm\) 0.2 & 75.9 \(\pm\) 0.1 & 56.1 \(\pm\) 0.0 & 78.3 \(\pm\) 0.2 & 97.4 \(\pm\) 0.2 & 71.3 \(\pm\) 0.1 & - & - & - & - \\ **H2H-GCN** & 92.6 \(\pm\) 0.8 & 85.6 \(\pm\) 1.7 & 94.0 \(\pm\) 0.3 & 89.3 \(\pm\) 0.5 & 95.9 \(\pm\) 0.0 & 79.9 \(\pm\) 0.5 & 95.0 \(\pm\) 0.0 & 82.8 \(\pm\) 0.4 & 94.3 \(\pm\) 0.2 & 65.7 \(\pm\) 0.1 & 92.4 \(\pm\) 0.2 & 91.3 \(\pm\) 0.2 & 92.1 \(\pm\) 0.4 \\ \hline **GIL** & **99.9 \(\pm\) 0.0** & **97.0** & 98.7 \(\pm\) 0.4 & **93.1** & **95.4** \(\pm\) 0.1 & 78.9 \(\pm\) 0.2 & 95.2 \(\pm\) 0.2 & 83.6 \(\pm\) 0.6 & **99.8** \(\pm\) 0.1 & 72.9 \(\pm\) 0.2 & 82.6 \(\pm\) 2. \\ \hline \hline \end{tabular} \end{table} Table 4. Performance of FMGNN and the baseline methods on the seven benchmarks (LP: ROC AUC; NC: accuracy/F1).
Compared to the other baselines, our model achieves the best overall performance across datasets with different characteristics, as shown by the radar map in Figure 6 (the red line belongs to FMGNN). We introduce the hyperbolicity measurement \(\delta\) shown in Table 4, originally proposed by Gromov (Gromov, 2016), to measure how much hierarchical structure is inherent in a graph: the lower \(\delta\) is, the more tree-like the graph. Empirical results show that the more tree-like the graph, the better the performance hyperbolic graph embeddings can achieve: hyperbolic methods outperform the Euclidean methods on Disease and Airport. Our FMGNN outperforms HGCN and HGNN, meaning that FMGNN can benefit from embeddings on the spherical and Euclidean manifolds even on tree-like graphs. On the less tree-like graphs with higher \(\delta\), such as PubMed, Cora, and CiteSeer, Euclidean manifold methods outperform the hyperbolic ones, and our FMGNN still achieves the best performance. When a network's diameter is large, as with CiteSeer, models such as H2H-GCN and HGNN that depend heavily on the ductility of hyperbolic space, and even GIL, which performs interaction learning only with the Euclidean manifold, show relatively low performance, because during message passing distant nodes may be absorbed through accidental connections (weak correlations). Moreover, we also notice that GIL does not perform well on large-scale graph data with many triangles, AmazonPhoto and Coauthor Physics, which indicates that GIL does not fit graphs with densely connected components well. Besides, when processing graphs with triangle motifs, fusing information from other manifolds before the neighborhood-aggregation step is significant. Last but not least, the feature dimensions of FMGNN are always below \(100\); compared with HGNN, the number of parameters of FMGNN is significantly reduced, lowering the model's complexity. Figure 6. The comparison of HGCN (color in Mint Tulip), H2H-GCN (color in Madang), GIL (color in Pigeon Post) and FMGNN (color in Mandy) on various benchmarks. In general, FMGNN performs well all-round. ### Observation on Stability Our model is more stable than the hybrid GCNs and GIL when tested under randomness. Compared with GIL, FMGNN interacts and aligns embeddings on different manifolds during training, and the mutual relation information on each manifold is preserved in the distances to the geometric coresets, while GIL only executes the interaction learning after the aggregation. The results show that such alignment in FMGNN can reduce the random shifts described in Section 4 and the performance fluctuation, which demonstrates the efficiency of FMGNN. ### Ablation In order to better understand the role of each component of FMGNN, we conduct a detailed ablation experiment on CORA, PubMed, CiteSeer, AmazonPhoto, and Coauthor-Physics to observe the effects of geometric coreset-based centroid mapping and the necessity of using all three manifolds. The results are as follows. **The geometric coreset size.** The FMGNN embedding dimension is decided by the size of the coresets in the three manifolds. We observe the relation between the size of the coreset and the number of initial random nodes, and find that the dimension of the coreset is highly relevant to the size of the initially random nodes in each manifold. 
We set the initial embedding size of nodes on each manifold to 100, change the coreset size from 10 to 100, and show the result in Figure 8. We find that our model performs best with 100 dimensions. At the same time, we notice that as the size of the geometric coreset increases from 10 to 100, the accuracy does not rise monotonically. The experiments show that there is a peak in the performance with respect to the size of the coresets. What is more, this shows that the coreset reduces the dimension of the vertex embedding. Meanwhile, it also reflects that the larger the coreset, the more redundant information is absorbed from the space. **Every manifold counts.** Given the introduction of GCN in the spherical manifold, we conduct parallel-space ablation experiments on our model. Experiments are performed using only one of the three spaces, or discarding one of them. The experimental results over classic citation networks are shown in Table 5. As a result, centroid mapping in the Euclidean manifold alone can achieve good results on PubMed, the hyperbolic manifold performs better on CiteSeer, and the spherical manifold suits CORA, which supports the conclusion in (Beng et al., 2019). It also shows that the Euclidean, hyperbolic, and spherical manifolds each partially capture the primary information of the nodes on these three graphs. In addition, any combination of manifolds can improve the experimental result, since the information each manifold captures is not the same. Finally, we can conclude that the information on the three spaces needs to be collected simultaneously to get better performance. Figure 7. Fluctuations when training GIL and FMGNN. ## 7. Conclusion In this paper, we propose FMGNN, a graph neural network that fuses embeddings from the Euclidean, hyperbolic, and spherical manifolds with embedding interaction and alignment during training. Such embedding alignment can benefit from the representation capability of each manifold for its best-fit graph structures, while reducing the centroid offset and performance fluctuation when combining the embeddings on different manifolds. We also propose geometric coreset centroid mapping to fuse the mutual relations preserved in each manifold with low dimensions and high efficiency. As a result, we outperform the baseline models on most of the selected benchmarks, especially on node classification tasks. In the future, we would like to further exploit and propose embedding mappings and interactions among different manifolds with conformal or length-preserving insights with theoretical guarantees. \begin{table} \begin{tabular}{c|c c c} \hline & CORA & PubMed & CiteSeer \\ \hline \(\mathbb{R}\) & 81.3 & 79.1 & 72.8 \\ \(\mathbb{H}\) & 81.1 & 78.4 & 73.3 \\ \(\mathbb{S}\) & 81.8 & 78.9 & 72.0 \\ \(\mathbb{R}+\mathbb{H}\) & 81.7 & 79.7 & 74.4 \\ \(\mathbb{R}+\mathbb{S}\) & 82.5 & 79.9 & 73.8 \\ \(\mathbb{S}+\mathbb{H}\) & 82.1 & 79.5 & 74.1 \\ \(\mathbb{R}+\mathbb{H}+\mathbb{S}\) & **83.9** & **80.4** & **78.1** \\ \hline \end{tabular} \end{table} Table 5. FMGNN manifolds ablation experiments results. Figure 8. ACC on the NC task as a function of coreset size, where the X-axis denotes the coreset size and the Y-axis denotes the ACC of FMGNN.
2310.10013
Riemannian Residual Neural Networks
Recent methods in geometric deep learning have introduced various neural networks to operate over data that lie on Riemannian manifolds. Such networks are often necessary to learn well over graphs with a hierarchical structure or to learn over manifold-valued data encountered in the natural sciences. These networks are often inspired by and directly generalize standard Euclidean neural networks. However, extending Euclidean networks is difficult and has only been done for a select few manifolds. In this work, we examine the residual neural network (ResNet) and show how to extend this construction to general Riemannian manifolds in a geometrically principled manner. Originally introduced to help solve the vanishing gradient problem, ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks. We find that our Riemannian ResNets mirror these desirable properties: when compared to existing manifold neural networks designed to learn over hyperbolic space and the manifold of symmetric positive definite matrices, we outperform both kinds of networks in terms of relevant testing metrics and training dynamics.
Isay Katsman, Eric Ming Chen, Sidhanth Holalkere, Anna Asch, Aaron Lou, Ser-Nam Lim, Christopher De Sa
2023-10-16T02:12:32Z
http://arxiv.org/abs/2310.10013v1
# Riemannian Residual Neural Networks ###### Abstract Recent methods in geometric deep learning have introduced various neural networks to operate over data that lie on Riemannian manifolds. Such networks are often necessary to learn well over graphs with a hierarchical structure or to learn over manifold-valued data encountered in the natural sciences. These networks are often inspired by and directly generalize standard Euclidean neural networks. However, extending Euclidean networks is difficult and has only been done for a select few manifolds. In this work, we examine the residual neural network (ResNet) and show how to extend this construction to general Riemannian manifolds in a geometrically principled manner. Originally introduced to help solve the vanishing gradient problem, ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks. We find that our Riemannian ResNets mirror these desirable properties: when compared to existing manifold neural networks designed to learn over hyperbolic space and the manifold of symmetric positive definite matrices, we outperform both kinds of networks in terms of relevant testing metrics and training dynamics. ## 1 Introduction In machine learning, it is common to represent data as vectors in Euclidean space (i.e. \(\mathbb{R}^{n}\)). The primary reason for such a choice is convenience, as vectors to the input points, thereby naturally generalizing a typical Euclidean residual addition. This process is illustrated in Figure 1. Note that this strategy is exceptionally natural, only making use of inherent geodesic geometry, and works generally for all smooth manifolds. We refer to such networks as Riemannian residual neural networks. Though the above approach is principled, it is underspecified, as constructing an efficient learnable vector field for a given manifold is often nontrivial. To resolve this issue, we present a general way to induce a learnable vector field for a manifold \(\mathcal{M}\) given only a map \(f:\mathcal{M}\rightarrow\mathbb{R}^{k}\). Ideally, this map should capture intrinsic manifold geometry. For example, in the context of Euclidean space, this map could consist of a series of \(k\) projections onto hyperplanes. There is a natural equivalent of this in hyperbolic space that instead projects to horospheres (horospheres correspond to hyperplanes in Euclidean space). More generally, we propose a feature map that once more relies only on geodesic information, consisting of projection to random (or learned) geodesic balls. This final approach provides a fully geometric way to construct vector fields, and therefore natural residual networks, for any Riemannian manifold. After introducing our general theory, we give concrete manifestations of vector fields, and therefore residual neural networks, for hyperbolic space and the manifold of SPD matrices. We compare the performance of our Riemannian residual neural networks to that of existing manifold-specific networks on hyperbolic space and on the manifold of SPD matrices, showing that our networks perform much better in terms of relevant metrics due to their improved adherence to manifold geometry. Our contributions are as follows: 1. We introduce a novel and principled generalization of residual neural networks to general Riemannian manifolds. Our construction relies only on knowledge of geodesics, which capture manifold geometry. 2. 
Theoretically, we show that our methodology better captures manifold geometry than pre-existing manifold-specific neural network constructions. Empirically, we apply our general construction to hyperbolic space and to the manifold of SPD matrices. On various hyperbolic graph datasets (where hyperbolicity is measured by Gromov \(\delta\)-hyperbolicity) our method considerably outperforms existing work on both link prediction and node classification tasks. On various SPD covariance matrix classification datasets, a similar conclusion holds. 3. Our method provides a way to directly vary the geometry of a given neural network without having to construct particular operations on a per-manifold basis. This provides the novel capability to directly compare the effect of geometric representation (in particular, evaluating the difference between a given Riemannian manifold \((\mathcal{M},g)\) and Euclidean space \((\mathbb{R}^{n},||\cdot||_{2})\)) while fixing the network architecture. ## 2 Related Work Our work is related to but distinctly different from existing neural ordinary differential equation (ODE) [9] literature as well a series of papers that have attempted generalizations of neural networks to specific manifolds such as hyperbolic space [17] and the manifold of SPD matrices [26]. ### Residual Networks and Neural ODEs Residual networks (ResNets) were originally developed to enable training of larger networks, previously prone to vanishing and exploding gradients [23]. Later on, many discovered that by adding a learned residual, ResNets are similar to Euler's method [9; 21; 37; 45; 53]. More specifically, the ResNet represented by \(\textbf{h}_{t+1}=\textbf{h}_{t}+f(\textbf{h},\theta_{t})\) for \(\textbf{h}_{t}\in\mathbb{R}^{D}\) mimics the dynamics of the ODE defined by \(\frac{d\textbf{h}(t)}{dt}=f(\textbf{h}(t),t,\theta)\). Neural ODEs are defined precisely as ODEs of this form, where Figure 1: An illustration of a manifold-generalized residual addition. The traditional Euclidean formula \(p\gets p+v\) is generalized to \(p\leftarrow\exp_{p}(v)\), where \(\exp\) is the Riemannian exponential map. \(\mathcal{M}\) is the manifold and \(T_{p}\mathcal{M}\) is the tangent space at \(p\). the local dynamics are given by a parameterized neural network. Similar to our work, Falorsi and Forre [15], Katsman et al. [29], Lou et al. [36], Mathieu and Nickel [38] generalize neural ODEs to Riemannian manifolds (further generalizing manifold-specific work such as Bose et al. [3], that does this for hyperbolic space). However, instead of using a manifold's vector fields to solve a neural ODE, we learn an objective by parameterizing the vector fields directly (Figure 2). Neural ODEs and their generalizations to manifolds parameterize a continuous collection of vector fields over time for a single manifold in a dynamic flow-like construction. Our method instead parameterizes a discrete collection of vector fields, entirely untethered from any notion of solving an ODE. This makes our construction a strict generalization of both neural ODEs and their manifold equivalents [15; 29; 36; 38]. ### Riemannian Neural Networks Past literature has attempted generalizations of Euclidean neural networks to a number of manifolds. **Hyperbolic Space** Ganea et al. [17] extended basic neural network operations (e.g. activation function, linear layer, recurrent architectures) to conform with the geometry of hyperbolic space through gyrovector constructions [51]. 
In particular, they use gyrovector constructions [51] to build analogues of activation functions, linear layers, and recurrent architectures. Building on this approach, Chami et al. [8] adapt these constructions to hyperbolic versions of the feature transformation and neighborhood aggregation steps found in message passing neural networks. Additionally, batch normalization for hyperbolic space was introduced in Lou et al. [35]; hyperbolic attention network equivalents were introduced in Gulcehre et al. [20]. Although gyrovector constructions are algebraic and allow for generalization of neural network operations to hyperbolic space and beyond, we note that they do not capture intrinsic geodesic geometry. In particular, we note that the gyrovector-based hyperbolic linear layer introduced in Ganea et al. [17] reduces to a Euclidean matrix multiplication followed by a learned hyperbolic bias addition (see Appendix D.2). Hence all non-Euclidean learning for this case happens through the bias term. In an attempt to resolve this, further work has focused on imbuing these neural networks with more hyperbolic functions [10; 49]. Chen et al. [10] notably constructs a hyperbolic residual layer by projecting an output onto the Lorentzian manifold. However, we emphasize that our construction is more general while being more geometrically principled as we work with fundamental manifold operations like the exponential map rather than relying on the niceties of Lorentz space. Yu and De Sa [55] make use of randomized hyperbolic Laplacian features to learn in hyperbolic space. We note that the features learned are shallow and are constructed from a specific manifestation of the Laplace-Beltrami operator for hyperbolic space. In contrast, our method is general and enables non-shallow (i.e., multi-layer) feature learning. **SPD Manifold** Neural network constructs have been extended to the manifold of symmetric positive definite (SPD) matrices as well. In particular, SPDNet [26] is an example of a widely adopted SPD manifold neural network which introduced SPD-specific layers analogous to Euclidean linear and ReLU layers. Building upon SPDNet, Brooks et al. [5] developed a batch normalization method to be used with SPD data. Additionally, Lopez et al. [34] adapted gyrocalculus constructions used in hyperbolic space to the SPD manifold. **Symmetric Spaces** Further work attempts generalization to symmetric spaces. Sonoda et al. [50] design fully-connected networks over noncompact symmetric spaces using particular theory from Helgason-Fourier analysis [25], and Chakraborty et al. [7] attempt to generalize several operations such as convolution to such spaces by adapting and developing a weighted Frechet mean construction. We note that the Helgason-Fourier construction in Sonoda et al. [50] exploits a fairly particular structure, while the weighted Frechet mean construction in Chakraborty et al. [7] is specifically introduced for convolution, which is not the focus of our work (we focus on residual connections). Unlike any of the manifold-specific work described above, our residual network construction can be applied generally to any smooth manifold and is constructed solely from geodesic information. ## 3 Background In this section, we cover the necessary background for our paper; in particular, we introduce the reader to the necessary constructs from Riemannian geometry. For a detailed introduction to Riemannian geometry, we refer the interested reader to textbooks such as Lee [32]. 
### Riemannian Geometry A topological manifold \(\mathcal{M}\) of dimension \(n\) is a locally Euclidean space, meaning there exist homeomorphic1 functions (called "charts") whose domains both cover the manifold and map from the manifold into \(\mathbb{R}^{n}\) (i.e. the manifold "looks like" \(\mathbb{R}^{n}\) locally). A smooth manifold is a topological manifold for which the charts are not simply homeomorphic, but diffeomorphic, meaning they are smooth bijections mapping into \(\mathbb{R}^{n}\) and have smooth inverses. We denote \(T_{p}\mathcal{M}\) as the tangent space at a point \(p\) of the manifold \(\mathcal{M}\). Further still, a Riemannian manifold2 \((\mathcal{M},g)\) is an \(n\)-dimensional smooth manifold with a smooth collection of inner products \((g_{p})_{p\in\mathcal{M}}\), one for every tangent space \(T_{p}\mathcal{M}\). The Riemannian metric \(g\) induces a distance \(d_{g}:\mathcal{M}\times\mathcal{M}\to\mathbb{R}\) on the manifold. Footnote 1: A homeomorphism is a continuous bijection with continuous inverse. Footnote 2: Note that imposing Riemannian structure does not considerably limit the generality of our method, as any smooth manifold that is Hausdorff and second countable has a Riemannian metric [32]. ### Geodesics and the Riemannian Exponential Map **Geodesics** A geodesic is a curve of minimal length between two points \(p,q\in\mathcal{M}\), and can be seen as the generalization of a straight line in Euclidean space. Although a choice of Riemannian metric \(g\) on \(\mathcal{M}\) appears to only define geometry locally on \(\mathcal{M}\), it induces global distances by integrating the length (of the "speed" vector in the tangent space) of a shortest path between two points: \[d(p,q)=\inf_{\gamma}\int_{0}^{1}\sqrt{g_{\gamma(t)}(\gamma^{\prime}(t),\gamma^{\prime}(t))}\,dt \tag{1}\] where \(\gamma\in C^{\infty}([0,1],\mathcal{M})\) is such that \(\gamma(0)=p\) and \(\gamma(1)=q\). For \(p\in\mathcal{M}\) and \(v\in T_{p}\mathcal{M}\), there exists a unique geodesic \(\gamma_{v}\) with \(\gamma_{v}(0)=p\) and \(\gamma_{v}^{\prime}(0)=v\) whose domain is as large as possible. We call \(\gamma_{v}\) the maximal geodesic [32]. **Exponential Map** The Riemannian exponential map is a way to map \(T_{p}\mathcal{M}\) to a neighborhood around \(p\) using geodesics. The relationship between the tangent space and the exponential map output can be thought of as a local linearization, meaning that we can perform typical Euclidean operations in the tangent space before projecting to the manifold via the exponential map to capture the local on-manifold behavior corresponding to the tangent space operations. For \(p\in\mathcal{M}\) and \(v\in T_{p}\mathcal{M}\), the exponential map at \(p\) is defined as \(\exp_{p}(v)=\gamma_{v}(1)\). One can think of \(\exp\) as a manifold generalization of Euclidean addition, since in the Euclidean case we have \(\exp_{p}(v)=p+v\). Figure 2: A visualization of a Riemannian residual neural network on a manifold \(\mathcal{M}\). Our model parameterizes vector fields on a manifold. At each layer in our network, we take a step from a point in the direction of that vector field (brown), which is analogous to the residual step in a ResNet.
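To make this concrete, the following is a minimal numerical sketch of one residual step \(p\leftarrow\exp_{p}(v)\) in the Poincaré ball model of hyperbolic space (curvature \(-1\)), using the closed-form Möbius addition and exponential map popularized by Ganea et al. [17]; the specific point and tangent vector are purely illustrative.

```python
import numpy as np

def mobius_add(x, y):
    # Mobius addition on the Poincare ball (curvature -1).
    xy = np.dot(x, y)
    nx2, ny2 = np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * xy + ny2) * x + (1 - nx2) * y
    den = 1 + 2 * xy + nx2 * ny2
    return num / den

def exp_map(p, v):
    # exp_p(v) = p (+) tanh(lam_p * ||v|| / 2) * v / ||v||,
    # with conformal factor lam_p = 2 / (1 - ||p||^2).
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return p
    lam_p = 2.0 / (1.0 - np.dot(p, p))
    return mobius_add(p, np.tanh(lam_p * norm_v / 2.0) * v / norm_v)

p = np.array([0.3, 0.1])           # a point in the Poincare disk (||p|| < 1)
v = np.array([0.05, -0.2])         # a tangent vector at p
p_next = exp_map(p, v)             # the Riemannian residual step
assert np.linalg.norm(p_next) < 1  # the step stays on the manifold
```

A single call to `exp_map` is exactly the generalized residual addition of Figure 1, reducing to \(p+v\) in the flat limit.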
### Vector Fields Let \(T_{p}\mathcal{M}\) be the tangent space to a manifold \(\mathcal{M}\) at a point \(p\). Like in Euclidean space, a vector field assigns to each point \(p\in\mathcal{M}\) a tangent vector \(X_{p}\in T_{p}\mathcal{M}\). A smooth vector field assigns a tangent vector \(X_{p}\in T_{p}\mathcal{M}\) to each point \(p\in\mathcal{M}\) such that \(X_{p}\) varies smoothly in \(p\). **Tangent Bundle** The tangent bundle of a smooth manifold \(\mathcal{M}\) is the disjoint union of the tangent spaces \(T_{p}\mathcal{M}\), for all \(p\in\mathcal{M}\), denoted by \(T\mathcal{M}:=\bigsqcup_{p\in\mathcal{M}}T_{p}\mathcal{M}=\bigsqcup_{p\in\mathcal{M}}\{(p,v)\mid v\in T_{p}\mathcal{M}\}\). **Pushforward** A derivative (also called a _pushforward_) of a map \(f:\mathcal{M}\to\mathcal{N}\) between two manifolds is denoted by \(D_{p}f:T_{p}\mathcal{M}\to T_{f(p)}\mathcal{N}\). This is a generalization of the classical Euclidean Jacobian (since \(\mathbb{R}^{n}\) is a manifold), and provides a way to relate tangent spaces at different points on different manifolds. **Pullback** Given \(\phi:\mathcal{M}\to\mathcal{N}\) a smooth map between manifolds and \(f:\mathcal{N}\to\mathbb{R}\) a smooth function, the pullback of \(f\) by \(\phi\) is the smooth function \(\phi^{*}f\) on \(\mathcal{M}\) defined by \((\phi^{*}f)(x)=f(\phi(x))\). When the map \(\phi\) is implicit, we simply write \(f^{*}\) to mean the pullback of \(f\) by \(\phi\). ### Model Spaces in Riemannian Geometry The three Riemannian model spaces are Euclidean space \(\mathbb{R}^{n}\), hyperbolic space \(\mathbb{H}^{n}\), and spherical space \(\mathbb{S}^{n}\), which encompass all manifolds with constant sectional curvature. Hyperbolic space manifests in several representations like the Poincaré ball, Lorentz space, and the Klein model. We use the Poincaré ball model for our Riemannian ResNet design (see Appendix A for more details on the Poincaré ball model). ### SPD Manifold Let \(SPD(n)\) be the manifold of \(n\times n\) symmetric positive definite (SPD) matrices. We recall from Gallier and Quaintance [16] that \(SPD(n)\) has a Riemannian exponential map (at the identity) equivalent to the matrix exponential. Two common metrics used for \(SPD(n)\) are the log-Euclidean metric [16], which induces a flat structure on the matrices, and the canonical affine-invariant metric [12; 42], which induces non-constant negative sectional curvature. The latter gives \(SPD(n)\) a considerably less trivial geometry than that exhibited by the Riemannian model spaces [2] (see Appendix A for more details on \(SPD(n)\)). ## 4 Methodology In this section, we provide the technical details behind Riemannian residual neural networks. ### 4.1 General Construction We define a **Riemannian Residual Neural Network** (RResNet) on a manifold \(\mathcal{M}\) to be a function \(f:\mathcal{M}\to\mathcal{M}\) defined by Figure 3: An overview of our generalized Riemannian Residual Neural Network (RResNet) methodology. We start by mapping \(x^{(0)}\in\mathcal{M}^{(0)}\) to \(\chi^{(1)}\in\mathcal{M}^{(1)}\) using a base point mapping \(h_{1}\). Then, using our parameterized vector field \(\ell_{i}\), we compute a residual \(v^{(1)}:=\ell_{1}(\chi^{(1)})\). Finally, we project \(v^{(1)}\) back onto the manifold using the Riemannian \(\exp\) map, leaving us with \(x^{(1)}\). This procedure can be iterated to produce a multi-layer Riemannian residual neural network that is capable of changing manifold representation on a per-layer basis.
\[f(x):=x^{(m)} \tag{2}\] \[x^{(0)}:=x \tag{3}\] \[x^{(i)}:=\exp_{x^{(i-1)}}(\ell_{i}(x^{(i-1)})) \tag{4}\] for \(x\in\mathcal{M}\), where \(m\) is the number of layers and \(\ell_{i}:\mathcal{M}\to T\mathcal{M}\) is a neural network-parameterized vector field over \(\mathcal{M}\). This residual network construction is visualized for the purpose of intuition in Figure 2. In practice, parameterizing a function from an abstract manifold \(\mathcal{M}\) to its tangent bundle is difficult. However, by the Whitney embedding theorem [33], we can embed \(\mathcal{M}\hookrightarrow\mathbb{R}^{D}\) smoothly for some dimension \(D\geq\dim\mathcal{M}\). As such, for a standard neural network \(n_{i}:\mathbb{R}^{D}\to\mathbb{R}^{D}\) we can construct \(\ell_{i}\) by \[\ell_{i}(x):=\operatorname{proj}_{T_{x}\mathcal{M}}(n_{i}(x)) \tag{5}\] where we note that \(T_{x}\mathcal{M}\subset\mathbb{R}^{D}\) is a linear subspace (making the projection operator well defined). Throughout the paper we call this the embedded vector field design3. We note that this is the same construction used for defining the vector field flow in Lou et al. [36], Mathieu and Nickel [38], Rozen et al. [44]. Footnote 3: Ideal vector field design is in general nontrivial and the embedded vector field is not a good choice for all manifolds (see Appendix B). We also extend our construction to work in settings where the underlying manifold changes from layer to layer. In particular, for a sequence of manifolds \(\mathcal{M}^{(0)},\mathcal{M}^{(1)},\ldots,\mathcal{M}^{(m)}\) with (possibly learned) maps \(h_{i}:\mathcal{M}^{(i-1)}\to\mathcal{M}^{(i)}\), our Riemannian ResNet \(f:\mathcal{M}^{(0)}\to\mathcal{M}^{(m)}\) is given by \[f(x):=x^{(m)} \tag{6}\] \[x^{(0)}:=x \tag{7}\] \[x^{(i)}:=\exp_{h_{i}(x^{(i-1)})}(\ell_{i}(h_{i}(x^{(i-1)}))),\quad\forall i\in[m] \tag{8}\] with functions \(\ell_{i}:\mathcal{M}^{(i)}\to T\mathcal{M}^{(i)}\) given as above. This generalization is visualized in Figure 3. In practice, our \(\mathcal{M}^{(i)}\) will be different-dimensional versions of the same geometric space (e.g. \(\mathbb{H}^{n}\) or \(\mathbb{R}^{n}\) for varying \(n\)). If the starting and ending manifolds are the same, the maps \(h_{i}\) will simply be standard inclusions. When the starting and ending manifolds are different, the \(h_{i}\) may be standard neural networks for which we project the output, or the \(h_{i}\) may be specially designed learnable maps that respect manifold geometry. As a concrete example, our \(h_{i}\) for the SPD case map from an SPD matrix of one dimension to another by conjugating with a Stiefel matrix [26]. Furthermore, as shown in Appendix D, our model is equivalent to the standard ResNet when the underlying manifold is \(\mathbb{R}^{n}\). **Comparison with Other Constructions** We discuss how our construction compares with other methods in Appendix E, but here we briefly note that unlike other methods, our presented approach is fully general and better conforms with manifold geometry. ### 4.2 Feature Map-Induced Vector Field Design Most of the difficulty in applying our general vector field construction comes from the design of the learnable vector fields \(\ell_{i}:\mathcal{M}^{(i)}\to T\mathcal{M}^{(i)}\). Although we give an embedded vector field design above, it is not very principled geometrically. We would like to considerably restrict these vector fields so that their range is informed by the underlying geometry of \(\mathcal{M}\).
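Before doing so, it may help to see the embedded design of Eq. (5) in action. Below is a minimal sketch on the unit sphere \(\mathbb{S}^{D-1}\), where the tangent projection is \(\operatorname{proj}_{T_{x}\mathcal{M}}(u)=u-\langle u,x\rangle x\) and the exponential map has the closed form \(\exp_{p}(v)=\cos(\|v\|)p+\sin(\|v\|)v/\|v\|\); the random linear maps are illustrative stand-ins for the networks \(n_{i}\).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3  # ambient dimension; the manifold is the unit sphere S^{D-1}

def sphere_exp(p, v):
    # exp_p(v) = cos(||v||) p + sin(||v||) v / ||v|| on the unit sphere.
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

def tangent_proj(p, u):
    # Orthogonal projection of an ambient vector onto T_p S^{D-1}.
    return u - np.dot(u, p) * p

# Embedded vector field l_i(x) = proj_{T_x M}(n_i(x)), Eq. (5),
# with a random linear map standing in for each network n_i.
Ws = [rng.normal(scale=0.1, size=(D, D)) for _ in range(4)]  # 4 "layers"

x = np.array([1.0, 0.0, 0.0])   # x^{(0)} on the sphere
for W in Ws:                    # x^{(i)} = exp_{x^{(i-1)}}(l_i(x^{(i-1)}))
    v = tangent_proj(x, W @ x)
    x = sphere_exp(x, v)
print(np.linalg.norm(x))        # ~1.0: every iterate stays on S^{D-1}
```

Every operation above is generic; the remainder of this section restricts \(\ell_{i}\) using intrinsic geometry.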
For this, we note that it is possible to induce a vector field \(\xi:\mathcal{M}\to T\mathcal{M}\) for a manifold \(\mathcal{M}\) with any smooth map \(f:\mathcal{M}\to\mathbb{R}^{k}\). In practice, this map should capture intrinsic geometric properties of \(\mathcal{M}\) and can be viewed as a feature map, or de facto linearization of \(\mathcal{M}\). Given an \(x\in\mathcal{M}\), we need only pass \(x\) through \(f\) to get its feature representation in \(\mathbb{R}^{k}\), then note that since: \[D_{p}f:T_{p}\mathcal{M}\to T_{f(p)}\mathbb{R}^{k},\] we have an induced map: \[(D_{p}f)^{*}:(T_{f(p)}\mathbb{R}^{k})^{*}\to(T_{p}\mathcal{M})^{*},\] where \((D_{p}f)^{*}\) is the pullback of \(D_{p}f\). Note that \(T_{p}\mathbb{R}^{k}\cong\mathbb{R}^{k}\) and \((\mathbb{R}^{k})^{*}\cong\mathbb{R}^{k}\) by the dual space isomorphism. Moreover \((T_{p}\mathcal{M})^{*}\cong T_{p}\mathcal{M}\) by the tangent-cotangent space isomorphism [33]. Hence, we have the induced map: \[(D_{p}f)^{*}_{r}:\mathbb{R}^{k}\to T_{p}\mathcal{M},\] obtained from \((D_{p}f)^{*}\), simply by both precomposing and postcomposing the aforementioned isomorphisms, where relevant. \((D_{p}f)_{r}^{*}\) provides a natural way to map from the feature representation to the tangent bundle. Thus, we may view the map \(\ell_{f}:\mathcal{M}\to T\mathcal{M}\) given by: \[\ell_{f}(x)=(D_{x}f)_{r}^{*}(f(x))\] as a deterministic vector field induced entirely by \(f\). **Learnable Feature Map-Induced Vector Fields** We can easily make the above vector field construction learnable by introducing a Euclidean neural network \(n_{\theta}:\mathbb{R}^{k}\to\mathbb{R}^{k}\) after \(f\) to obtain \(\ell_{f,\theta}(x)=(D_{x}f)^{*}(n_{\theta}(f(x)))\). **Feature Map Design** One possible way to simplify the design of the above vector field is to further break down the map \(f:\mathcal{M}\to\mathbb{R}^{k}\) into \(k\) maps \(f_{1},\dots,f_{k}:\mathcal{M}\to\mathbb{R}\), where ideally, each map \(f_{i}\) is constructed in a similar way (e.g. performing some kind of geometric projection, where the \(f_{i}\) vary only in terms of the specifying parameters). As we shall see in the following subsection, this ends up being a very natural design decision. In what follows, we shall consider only smooth feature maps \(f:\mathcal{M}\to\mathbb{R}^{k}\) induced by a single parametric construction \(g_{\theta}:\mathcal{M}\to\mathbb{R}\), i.e. the \(k\) dimensions of the output of \(f\) are given by different choices of \(\theta\) for the same underlying feature map4. This approach also has the benefit of a very simple interpretation of the induced vector field. Given feature maps \(g_{\theta_{1}},\dots,g_{\theta_{k}}:\mathcal{M}\to\mathbb{R}\) that comprise our overall feature map \(f:\mathcal{M}\to\mathbb{R}^{k}\), our vector field is simply a linear combination of the maps \(\nabla g_{\theta_{i}}:\mathcal{M}\to T\mathcal{M}\). If the \(g_{\theta_{i}}\) are differentiable with respect to \(\theta_{i}\), we can even learn the \(\theta_{i}\) themselves. Footnote 4: We use the term “feature map” for both the overall feature map \(f:\mathcal{M}\to\mathbb{R}^{k}\) and for the inducing construction \(g_{\theta}:\mathcal{M}\to\mathbb{R}\). This is well-defined since in our work we consider only feature maps \(f:\mathcal{M}\to\mathbb{R}^{k}\) that are induced by some \(g_{\theta}:\mathcal{M}\to\mathbb{R}\). 
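In code, the pullback \((D_{x}f)^{*}\) is just a vector-Jacobian product, so automatic differentiation realizes \(\ell_{f,\theta}\) directly. The sketch below works in embedded coordinates and projects the result onto the tangent space (here, of the unit sphere); the linear feature map and the network \(n_{\theta}\) are illustrative stand-ins rather than the constructions of any particular experiment.

```python
import torch

k, D = 8, 3
W = 0.1 * torch.randn(k, D)        # illustrative linear feature map f(x) = Wx
n_theta = torch.nn.Sequential(torch.nn.Linear(k, k), torch.nn.Tanh(),
                              torch.nn.Linear(k, k))

def tangent_proj(p, u):
    # Projection onto the tangent space of the unit sphere at p.
    return u - torch.dot(u, p) * p

def induced_vector_field(x):
    # l_{f,theta}(x) = (D_x f)^*( n_theta(f(x)) ), via a vector-Jacobian product.
    x = x.detach().requires_grad_(True)
    feats = W @ x                                   # f(x) in R^k
    w = n_theta(feats)                              # n_theta(f(x)) in R^k
    (vjp,) = torch.autograd.grad(feats, x, grad_outputs=w)  # J^T w in R^D
    return tangent_proj(x.detach(), vjp)

x = torch.tensor([1.0, 0.0, 0.0])  # a point on S^{D-1}
v = induced_vector_field(x)
print(torch.dot(v, x).item())      # ~0: v lies in the tangent space at x
```

Because the pullback is realized by autograd, swapping in a different feature map changes only the forward computation of `feats`.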
#### 4.2.1 Manifold Manifestations In this section, in an effort to showcase how simple it is to apply our above theory to come up with natural vector field designs, we present several constructions of manifold feature maps \(g_{\theta}:\mathcal{M}\to\mathbb{R}\) that capture the underlying geometry of \(\mathcal{M}\) for various choices of \(\mathcal{M}\). Namely, in this section we provide several examples of \(f:\mathcal{M}\to\mathbb{R}\) that induce \(\ell_{f}:\mathcal{M}\to T\mathcal{M}\), thereby giving rise to a Riemannian neural network by Section 4.1. **Euclidean Space** To build intuition, we begin with an instructive case. We consider designing a feature map for the Euclidean space \(\mathbb{R}^{n}\). A natural design would follow simply by considering hyperplane projection. Let a hyperplane \(w^{T}x+b=0\) be specified by \(w\in\mathbb{R}^{n},b\in\mathbb{R}\). Then a natural feature map \(g_{w,b}:\mathbb{R}^{n}\to\mathbb{R}\) parameterized by the hyperplane parameters is given by hyperplane projection [14]: \(g_{w,b}(x)=\frac{|w^{T}x+b|}{||w||_{2}}\). **Hyperbolic Space** We wish to construct a natural feature map for hyperbolic space. Seeking to follow the construction given in the Euclidean context, we wish to find a hyperbolic analog of hyperplanes. This is provided to us via the notion of horospheres [24]. Illustrated in Figure 4, horospheres naturally generalize hyperplanes to hyperbolic space. We specify a horosphere in the Poincare ball model of hyperbolic space \(\mathbb{H}^{n}\) by a point of tangency \(\omega\in\mathbb{S}^{n-1}\) and a real value \(b\in\mathbb{R}\). Then a natural feature map \(g_{\omega,b}:\mathbb{H}^{n}\to\mathbb{R}\) parameterized by the horosphere parameters would be given by horosphere projection [4]: \(g_{\omega,b}(x)=-\log\left(\frac{1-||x||_{2}^{2}}{||x-\omega||_{2}^{2}}\right)+b\). **Symmetric Positive Definite Matrices** The manifold of SPD matrices is an example of a manifold where there is no innate representation of a hyperplane. Instead, given \(X\in SPD(n)\), a reasonable feature map \(g_{k}:SPD(n)\to\mathbb{R}\), parameterized by \(k\), is to map \(X\) to its \(k\)th largest eigenvalue: \(g_{k}(X)=\lambda_{k}\). Figure 4: Example of a horosphere in the Poincaré ball representation of hyperbolic space. In this particular two-dimensional case, the hyperbolic space \(\mathbb{H}_{2}\) is visualized via the Poincaré disk model, and the horosphere, shown in blue, is called a horocycle. **General Manifolds** For general manifolds there is no perfect analog of a hyperplane, and hence there is no immediately natural feature map. Although this is the case, it is possible to come up with a reasonable alternative. We present such an alternative in Appendix B.4 together with pertinent experiments. _Example: Euclidean Space_ One motivation for the vector field construction \(\ell_{f}(x)=(D_{x}f)_{r}^{*}(f(x))\) is that in the Euclidean case, \(\ell_{f}\) will reduce to a standard linear layer (because the maps \(f\) and \((D_{x}f)^{*}\) are linear), which, in combination with the Euclidean \(\exp\) map, will produce a standard Euclidean residual neural network. Explicitly, for the Euclidean case, note that our feature map \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{k}\) will, for example, take the form \(f(x)=Wx,W\in\mathbb{R}^{k\times n}\) (here we have \(b=0\) and \(W\) has normalized row vectors). Then note that we have \(Df=W\) and \((Df)^{*}=W^{T}\). 
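For reference, here is a minimal sketch of the three scalar feature maps just described; stacking \(k\) of them with different parameters yields \(f:\mathcal{M}\to\mathbb{R}^{k}\). The specific inputs are illustrative only.

```python
import numpy as np

def hyperplane_feature(x, w, b):
    # Euclidean: distance from x to the hyperplane w^T x + b = 0.
    return abs(w @ x + b) / np.linalg.norm(w)

def horosphere_feature(x, omega, b):
    # Hyperbolic (Poincare ball): projection onto the horosphere with
    # ideal point omega on the boundary sphere and offset b.
    return -np.log((1.0 - x @ x) / ((x - omega) @ (x - omega))) + b

def spd_feature(X, k):
    # SPD manifold: the k-th largest eigenvalue of X.
    return np.linalg.eigvalsh(X)[-k]

x = np.array([0.2, -0.3])                # a point in the Poincare disk
omega = np.array([1.0, 0.0])             # point of tangency on the boundary
print(horosphere_feature(x, omega, b=0.0))
A = np.random.randn(3, 3)
X = A @ A.T + np.eye(3)                  # an SPD matrix
print(spd_feature(X, k=1))               # largest eigenvalue
```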
We see for the standard feature map-based construction, our vector field \(\ell_{f}(x)=(D_{x}f)^{*}(f(x))\) takes the form \(\ell_{f}(x)=W^{T}Wx\). For the learnable case (which is standard for us, given that we learn Riemannian residual neural networks), when the manifold is Euclidean space, the general expression \(\ell_{f,\theta}(x)=(D_{x}f)^{*}(n_{\theta}(f(x)))\) becomes \(\ell_{f,\theta}(x)=W^{T}n_{\theta}(Wx)\). When the feature maps are trivial projections (onto axis-aligned hyperplanes), we have \(W=I\) and \(\ell_{f,\theta}(x)=n_{\theta}(x)\). Thus our construction can be viewed as a generalization of a standard neural network. ## 5 Experiments In this section, we perform a series of experiments to evaluate the effectiveness of RResNets on tasks arising on different manifolds. In particular, we explore hyperbolic space and the SPD manifold. ### Hyperbolic Space We perform numerous experiments in the hyperbolic setting. The purpose is twofold: 1. We wish to illustrate that our construction in Section 4 is not only more general, but also intrinsically more geometrically natural than pre-existing hyperbolic constructions such as HNN [17], and is thus able to learn better over hyperbolic data. 2. We would like to highlight that non-Euclidean learning benefits the most hyperbolic datasets. We can do this directly since our method provides a way to vary the geometry of a fixed neural network architecture, thereby allowing us to directly investigate the effect of changing geometry from Euclidean to hyperbolic. #### 5.1.1 Direct Comparison Against Hyperbolic Neural Networks [17] To demonstrate the improvement of RResNet over HNN [17], we first perform node classification (NC) and link prediction (LP) tasks on graph datasets with low Gromov \(\delta\)-hyperbolicity [8], which means the underlying structure of the data is highly hyperbolic. 
The RResNet model is given the \begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Dataset**} & \multicolumn{2}{c}{Disease} & \multicolumn{2}{c}{Airport} & \multicolumn{2}{c}{PubMed} & \multicolumn{2}{c}{CoRA} \\ & \multicolumn{2}{c}{**Hyperbolicity**} & \multicolumn{2}{c}{\(\delta=0\)} & \multicolumn{2}{c}{\(\delta=1\)} & \multicolumn{2}{c}{\(\delta=3.5\)} & \multicolumn{2}{c}{\(\delta=11\)} \\ \cline{2-10} & **Task** & **LP** & **NC** & **LP** & **NC** & **LP** & **NC** & **LP** & **NC** \\ \hline \multirow{6}{*}{**Feen**} & Eue & \(59.8_{\pm 2.0}\) & \(32.5_{\pm 1.1}\) & \(92.0_{\pm 0.0}\) & \(60.9_{\pm 3.4}\) & \(83.3_{\pm 0.1}\) & \(48.2_{\pm 0.7}\) & \(82.5_{\pm 0.3}\) & \(23.8_{\pm 0.7}\) \\ & Hyp [41] & \(63.5_{\pm 0.6}\) & \(45.5_{\pm 3.3}\) & \(94.5_{\pm 0.0}\) & \(70.2_{\pm 0.1}\) & \(87.5_{\pm 0.1}\) & \(68.5_{\pm 0.3}\) & \(87.6_{\pm 0.2}\) & \(22.0_{\pm 1.5}\) \\ & Eue-Mixed & \(49.6_{\pm 1.1}\) & \(35.2_{\pm 3.4}\) & \(91.5_{\pm 0.1}\) & \(68.3_{\pm 2.3}\) & \(86.0_{\pm 1.3}\) & \(63.0_{\pm 0.3}\) & \(84.4_{\pm 0.2}\) & \(46.1_{\pm 0.4}\) \\ & Hyp-Mixed & \(55.1_{\pm 1.3}\) & \(56.9_{\pm 1.5}\) & \(93.3_{\pm 0.0}\) & \(69.6_{\pm 0.1}\) & \(83.8_{\pm 0.3}\) & \(\mathbf{73.9_{\pm 0.2}}\) & \(85.6_{\pm 0.5}\) & \(45.9_{\pm 0.3}\) \\ \hline \multirow{6}{*}{**Feen**} & MLP & \(72.6_{\pm 0.6}\) & \(28.8_{\pm 2.5}\) & \(89.8_{\pm 0.5}\) & \(68.6_{\pm 0.6}\) & \(84.1_{\pm 0.9}\) & \(72.4_{\pm 0.2}\) & \(83.1_{\pm 0.5}\) & \(51.5_{\pm 1.0}\) \\ & HNN [17] & \(75.1_{\pm 0.3}\) & \(41.0_{\pm 1.8}\) & \(90.8_{\pm 0.2}\) & \(80.5_{\pm 0.5}\) & \(\mathbf{94.9_{\pm 0.1}}\) & \(69.8_{\pm 0.4}\) & \(\mathbf{89.0_{\pm 0.1}}\) & \(\mathbf{54.6_{\pm 0.4}}\) \\ \cline{1-1} & **RResNet** & **98.4**\({}_{\pm 0.3}\) & \(\mathbf{76.8_{\pm 2.0}}\) & \(\mathbf{95.2_{\pm 0.1}}\) & \(\mathbf{96.9_{\pm 0.3}}\) & \(\mathbf{95.0_{\pm 0.3}}\) & \(72.3_{\pm 1.7}\) & \(86.7_{\pm 0.3}\) & \(52.4_{\pm 5.5}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Above we give graph task results for RResNet Horo compared with several non-graph-based neural network baselines (baseline methods and metrics are from Chami et al. [8]). Test ROC AUC is the metric reported for link prediction (LP) and test F1 score is the metric reported for node classification (NC). Mean and standard deviation are given over five trials. Note that RResNet Horo considerably outperforms HNN on the most hyperbolic datasets, performing worse and worse as hyperbolicity increases, to a more extreme extent than previous methods that do not adhere to geometry as closely (this is expected). name "RResNet Horo." It utilizes a horosphere projection feature map-induced vector field described in Section 4. All model details are given in Appendix C.2. We find that because we adhere well to the geometry, we attain good performance on datasets with low Gromov \(\delta\)-hyperbolicities (e.g. \(\delta=0,\delta=1\)). As soon as the Gromov hyperbolicity increases considerably beyond that (e.g. \(\delta=3.5,\delta=11\)), performance begins to degrade since we are embedding non-hyperbolic data in an unnatural manifold geometry. Since we adhere to the manifold geometry more strongly than prior hyperbolic work, we see performance decay faster as Gromov hyperbolicity increases, as expected. In particular, we test on the very hyperbolic Disease (\(\delta=0\)) [8] and Airport (\(\delta=1\)) [8] datasets. We also test on the considerably less hyperbolic PubMed (\(\delta=3.5\)) [47] and CoRA (\(\delta=11\)) [46] datasets. 
We use all of the non-graph-based baselines from Chami et al. [8], since we wish to see how much we can learn strictly from a proper treatment of the embeddings (and no graph information). Table 1 summarizes the performance of "RResNet Horo" relative to these baselines. Moreover, we find considerable benefit from the feature map-induced vector field over an embedded vector field that simply uses a Euclidean network to map from a manifold point embedded in \(\mathbb{R}^{n}\). The horosphere projection captures geometry more accurately, and if we swap to an embedded vector field we see considerable accuracy drops on the two hardest hyperbolic tasks: Disease NC and Airport NC. In particular, for Disease NC the mean drops from \(76.8\) to \(75.0\), and for Airport NC we see a very large decrease from \(96.9\) to \(83.0\), indicating that geometry captured with a well-designed feature map is especially important. We conduct a more thorough vector field ablation study in Appendix C.5. #### 5.1.2 Impact of Geometry A major strength of our method is that it allows one to investigate the direct effect of geometry in obtaining results, since the architecture can remain the same for various manifolds and geometries (as specified by the metric of a given Riemannian manifold). This is well-illustrated in the most hyperbolic Disease NC setting, where swapping out hyperbolic for Euclidean geometry in an RResNet induced by an embedded vector field decreases the F1 score from a \(75.0\) mean to a \(67.3\) mean and induces a large amount of numerical instability, since the standard deviation increases from \(5.0\) to \(21.0\). We conduct a more thorough geometry ablation study in Appendix C.5. ### 5.2 SPD Manifold A common application of SPD manifold-based models is learning over full-rank covariance matrices, which lie on the manifold of SPD matrices. We compare our RResNet to SPDNet [26] and SPDNet with batch norm [5] on four video classification datasets: AFEW [13], FPHA [18], NTU RGB+D [48], and HDM05 [39]. Results are given in Table 2. Please see Appendix C.6 for details on the experimental setup. For our RResNet design, we try two different metrics: the log-Euclidean metric [16] and the affine-invariant metric [12; 42], each of which captures the curvature of the SPD manifold differently. We find that adding a learned residual improves performance and training dynamics over existing neural networks on SPD manifolds with little effect on runtime. We experiment with several vector field designs, which we outline in Appendix B. The best vector field design (given in Section 4.2), also the one we use for all SPD experiments, necessitates eigenvalue computation. We note the cost of computing eigenvalues is not a detrimental feature of our approach since previous works (SPDNet [26], SPDNet with batchnorm [5]) already make use of eigenvalue computation5. Empirically, we observe that the beneficial effects of our RResNet construction are similar to those of the SPD batch norm introduced in Brooks et al. [5] (Table 2, Figure 5 in Appendix C.6). In addition, we find that our operations are stable with ill-conditioned input matrices, which commonly occur in the wild. 
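As a concrete illustration of the two metrics compared above, the following is a minimal sketch of one residual step \(X\leftarrow\exp_{X}(V)\) under each; it assumes a simplified log-Euclidean step taken directly in the global matrix-log chart (a common simplification) together with the standard affine-invariant exponential map, and the matrices are illustrative stand-ins rather than our experimental data.

```python
import numpy as np

def sym_apply(P, fn):
    # Apply a scalar function to a symmetric matrix via eigendecomposition.
    w, U = np.linalg.eigh(P)
    return (U * fn(w)) @ U.T

def step_log_euclidean(P, V):
    # Residual step in the (flat) matrix-log chart: expm(logm(P) + V).
    return sym_apply(sym_apply(P, np.log) + V, np.exp)

def step_affine_invariant(P, V):
    # exp_P(V) = P^{1/2} expm(P^{-1/2} V P^{-1/2}) P^{1/2}.
    s = sym_apply(P, np.sqrt)
    s_inv = sym_apply(P, lambda w: 1.0 / np.sqrt(w))
    return s @ sym_apply(s_inv @ V @ s_inv, np.exp) @ s

A = np.random.randn(4, 4)
P = A @ A.T + 1e-6 * np.eye(4)        # a possibly ill-conditioned SPD matrix
V = np.random.randn(4, 4)
V = 0.1 * (V + V.T)                   # a symmetric tangent vector
for step in (step_log_euclidean, step_affine_invariant):
    Q = step(P, V)
    print(np.linalg.eigvalsh(Q).min() > 0)  # the step stays on SPD(n)
```

Both steps return an SPD matrix by construction, matching the stability just noted.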
To contrast, the batch norm computation in SPDNetBN, which relies on Karcher flow \begin{table} \begin{tabular}{l l l l l} \hline \hline & AFEW[13] & FPHA[18] & NTU RGB+D[48] & HDM05[39] \\ \hline SPDNet & \(33.24_{\pm 0.56}\) & \(65.39_{\pm 1.48}\) & \(41.47_{\pm 0.34}\) & \(66.77_{\pm 0.92}\) \\ SPDNetBN & \(35.39_{\pm 0.93}\) & \(65.03_{\pm 1.35}\) & \(41.92_{\pm 0.37}\) & \(67.25_{\pm 0.44}\) \\ **RResNet Affine-Invariant** & \(35.17_{\pm 1.78}\) & \(\mathbf{66.53_{\pm 0.64}}\) & \(41.00_{\pm 0.50}\) & \(67.91_{\pm 1.27}\) \\ **RResNet Log-Euclidean** & \(\mathbf{36.38_{\pm 1.29}}\) & \(64.58_{\pm 0.98}\) & \(\mathbf{42.99_{\pm 0.23}}\) & \(\mathbf{69.80_{\pm 1.51}}\) \\ \hline \hline \end{tabular} \end{table} Table 2: We run our SPD manifold RResNet on four SPD matrix datasets and compare against SPDNet [26] and SPDNet with batch norm [5]. We report the mean and standard deviation of validation accuracies over five trials and bold which method performs the best. [28; 35], suffers from numerical instability when the input matrices are nearly singular. Overall, we observe our RResNet with the affine-invariant metric outperforms existing work on FPHA, and our RResNet using the log-Euclidean metric outperforms existing work on AFEW, NTU RGB+D, and HDM05. Being able to directly interchange between two metrics while maintaining the same neural network design is an unique strength of our model. ## 6 Riemannian Residual Graph Neural Networks Following the initial comparison to non-graph-based methods in Table 1, we introduce a simple graph-based method by modifying RResNet Horo above. We take the previous model and pre-multiply the feature map output by the underlying graph adjacency matrix \(A\) in a manner akin to what happens with graph neural networks [54]. This is the simple modification that we introduce to the Riemannian ResNet to incorporate graph information; we call this method G-RResNet Horo. We compare directly against the graph-based methods in Chami et al. [8] as well as against Fully Hyperbolic Neural Networks [10] and give results in Table 3. We test primarily on node classification since we found that almost all LP tasks are too simple and solved by methods in Chami et al. [8] (i.e., test ROC is greater than \(95\%\)). We also tune the matrix power of \(A\) for a given dataset; full architectural details are given in Appendix C.2. Although this method is simple, we see further improvement and in fact attain a state-of-the-art result for the Airport [8] dataset. Once more, as expected, we see a considerable performance drop for the much less hyperbolic datasets, PubMed and CoRA. ## 7 Conclusion We propose a general construction of residual neural networks on Riemannian manifolds. Our approach is a natural geodesically-oriented generalization that can be applied more broadly than previous manifold-specific work. Our introduced neural network construction is the first that decouples geometry (i.e. the representation space expected for input to layers) from the architecture design (i.e. actual "wiring" of the layers). Moreover, we introduce a geometrically principled feature map-induced vector field design for the RResNet. We demonstrate that our methodology better captures underlying geometry than existing manifold-specific neural network constructions. On a variety of tasks such as node classification, link prediction, and covariance matrix classification, our method outperforms previous work. 
Finally, our RResNet's principled construction allows us to directly assess the effect of geometry on a task, with neural network architecture held constant. We illustrate this by directly comparing the performance of two Riemannian metrics on the manifold of SPD matrices. We hope others will use our work to better learn over data with nontrivial geometries in relevant fields, such as lattice quantum field theory, robotics, and computational chemistry. **Limitations** We rely fundamentally on knowledge of geodesics of the underlying manifold. As such, we assume that a closed form (or more generally, easily computable, differentiable form) is given for the Riemannian exponential map as well as for the tangent spaces. ## Acknowledgements We would like to thank Facebook AI for funding equipment that made this work possible. In addition, we thank the National Science Foundation for awarding Prof. Christopher De Sa a grant that helps fund this research effort (NSF IIS-2008102) and for supporting both Isay Katsman and Aaron Lou with graduate research fellowships. We would also like to acknowledge Prof. David Bindel for his useful insights on the numerics of SPD matrices.
2304.10167
Adaptive coded illumination Fourier ptychography microscopy based on physical neural network
Fourier Ptychographic Microscopy (FPM) is a computational technique that achieves a large space-bandwidth product imaging. It addresses the challenge of balancing a large field of view and high resolution by fusing information from multiple images taken with varying illumination angles. Nevertheless, conventional FPM framework always suffers from long acquisition time and a heavy computational burden. In this paper, we propose a novel physical neural network that generates an adaptive illumination mode by incorporating temporally-encoded illumination modes as a distinct layer, aiming to improve the acquisition and calculation efficiency. Both simulations and experiments have been conducted to validate the feasibility and effectiveness of the proposed method. It is worth mentioning that, unlike previous works that obtain the intensity of a multiplexed illumination by post-combination of each sequentially illuminated and obtained low-resolution images, our experimental data is captured directly by turning on multiple LEDs with a coded illumination pattern. Our method has exhibited state-of-the-art performance in terms of both detail fidelity and imaging velocity when assessed through a multitude of evaluative aspects.
Ruiqing Sun, Delong Yang, Yao Hu, Qun Hao, Xin Li, Shaohui Zhang
2023-04-20T08:59:08Z
http://arxiv.org/abs/2304.10167v1
# Adaptive coded illumination Fourier ###### Abstract Fourier Ptychographic Microscopy (FPM) is a computational technique that achieves a large space-bandwidth product imaging. It addresses the challenge of balancing a large field of view and high resolution by fusing information from multiple images taken with varying illumination angles. Nevertheless, conventional FPM framework always suffers from long acquisition time and a heavy computational burden. In this paper, we propose a novel physical neural network that generates an adaptive illumination mode by incorporating temporally-encoded illumination modes as a distinct layer, aiming to improve the acquisition and calculation efficiency. Both simulations and experiments have been conducted to validate the feasibility and effectiveness of the proposed method. It is worth mentioning that, unlike previous works that obtain the intensity of a multiplexed illumination by post-combination of each sequentially illuminated and obtained low-resolution images, our experimental data is captured directly by turning on multiple LEDs with a coded illumination pattern. Our method has exhibited state-of-the-art performance in terms of both detail fidelity and imaging velocity when assessed through a multitude of evaluative aspects. 1School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China 2Changchun University of Science and Technology, Changchun 130022, China 3Department of General Surgery, Xiangya Hospital, Central South University, Changsha 410011, China 4e-mail: [email protected] 5Corresponding author: [email protected] ## 1 Introduction With the development of life sciences, microscopic biological structure information has been obtaining increasing interest and attention among the researchers [1-3]. However, due to the Space-Bandwidth Product (SBP) limit of conventional optical microscopes, there is a growing conflict between the field of view (FOV) and spatial resolution, which are both crucial for observing dynamic processes across different spatial and temporal scales [4]. To overcome this limit, Fourier ptychographic microscopy (FPM) has emerged as a typical computational imaging technique that produces high-resolution(HR) images with a wide FOV [5-8]. This technique has attracted considerable attention due to its ability to be extended to different applications, such as digital pathology, drug discovery and stem cell biology [6,9,10]. Traditional large SBP imaging methods always require 2D precise scanning in the spatial domain using a high numerical aperture (NA) objective. However, the scanning process is time-consuming, making it unsuitable for in vitro imaging of dynamic events. In contrast, FPM scans the sample's spectrum by sequentially turning on each LED unit located at different positions on the LED board and combining corresponding low-resolution(LR) images in Fourier space to form a large SBP complex image with a small NA objective [5,6,11,12]. This imaging approach significantly reduces the overall hardware cost. FPM's reconstruction process can be accomplished through the deployment of either nonlinear optimization methodologies or iterative algorithms [5,7,13-15]. As a variety of machine learning techniques have gradually emerged to solve the imaging reconstruction problem [16-23], deep learning-based optimization methods are also used to solve the FPM optimization problem [24,25]. 
To increase the quality of the reconstructed HR image, it is important to ensure a sufficient overlap ratio between adjacent sub-spectrum areas corresponding to adjacent LEDs, with at least 60% overlapping coverage being necessary [26, 27]. In other words, it is necessary to turn on an adequate quantity of LEDs. Consequently, a typical FPM system always faces challenges with long acquisition times due to the large number of raw images and extended exposure times required for dark-field images, which may limit its potential applications. There are two main approaches to address this issue: one is to reduce the number of acquired LR images, and the other is to use higher-performance hardware. Generally, employing LED light sources with increased brightness and switching frequency, in conjunction with image sensors boasting higher frame rates, can substantially enhance the overall acquisition efficiency. Nonetheless, hardware modifications often mean more extensive system adjustment cycles, augmented uncertainty, and elevated costs. Therefore, some studies have improved the temporal resolution by turning on multiple LEDs simultaneously to shorten the LR image acquisition time [4, 14, 20]. These studies can be divided into two categories: manual rule-based and data-driven approaches. Manual rule-based methods can provide a universal illumination mode without additional training time and have high interpretability, while data-driven methods can provide a more suitable one. Manually designed modes may lead to satisfactory results for samples with specific spectral distribution types. However, for samples exhibiting significant differences in spectral distribution characteristics, the reconstruction outcomes can sometimes be unsatisfactory. Data-driven methods can perform better in generating specific illumination modes, but the gold standard required for training can be scarce, particularly in some fields such as materials science and medicine. To deal with the aforementioned challenges, this paper proposes an unsupervised physical neural network to create adaptive illumination modes for any specific sample. Our model is established within a highly scalable framework, which is also employed in the Fourier Ptychographic Multi-parameter Neural Network (FPMN) [24]. Each layer within our model possesses a distinct physical interpretation, as the physical priors of FPM are incorporated. It is important to note that previous illumination optimization methods [22, 27] often produce a mode consisting of LEDs with varying brightness levels. The complex brightness ratios between different LEDs heighten the hardware construction requirements, and information associated with relatively low brightness is often susceptible to loss during capture. As a result, these methods typically rely on linear combinations of LR images from single-LED sequential illumination, rather than directly activating multiple LEDs and taking corresponding composite LR images. In contrast, we turn on LEDs according to our generated mode and capture the composite images directly, instead of digitally combining separately acquired raw LR images. By reviewing the illumination mode generation task, we found that with appropriate prior knowledge, it is possible to generate a specific coded illumination mode for each sample based on its Fourier domain distribution. 
In the entire imaging process, we update the encoded illumination layer and the sample layer synchronously to generate a specific illumination mode with the best reconstruction quality, based on prior information. The prior information is derived from an HR complex image obtained either by interpolating the LR image corresponding to the central LED or by using previously reconstructed results during continuous imaging. To reduce the generation time and enhance the stability, we introduce physical constraints on the number of available LEDs for each illumination pattern during the update process, effectively reducing the solution space. Once the illumination mode is generated, we collect LR images by using it to illuminate the sample, and we reconstruct the amplitude and phase of the sample through our proposed physical neural network. Thanks to its unsupervised nature, our model can adjust the acquisition time of LR images by reducing the number of illumination patterns as required during deployment. Generally, more illumination patterns with fewer LEDs per pattern help to obtain a higher-quality reconstruction, while fewer patterns with more LEDs per pattern help to shorten the information acquisition and reconstruction processes. The unsupervised training and adaptation features of our model effectively address the issue of dataset scarcity and enhance the model's generalization, rendering it more appropriate for practical applications. This paper is organized as follows. Section 2.1 expounds upon the architecture and underlying principles of our proposed model. The specific methodology and computational procedures for generating the illumination mode are discussed in Section 2.2, while Section 2.3 presents the capture and reconstruction processes. The simulation and experiment principles are elucidated in the final segment of Section 2. In Section 3, simulations and experiments are conducted to validate the capability of our model to generate adaptive illumination modes with high reconstruction quality. Conclusions are subsequently synthesized in Section 4. ## 2 Method Unlike the typical FPM system, we first generate a specific illumination mode which is appropriate for illuminating the sample, based on the ideal imaging process. Then we employ this mode to illuminate the sample, capture the intensity of LR images, and reconstruct the target's amplitude and phase. We demonstrate our experimental system in Fig. 1. ### 2.1 FPM principle and model structure Figure 1: (a) The theoretical model of FPM (b) The experimental system of our method (c) The illumination mode generated with our method In a forward imaging process, if the sample is much smaller than its distance to the LED array, the illumination light can be treated as a parallel plane wave. Each LR image corresponding to an individual LED offers information on different sub-spectrum areas of the sample, with the wave vector \(\left(k_{x},k_{y}\right)\) determined by the position of each LED and the distance between the LED array and the sample. In scenarios involving a sequential illumination mode or a position-multiplexed illumination mode, the imaging process can be regarded as performing FFT, low-pass filtering in the Fourier domain, iFFT, and intensity imaging in the spatial domain. 
In the entire imaging process, we update the encoded illumination layer and the sample layer synchronously to generate a specific illumination mode with the best reconstruction quality, based on prior information. The prior information is derived from an HR complex image obtained either by interpolating the LR image corresponding to the central LED or by using previously reconstructed results during continuous imaging. To reduce the generation time and enhance stability, we introduce physical constraints on the number of available LEDs for each illumination pattern during the update process, effectively reducing the solution space. Once the illumination mode is generated, we collect LR images by using it for illumination and reconstruct the amplitude and phase of the sample through our proposed physical neural network. Thanks to its unsupervised nature, our model can adjust the acquisition time of LR images by reducing the number of illumination patterns as required during deployment. Generally, a mode with more illumination patterns and fewer LEDs per pattern helps to obtain a higher-quality reconstruction, while one with fewer patterns and more LEDs per pattern helps to shorten the information acquisition and reconstruction processes. The unsupervised training and adaptation features of our model effectively address the issue of dataset scarcity and enhance the model's generalization, rendering it more appropriate for practical applications.

This paper is organized as follows. Section 2.1 expounds upon the architecture and underlying principles of our proposed model. The specific methodology and computational procedures for generating the illumination mode are discussed in Section 2.2, while Section 2.3 presents the capture and reconstruction processes. The simulation and experiment principles are elucidated in the final segment of Section 2. In Section 3, simulations and experiments are conducted to validate the capability of our model to generate adaptive illumination modes with high reconstruction quality. Conclusions are subsequently synthesized in Section 4.

## 2 Method

Unlike the typical FPM system, we first generate a specific illumination mode appropriate for illuminating the sample, based on the ideal imaging process. We then employ this mode to illuminate the sample, capture the intensities of the LR images, and reconstruct the target's amplitude and phase. We demonstrate our experimental system in Fig. 1.

### 2.1 FPM principle and model structure

Figure 1: (a) The theoretical model of FPM. (b) The experimental system of our method. (c) The illumination mode generated with our method.

In a forward imaging process, if the sample is much smaller than its distance to the LED array, the illumination light can be treated as a parallel plane wave. Each LR image corresponding to an individual LED offers information on a different sub-spectrum area of the sample, with the wave vector \(\left(k_{x},k_{y}\right)\) determined by the position of each LED and the distance between the LED array and the sample. In scenarios involving a sequential illumination mode or a position-multiplexed illumination mode, the imaging process can be regarded as performing an FFT, low-pass filtering in the Fourier domain, an iFFT, and intensity imaging in the spatial domain.
It can be expressed as

\[I_{n}(x,y)=\left|\mathcal{F}^{-1}\big{\{}\mathcal{F}\{t(x,y)\}\cdot P\big{(}k_{x},k_{y}\big{)}\big{\}}\right|^{2} \tag{1}\]

Here \(t(x,y)\) denotes the complex transmission function of the sample, where \((x,y)\) are the spatial coordinates. The Fourier transform is represented by \(\mathcal{F}\), while its inverse is represented by \(\mathcal{F}^{-1}\). The pupil function of our system is denoted by \(P\big{(}k_{x},k_{y}\big{)}\), where \(\big{(}k_{x},k_{y}\big{)}\) are the coordinates in the frequency domain. Finally, \(I_{n}(x,y)\) represents the two-dimensional intensity of the captured image.

Conventional FPM optimization algorithms require analytical differentiation, which is difficult when optimizing multiple parameters simultaneously. In contrast, FPMN [24] models the forward propagation process as an element-wise neural network, taking advantage of numerical differentiation to simplify the optimization process. FPMN's joint optimization capability and stability during training make it possible for future researchers to model and incorporate other parameters of the non-ideal imaging process using a similar framework. Consequently, we establish a model sharing a similar framework with FPMN for our research and rewrite some layers using PyTorch's new complex data type to reduce complexity and enhance operational efficiency. In essence, our model can be viewed as an evolution of FPMN. Eq. (1) is rewritten accordingly:

\[I_{n}=\left|\mathcal{F}^{-1}\big{\{}\delta(k-k_{n})\cdot P\big{(}k_{x},k_{y}\big{)}\big{\}}\right|^{2},\quad n=1,2,\ldots,N \tag{2}\]

where \(\delta\) denotes the Fourier transform of the object function, \(k_{n}\) represents the illumination vector, and \(P\big{(}k_{x},k_{y}\big{)}\) has the same meaning as in Eq. (1). The variable \(N\) represents the total number of LEDs present on the LED array. We first rewrite the ideal Fourier Ptychography Neural Network (FPN) [24] as the Complex Fourier Ptychography Neural Network (CFPN) and restructure it to add our illumination pattern layer, which is used to generate the specific illumination mode, as shown in Fig. 2(a). The total number of illumination patterns in the mode is N. The two original float-type sub-channels that represent the sample's complex function are merged into a single complex-type channel, and the same is done for the inverse fast Fourier transform (IFFT) layer. For modeling the complex pupil function \(P\big{(}k_{x},k_{y}\big{)}\), we use the first ten Zernike coefficients, a classic approach to representing a two-dimensional phase distribution.

### 2.2 Generating a specific illumination mode

Figure 2: (a) Structure of the ideal process model CFPN for generating the illumination mode.

During the process of generating a specific illumination mode, we use either the HR image from the last frame in the video dataset or the image obtained by bilinear interpolation from the LR image illuminated with the central LED as the prior knowledge of the sample's phase and amplitude to simulate the imaging process. As each LED illuminates incoherently with the others, the final captured intensity in a multi-LED scenario can be regarded as a straightforward linear sum of the individual intensities produced by each LED. This can be expressed as

\[MI_{p}=\sum_{i=1}^{N}w_{i}\cdot I_{i}(x_{i},y_{i}) \tag{3}\]

We use \(MI_{p}\) to represent the ideal captured intensity, and \(N\) represents the total number of LEDs.
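To make the forward model of Eqs. (1)-(3) concrete, the following minimal NumPy sketch (our own illustration, not the authors' released code) simulates a single-LED capture and a multiplexed capture; the function names, the integer pixel-offset approximation of the wave vector, and the omission of sub-aperture cropping and noise are all simplifying assumptions.

```python
import numpy as np

def fpm_forward_intensity(sample, pupil, kx, ky):
    """Eqs. (1)/(2): shift the sample's spectrum by the LED's wave vector
    (approximated here as an integer pixel offset), low-pass filter with
    the pupil, and record the intensity of the inverse transform."""
    spectrum = np.fft.fftshift(np.fft.fft2(sample))        # F{t(x, y)}
    shifted = np.roll(spectrum, shift=(-ky, -kx), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))
    return np.abs(field) ** 2                              # I_n(x, y)

def multiplexed_intensity(sample, pupil, k_list, weights):
    """Eq. (3): LEDs are mutually incoherent, so the multiplexed image is
    a weighted sum of single-LED intensities (the weights w_i, defined in
    the text, correct for illumination fluctuations)."""
    return sum(w * fpm_forward_intensity(sample, pupil, kx, ky)
               for w, (kx, ky) in zip(weights, k_list))
```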
In Eq. (3), the \(w_{i}\) parameter is used to correct for any fluctuations in the illumination intensity, which helps make our model more realistic, and \(I_{i}(x_{i},y_{i})\) has the same meaning as in Eq. (1). The process of obtaining the illumination mode is approached as a dynamic programming (DP) process. DP is a computational technique that solves optimization problems by breaking them down into smaller sub-problems and finding the optimal solution to each sub-problem. To tackle this challenge, we utilize a greedy algorithm, which makes a locally optimal choice at each step in the hope of finding a global optimum. In other words, the best available option is chosen at each step without considering the overall effect on the final solution.

To start, the illumination mode is treated as a queue M for storing patterns, with the capacity ceiling of the queue set to the number of patterns contained in one mode. Initially, the queue is empty. To choose which LEDs to turn on in a pattern, we assume the extreme case where all LEDs in the available area are turned on, and give the lighting weight of each LED the same initial value to ensure that all LEDs have an equal chance of being lit before a new pattern is fixed. After the intensities of the LR images are recorded, the network is tasked with reconstructing the exact sample information from the collected images. This approach makes solving the inverse problem more difficult, but thanks to our modified CFPN, we can transform the original optimization problem into a loss function minimization problem, as illustrated in Fig. 2(a). During the backpropagation process, the weights of LEDs that help reduce the loss function value are increased significantly. We assign a weight of 1 to those LEDs that are clearly beneficial and set the rest to 0. This pattern is then fixed and added to the queue M. When M is empty, we run the propagation process with all LEDs on, then fix and add the obtained pattern. When the queue M contains n (where n is not 0) patterns, we dequeue all patterns in it to create an illumination mode containing n+1 patterns, and perform a network training process based on this mode. In the (n+1)th pattern, the LEDs not included in the first n patterns are initialized with the same weight, and the rest are set to 0. Similarly, we set the weights of the LEDs with high importance in the (n+1)th pattern to 1 and the weights of those with relatively low importance to 0, and add all n+1 patterns to the queue M. This algorithm is repeated until the queue is full. Consequently, for each observation, the illumination mode is specific to the sample and locally optimal. Our model can also adaptively generate the illumination mode based on the number of different groups of patterns provided.

However, the simulation process has a higher signal-to-noise ratio than the actual acquisition process. In particular, dark-field information is frequently damaged during actual capture. To enhance the practical applicability of our proposed model, we artificially divided the patterns into groups, with each group corresponding to a different size of the optimizable LED array area, as shown in Fig. 3. We created four groups with optimizable LED numbers of 25, 49, 81, and 121, each containing five patterns.
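The queue-based greedy procedure above can be condensed into the following sketch; `train_step` is a hypothetical stand-in for one CFPN training run that optimizes the single trainable weight vector alongside the already-fixed patterns, and the fixed top-k binarization, the omission of grouping, and all names are our own simplifying assumptions.

```python
from collections import deque
import torch

def generate_illumination_mode(train_step, n_leds, n_patterns, leds_per_pattern):
    """Greedy sketch: fix one locally optimal pattern per iteration until
    the queue M holds n_patterns patterns."""
    mode = deque(maxlen=n_patterns)               # the queue M
    used = torch.zeros(n_leds, dtype=torch.bool)  # LEDs already in a fixed pattern
    while len(mode) < n_patterns:
        # LEDs absent from all fixed patterns start with equal weight.
        weights = torch.full((n_leds,), 1.0 / n_leds)
        weights[used] = 0.0
        weights = train_step(list(mode), weights.requires_grad_(True))
        # Binarize: clearly beneficial LEDs -> 1, all others -> 0
        # (a fixed top-k is used here for simplicity).
        pattern = torch.zeros(n_leds)
        pattern[torch.topk(weights, leds_per_pattern).indices] = 1.0
        mode.append(pattern)                      # fix and enqueue the pattern
        used |= pattern.bool()
    return list(mode)
```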
It is worth mentioning that this division into groups keeps the captured energy within a small fluctuation range under each pattern's illumination, making it easier to adjust the exposure time during the experimental acquisition process. The number of groups and the number of patterns within each group serve as adjustable hyper-parameters that can be modified as needed. Nonetheless, our model remains highly competitive even without grouping, as demonstrated in the follow-up experiment section. Unlike the previous work of Delong Yang et al. [24], we incorporate a normalization step during the forward propagation of the model. This approach improves stability during the training process and is more consistent with the experimental LR image collection process, making the increase of an LED's brightness weight no longer a simple linear process. It also avoids the impact of different initialization values on the overall illumination mode. Moreover, it prevents the network from excessively increasing the brightness of some LEDs during the optimization process, which could cause other LEDs that are also beneficial for minimizing the loss function value to be ignored.

### 2.3 Capture low-resolution images and reconstruct

In this process, we illuminate the observed object with the generated mode and record the LR images. To enhance stability during training, we linearly stretch the intensity values of the model output during forward propagation. This method is an extension of the approach used by Delong Yang et al. [24] and is described by Eq. (4):

\[MI_{p}=\frac{MI_{p}}{16n} \tag{4}\]

where \(MI_{p}\) represents the predicted captured intensity for the \(p\)th pattern, and \(n\) represents the number of LEDs contained in the pattern. Rather than modeling an idealized process, we rewrite the FPMN network as the Complex Fourier Ptychography Multi-parameter Neural Network (CFPMN), similar to the aforementioned CFPN, for a more accurate representation of the actual physical processes involved. This allows us to jointly optimize parameters representing other physical processes that may appear in the non-ideal imaging situation, while also inheriting and building upon the strong scalability of FPMN's framework.

Figure 3: An example illumination mode, in which the yellow squares represent the turned-on LEDs and any other color means off (see **Visualization 1**).

By following the work of Delong Yang et al. [24], we choose the L1-norm as the loss function in our approach, as shown in the following equation:

\[loss=\frac{1}{N}{\sum_{n=1}^{N}}\big{|}{I_{n}^{gt}}-{I_{n}^{predict}}\big{|} \tag{5}\]

### 2.4 Simulation and experiment principles

In the simulation, we use the ground truth to initialize the sample layer and simulate the propagation of light during imaging by performing a forward pass of the CFPN mentioned above. The series of LR images output by the network is used to simulate the image intensities collected by the camera during the actual imaging process. In the reconstruction process, we use the LR image illuminated by the central LED as the initial value of the Fourier object layer. The simulation results can well reflect the difficulty of solving the inverse problem under different illumination modes. In the following experiments, we demonstrate that the reconstruction results of simulation and experiment are highly consistent. It is worth noting that we adjust the exposure time of different patterns to ensure the capture of dark-field information.
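For concreteness, the linear stretch of Eq. (4) and the L1 loss of Eq. (5) from Section 2.3 can be written as the following minimal sketch; `model` is a hypothetical stand-in for the CFPMN forward process, and the names are our own assumptions.

```python
import torch

def predicted_intensity(model, pattern):
    """Forward pass for one pattern with the stabilizing stretch of Eq. (4):
    divide by 16 times the number of LEDs turned on in the pattern."""
    n = int(pattern.sum())
    return model(pattern) / (16 * n)

def l1_loss(pred_stack, gt_stack):
    """Eq. (5): mean absolute error over the N captured LR images."""
    return torch.mean(torch.abs(gt_stack - pred_stack))
```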
Undoubtedly, adjusting the exposure time in this way may alter the overall brightness of the LR images, complicating the reconstruction process. To address this issue, we adjust the brightness of the LR images based on the approximate theoretical brightness obtained through simulation, as shown in the following equations:

\[E_{sn}=\operatorname{sum}(F(\mathrm{CFPN},n)) \tag{6}\]

\[E_{en}=\operatorname{sum}(I_{n}) \tag{7}\]

\[I_{n}=\frac{E_{sn}}{E_{en}}\cdot I_{n},\quad n=1,\ldots,N \tag{8}\]

The energy of the simulated LR image is represented by \(E_{sn}\), the energy of the experimentally captured image by \(E_{en}\), and \(F\) represents the forward process of our model. Here, \(I_{n}\) and \(N\) have the same meanings as previously.

## 3 Simulations and experiments

In this section, we demonstrate the effectiveness and validity of our model by choosing Tian's heuristically designed multiplexed 4-LED method [4, 14, 28] as the benchmark and extending it to the 8-LED situation. For the experiments, we used a 4X objective with an NA of 0.4542 and an LED array containing 11\(\times\)11 LEDs, with a distance of 5 mm between adjacent LEDs, placed 100 mm beneath the sample. Following the work of Delong Yang et al. [24], we limit the captured images to 256\(\times\)256 pixels while the HR result is reconstructed at 1024\(\times\)1024 pixels. To generate an illumination mode over the 11\(\times\)11 available LEDs, containing 30 patterns with 4 LEDs turned on in each pattern, we used our model as shown in Fig. 4. Interestingly, our model tends to include only bright-field LEDs or only dark-field LEDs in one pattern, unlike random methods. This occurs because bright- and dark-field LEDs offer distinct information, and our model develops this differentiation capability through learning. The quality of the reconstructed result degrades as the number of captured LR images decreases, which was also observed in previous work [4, 14, 29], and we observe the same phenomenon in our simulations. We attribute this to the reduction in the average spatial distance between two adjacent lit LEDs when decreasing the total number of available LEDs. Turning on too many LEDs in one pattern increases the difficulty of reconstruction. To make the training process of the network more stable, we initialize the sample layer with the LR image obtained by linear interpolation of the image illuminated by the central LED, or with a prior HR image. In other words, we additionally collect an LR image as the initial value, which is also used to generate the specific illumination mode when other prior information is lacking. The experimental results under different numbers of illumination patterns are shown in Fig. 5.

### 3.1 Simulation results of the USAF resolution target with our model

Figure 4: The illumination mode over 11\(\times\)11 available LEDs generated by our model.

Figure 5: Experimental results for multiplexed illumination of a resolution target. (a) The captured image illuminated by the central LED. (b) The low-resolution area without reconstruction. (c) The reconstruction result from 30 images illuminated by the heuristically designed multiplexed 4-LED mode. (d) The reconstruction result from 30 images illuminated by our generated illumination mode. (e) The reconstruction result from 20 images illuminated by our generated illumination mode. (f) The reconstruction result from 15 images illuminated by the heuristically designed multiplexed 8-LED mode. (g) The reconstruction result from 15 images illuminated by our generated illumination mode.
The high level of agreement between the simulations and experiments demonstrates the effectiveness of our simulation method, as shown in Figs. 5 and 6. Moreover, compared to actual experiments, the ground truth of the simulation experiments is more accurate and easier to obtain. Thus, we use the performance of the simulation results to evaluate the quality of different illumination modes in the subsequent experiments. When observing samples with complex phases, our generated mode shows significant advantages in terms of acquisition effectiveness and optimization effects, which are difficult to match with other methods. As the number of captured images decreases, the reconstruction result also declines, as shown in Fig. 6. However, we can still make a trade-off between reconstruction speed and clarity based on different application scenarios, since our reconstruction process remains stable under different pattern numbers. Additionally, we use different illumination modes to demonstrate the specificity of our method, as shown in Table 1. We employ the metrics SSIM, PSNR, NIQE, and LPIPS [29-32], which are commonly used by super-resolution works in the field of computer vision, to evaluate the effectiveness of our method. It is worth mentioning that the illumination mode based on incoherent information has almost the same performance as Tian's multiplexed method, which strongly indicates that our method is adaptive to the specific sample.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
\multirow{2}{*}{Method} & \multicolumn{5}{c}{Evaluation metric (amplitude)} \\
\cline{2-6}
 & L1 & SSIM & PSNR & NIQE & LPIPS \\
\hline
30 images (Tian’s multiplexing 4 LEDs [4, 14, 21]) & 240 & 0.495 & 24.3 & 21.0 & 0.220 \\
20 images (our method based on incoherent information) & 197 & 0.704 & 25.2 & 20.3 & 0.181 \\
**20 images (our method)** & **48.4** & **0.842** & **31.3** & **19.0** & **0.085** \\
15 images (Tian’s multiplexing 8 LEDs [4, 14, 21]) & 1730 & 0.176 & 15.8 & 21.9 & 0.349 \\
15 images (our method) & & & & & \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **Comparison of the performance with different methods and illumination mode sizes**

Figure 6: Simulation results for multiplexed illumination of a hypothetical sample. (a1) The ground truth for the amplitude within the central 256\(\times\)256 area. (a2) The ground truth for the phase within the central 256\(\times\)256 area. (b*) The reconstruction result from 30 images illuminated by Tian’s multiplexing of 4 LEDs. (c*) The reconstruction result from 20 images illuminated by our generated mode. (d*) The reconstruction result from 20 images illuminated by our illumination mode generated based on incoherent information. (e*) The reconstruction result from 15 images illuminated by Tian’s multiplexing of 8 LEDs. (f*) The reconstruction result from 15 images illuminated by the mode generated by our model.

### 3.2 Experiments on a biological sample

Unlike the resolution target, the phase and spectrum characteristics of actual biological samples tend to be more complex. To demonstrate the efficiency of our model in practical applications, we utilized a plant stem cross-section as our observation sample. We kept the same parameters mentioned in Section 3.1 and captured a series of LR images with a size of 1536\(\times\)2048. We then selected 256\(\times\)256 sub-images from different regions and individually reconstructed their phase and amplitude.
The results, shown in Fig. 7, provide conclusive evidence of the effectiveness of our proposed method.

Figure 7: The experimental results of a plant stem cross-section illuminated with our illumination mode. (a) The intensity of the full image illuminated by the central LED. (*1) The captured low-resolution image of the central LED. (*2) The reconstructed amplitude of different regions. (*3) The reconstructed phase of different regions.

## 4 Conclusion and future work

In this paper, we have presented an adaptive coded illumination technique for FPM using an extended physical neural network model. Our method can provide adaptive illumination modes specific to different samples, and can further compress the LR image acquisition time based on unsupervised optimization, significantly expanding the application scope of the FPM system. Unlike purely data-driven methods, our method is established according to physical rules, enhancing the model's interpretability. The parameters of the illumination pattern layer are optimized simultaneously with those of the sample layer to achieve adaptive generation. As every LED activated in our illumination mode exhibits the same brightness, the strain on the hardware is significantly alleviated. Our method has achieved state-of-the-art results with great advantages in high-frequency information recovery. Additionally, we observed that the initial value selection for the physical neural network framework can have an impact on the final optimization result. Thus, in future work we plan to dedicate more research to optimization algorithms, including exploring a general update scheme for the network that differs from the original FPM method.

**Funding.** National Key Research and Development Program of China (No. 2021YFC2202404); National Natural Science Foundation of China (62275020).

**Disclosures.** The authors declare no conflicts of interest.

**Data availability.** Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2306.04019
Learning Search-Space Specific Heuristics Using Neural Networks
We propose and evaluate a system which learns a neural-network heuristic function for forward search-based, satisficing classical planning. Our system learns distance-to-goal estimators from scratch, given a single PDDL training instance. Training data is generated by backward regression search or by backward search from given or guessed goal states. In domains such as the 24-puzzle where all instances share the same search space, such heuristics can also be reused across all instances in the domain. We show that this relatively simple system can perform surprisingly well, sometimes competitive with well-known domain-independent heuristics.
Yu Liu, Ryo Kuroiwa, Alex Fukunaga
2023-06-06T21:22:32Z
http://arxiv.org/abs/2306.04019v1
# Learning Search-Space Specific Heuristics Using Neural Networks

###### Abstract

We propose and evaluate a system which learns a neural-network heuristic function for forward search-based, satisficing classical planning. Our system learns distance-to-goal estimators from scratch, given a single PDDL training instance. Training data is generated by backward regression search or by backward search from given or guessed goal states. In domains such as the 24-puzzle where all instances share the same search space, such heuristics can also be reused across all instances in the domain. We show that this relatively simple system can perform surprisingly well, sometimes competitive with well-known domain-independent heuristics.

## 1 Introduction

State-space search using heuristic search algorithms such as greedy best-first search (GBFS) is a state-of-the-art technique for satisficing, domain-independent planning. Search performance is largely determined by the heuristic evaluation function used to decide which state to expand next. Heuristic function effectiveness for domain-independent planning depends on the domain, as different heuristics represent different approaches to exploiting available information. Designing heuristics which work well across many domains is nontrivial, so learning-based approaches are an active area of research.

In one setting for learning search control knowledge for planning (exemplified by the Learning Track of the IPC), a set of training problem instances (and/or a problem instance generator) is given, and the task is to learn a _domain-specific_ heuristic for that domain. Previous work on learning heuristics and other search control policies (e.g., selection among several search strategies) in this setting includes Yoon et al. (2008); Xu et al. (2009); de la Rosa et al. (2011); Garrett et al. (2016); Sievers et al. (2019); Gomoluch et al. (2019). Another type of setting seeks to learn domain-independent planning heuristics, which generalize not only to domains used during training, but also to unseen domains (Shen et al. 2020; Gomoluch et al. 2017). Inter-instance speedup learning, or "on-line learning", is a setting where only one problem instance (no training instances or problem generator) is given, and the task is to solve that instance as quickly as possible. Speedup learning within a single problem-solving episode is worthwhile if the _total time spent by the solver (including learning)_ is faster than the time required to solve the problem using other methods. Previous work on on-line learning for search-based planning includes learning decision rules for combining heuristics (Domshlak et al. 2010) and macro operator learning (Coles and Smith 2007). On-line learning can be used to learn an _instance-specific heuristic_. Previous work on instance-specific learning includes bootstrap heuristic learning (Arfaee et al. 2011), as well as LHFCP, a single-instance neural network heuristic learning system (Geissmann 2015). Instance-specific learning can be generalized to _single search space learning_, where many problem instances share a single search space. For example, all instances of the 15-puzzle domain share the same search space - different instances have different initial states, all on the same connected state space graph. Thus, a learned heuristic function which performs well for one instance of the 15-puzzle can be directly applied to other instances of the domain.
We propose and evaluate SING, a neural network-based instance-specific and single search space heuristic learning system for domain-independent, classical planning. SING is closely related to LHFCP, an approach to supervised learning of heuristics which generates training data using backward search (Geissmann 2015). Given a PDDL problem instance \(I\), LHFCP learns a heuristic \(h_{\text{nn}}\) for \(I\). To generate training data for \(h_{\text{nn}}\), LHFCP performs a series of backward searches from a goal state of \(I\) to collect a set of states and their approximate distances from the goal. After training, \(h_{\text{nn}}\) is used as the heuristic function by GBFS to solve \(I\). This does not require any additional training instances as input, nor pre-existing heuristics to bootstrap its performance. However, LHFCP performed comparably to blind search (Geissmann 2015), so achieving competitive performance with this approach remained an open problem. SING expands upon this basic approach in several ways: (1) an improved backward search space using either (a) explicit search with inferred inverse operators or (b) regression, (2) depth-first search (vs. random walk), (3) a boolean state representation, and (4) a relative error loss function. We experimentally evaluate SING for learning domain-specific heuristics for domains where instances share a single state space, and show performance competitive with the Fast Forward heuristic (\(h_{\text{ff}}\)) (Hoffmann and Nebel 2001) and the landmark count heuristic (\(h_{\text{lm}}\)) (Hoffmann, Porteous, and Sebastia 2004) on several domains. We also evaluate SING as an instance-specific heuristic learner, and show that the learned heuristics consistently outperform blind search on a broad range of standard IPC benchmark domains, and perform competitively on some domains, even when the learning times are accounted for within the time limit.

## 2 Preliminaries and Background

We consider domain-independent classical planning problems, which can be defined as follows. A SAS+ planning task (Backstrom and Nebel 1995) is a 4-tuple \(\Pi=\langle V,O,I,G\rangle\), where \(V=\{x_{1},\ldots,x_{n}\}\) is a set of state variables, each with an associated finite domain \(\mathit{Dom}(x_{i})\). A state \(s\) is a complete assignment of values to variables, and \(\mathcal{S}\) is the set of all states. \(O\) is a set of actions, where each action \(a\in O\) is a tuple \((\mathit{pre}(a),\mathit{eff}(a))\), where \(\mathit{pre}(a)\) and \(\mathit{eff}(a)\) are sets of _partial_ state variable assignments \(x_{i}=v\), \(v\in\mathit{Dom}(x_{i})\). \(I\in\mathcal{S}\) is the initial state, and \(G\) is a partial assignment of state variables defining a goal condition (\(s\in\mathcal{S}\) is a goal state if \(G\subseteq s\)). A plan for \(\Pi\) is a sequence of applicable actions which, when applied to \(I\), results in a state which satisfies all goal conditions. Search-based planners seek a path from the start state to a goal state using a search algorithm such as best-first search guided by a heuristic state evaluation function. A natural approach to learning heuristic functions for search-based planning is a supervised learning framework consisting of the following stages: (1) **Training Sample Generation**: generate many state/distance pairs which will be used as training data. (2) **Training**: train a heuristic function \(h\) which predicts distances from a given state to a goal. (3) **Search**: use \(h\) as the heuristic evaluation function in a standard heuristic search algorithm such as GBFS.
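As a concrete illustration of these definitions, the following minimal Python sketch (our own; the class and function names are assumptions, not SING's actual code) encodes a SAS+ task and the basic applicability, successor, and goal tests used by forward search.

```python
from dataclasses import dataclass
from typing import Dict, List

Assignment = Dict[str, int]  # (partial) assignment: variable -> value

@dataclass
class Action:
    pre: Assignment   # partial assignment that must hold before applying
    eff: Assignment   # partial assignment that holds after applying

@dataclass
class SASTask:        # Pi = <V, O, I, G>
    domains: Dict[str, List[int]]  # Dom(x_i) for each variable
    actions: List[Action]          # O
    init: Assignment               # I, a complete assignment
    goal: Assignment               # G, a partial assignment

def applicable(a: Action, s: Assignment) -> bool:
    return all(s[v] == val for v, val in a.pre.items())

def successor(a: Action, s: Assignment) -> Assignment:
    succ = dict(s)
    succ.update(a.eff)  # effects overwrite the affected variables
    return succ

def is_goal(task: SASTask, s: Assignment) -> bool:
    return all(s[v] == val for v, val in task.goal.items())
```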
This paper focuses mostly on stage (1), training data generation. Ferber et al. (2020) investigated an approach where the training data was generated using forward search from the start states of training instances. They perform random walks (200 steps) from the initial state, and from each state visited in the random walk, they perform a forward search (a "teacher search" using a heuristic such as \(h_{\text{ff}}\)) to a goal in order to find the distance to the goal. If the teacher search finds a path to the goal, the states on the path, as well as their distances to the goal, are added to the training data. This approach can be practical for shared search space heuristic learning, where the costs of the teacher searches can be amortized among many instances on the same search space. However, it is not practical for satisficing, single-instance heuristic learning, where there is only one problem-solving episode, as requiring forward search to the goal in order to gather training data obviates the need to learn a heuristic for that particular instance.

An alternative approach to generating training data uses backward search from the goal. A backward search starting at a goal state (provided in, or guessed/derived from, the problem specification) is performed, storing encountered states and their (estimated) distances from the goal as the training data. Arfaee, Zilles, and Holte (2011) used this approach in a bootstrap system for heuristic learning, which starts with a weak neural net heuristic \(h_{0}\) and generates increasingly more powerful heuristics by using the current heuristic to solve problem instances, using states generated during the search as training data for the next heuristic improvement step. If \(h_{0}\) is too weak to solve training set problems, they generate training data by random walks from the goal state to generate easy problem instances that can be solved by \(h_{0}\). Lelis et al. (2016) proposed BiSS-h, an improvement which uses a solution cost estimator instead of search for training data generation. Arfaee et al. and Lelis et al. evaluated their work on domain-specific solvers for the 24-puzzle, the pancake puzzle, and Rubik's cube. Geissmann (2015) investigated a backward-search approach to training data generation for domain-independent classical planning. His system, LHFCP, uses backward search to generate training data for learning a neural network heuristic function which estimates the distance from a state to a goal. To generate training data, LHFCP performs backward search (random walk) in an explicit search space. It generates the start state for backward search by generating a state which satisfies the goal conditions, with values unspecified by the goal condition filled in randomly. LHFCP relies on the operators in the original (forward) problem to perform backward search. Search using the heuristics learned by LHFCP across a wide range of IPC domains performed comparably to blind search (Geissmann 2015). Geissmann also investigated a variation of LHFCP which applied BiSS-h to classical planning but reported poor results, attributed to difficulties in efficiently implementing BiSS for classical planning. Thus, a successful backward-search based approach to training data generation for domain-independent classical planning remained an open problem.
## 3 SING: An Improved, Backward-Search Based Heuristic Learning System

We describe the **S**ingle search space **N**eural heuristic **G**enerator (SING), a system which learns single-search-space heuristics for domain-independent planning. SING learns a heuristic function \(h_{\text{nn}}(s)\), which takes as input a vector representation of a state \(s\) and returns a heuristic estimate of the distance from \(s\) to a closest goal. SING is implemented on top of the Fast Downward planner (FD) (Helmert 2006). SING uses backward search to generate training data, similar to LHFCP, but incorporates several significant differences in the state representation, backward search space formulation, and backward search strategy. Below, we describe each of these in detail.

### State Representation

The input to \(h_{\text{nn}}\) is a vector representation of a state. LHFCP used a multivalued SAS+ vector representation of the state, which is a natural representation to use, as FD uses the SAS+ representation internally. Another natural representation for the vector input to \(h_{\text{nn}}\) is based on the STRIPS propositional representation of the problem. A STRIPS planning task (Fikes and Nilsson 1971) is a 4-tuple \(\Pi=\langle F,I,G,A\rangle\), where \(F\) is a set of propositional facts, \(I\in 2^{F}\) is the initial state, \(G\in 2^{F}\) is a set of goal facts, and \(A\) is a set of actions. Each action \(a\in A\) has preconditions \(pre(a)\), add effects \(add(a)\), and delete effects \(del(a)\), which are sets of facts. A state \(s\in 2^{F}\) is a set of facts, and \(s\) is a goal state if \(G\subseteq s\). Given a state \(s\in 2^{F}\), \(a\) is applicable iff \(pre(a)\subseteq s\). After applying \(a\) in \(s\), \(s\) transitions to \((s\setminus del(a))\cup add(a)\). A plan for \(\Pi\) is a sequence of applicable actions which makes \(I\) transition to a goal state. The STRIPS representation corresponds directly to the classical planning subset of the standard PDDL domain description language, as PDDL uses boolean facts to represent the world state. In the SAS+ representation used by FD, each possible value of a variable represents one of a set of mutually exclusive facts in the underlying propositional problem. Each variable-value pair in FD represents a fact, the negation of a fact, or the negation of all facts represented by the other values of the variable. Preconditions and effects of actions are also represented as sets of variable-value pairs. Since the variable/value naming conventions used in the SAS+ representation generated by the FD PDDL-to-SAS+ translator identify the underlying propositions, conversion between the SAS+ finite-domain representation and the STRIPS propositional state representation is easy. Thus, \(h_{\text{nn}}\) can use either the boolean (STRIPS) or multivalued (SAS+) state vector representation as input during training and search. Since each input bit corresponds to a fact in the boolean encoding, it may enable a more accurate \(h_{\text{nn}}\) state evaluation function to be learned than the SAS+ multi-valued encoding. On the other hand, SAS+ encodings are more compact, which can significantly reduce the dimensionality of the state representation and result in faster NN evaluation, speeding up the search process. Thus, the choice of state vector representation poses a tradeoff between \(h_{\text{nn}}\) evaluation accuracy and \(h_{\text{nn}}\) evaluation speed, and SING can use either the multivalued SAS+ vector representation or the STRIPS boolean vector representation.
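The two input encodings can be sketched as follows; this is a minimal illustration with assumed names, whereas in SING the vectors are produced from FD's internal representation.

```python
from typing import Dict, List, Set
import numpy as np

def strips_vector(state: Set[str], facts: List[str]) -> np.ndarray:
    """Boolean (STRIPS) encoding: one input bit per fact, |F| inputs."""
    return np.array([1.0 if f in state else 0.0 for f in facts],
                    dtype=np.float32)

def sas_vector(state: Dict[str, int], variables: List[str]) -> np.ndarray:
    """Multivalued (SAS+) encoding: one input per variable, |V| inputs."""
    return np.array([state[v] for v in variables], dtype=np.float32)
```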
### Search Space and Operators for Training Sample Generation

The task of training sample generation is to collect a training set \(T=\{(s_{1},e_{1}),\ldots,(s_{r},e_{r})\}\), a set of \(r\) states and their (estimated) distances to a goal. The basic idea is to repeatedly start at a goal \(g\) and generate a sequence of states heading away from it (using a directed search or random walk), adding such states to the training data. In some search problems such as the sliding-tile puzzle, backward search is relatively straightforward, as the goal state is given explicitly as input to the problem, and the operators available for the forward problem are sufficient to solve the backward problem. In domain-independent planning, backward-search based training sample generation poses several issues. First, a goal condition, possibly satisfied by many goal states, is given instead of an explicit, unique goal state, so in general, it is not possible to simply "search backward from the goal state". Second, in general, the operators for the forward problem are not sufficient for backward search. LHFCP generates a start state for backward search by generating a state which satisfies the goal conditions, with values unspecified by the goal condition filled in randomly. It relies on the operators in the original (forward) problem to perform backward search. SING incorporates two approaches to backward search for training sample generation: (1) backward explicit search using derived inverse operators, and (2) regression.

**Explicit Backward Search with Derived Inverse Operators.** As in LHFCP, a candidate start state for backward search is generated by first generating a partial state which satisfies all conditions in the goal condition, and then randomly assigning values to variables whose values are unspecified in the goal condition. Such a randomly generated candidate start state \(s\) might be invalid and unexpandable, i.e., no backward operators (see below) can be applied to \(s\). In that case, we simply generate another candidate state. This random initialization is performed for each backward sampling search. For the search operators, one simple approach is to use the same set of actions as for forward search, as in LHFCP (Geissmann 2015). However, this fails in domains where actions are not invertible, such as visitall. Thus, operators for the backward search must be derived from the forward operators. Since preconditions and effects are represented as sets of variable-value pairs in FD, one naive method to generate inverse actions is to swap the values of variables which appear in both preconditions and effects. Other variable-value pairs in preconditions and effects are treated as preconditions of the inverse action, because they must hold after application of the action. However, such an inverse action does not change the values of variables which appear in the original effects but not in the original preconditions. To address this issue, we use information available in the STRIPS formulation of the problem (as explained in Section 3.1, conversion among the PDDL problem description, its STRIPS formulation, and the SAS+ formulation used internally by Fast Downward is straightforward). For an action \(a\), we generate an inverse action \(a^{\prime}\) such that \(pre(a^{\prime})=(pre(a)\cup add(a))\setminus del(a)\), \(add(a^{\prime})=del(a)\), and \(del(a^{\prime})=add(a)\). We identify variable-value pairs which represent propositions as add effects, and pairs which represent negations of facts as delete effects.
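In STRIPS terms, the derived inverse operator is a one-liner; the following sketch (our own illustration, with assumed names) directly implements the formula above.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class StripsAction:
    pre: FrozenSet[str]
    add: FrozenSet[str]
    dele: FrozenSet[str]   # delete effects; 'del' is a Python keyword

def invert(a: StripsAction) -> StripsAction:
    """pre(a') = (pre(a) | add(a)) - del(a);
    add(a') = del(a); del(a') = add(a)."""
    return StripsAction(pre=(a.pre | a.add) - a.dele,
                        add=a.dele,
                        dele=a.add)
```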
**Regression.** Another approach to backward search is regression. In backward search using regression, we use the modified SAS+ representation of Alcazar et al. (2013). An action \(a\) is applicable to a state \(s\) if \(add(a)\cap s\neq\emptyset\wedge del(a)\cap s=\emptyset\). If an action \(a\) is applied to a state \(s\), \(s\) transitions to a state \(s^{\prime}=(s\setminus add(a))\cup\mathit{pre}(a)\). Normally, SAS+ variables represent mutex groups of the corresponding STRIPS propositions. In regression planning with SAS+, each variable has an additional _undefined_ value. The starting node in regression space is the goal state, where variables unspecified by the goal condition have _undefined_ values. When an action \(a\) is applied to a state \(s\), if a variable \(v\) is included in \(add(a)\) but not in \(\mathit{pre}(a)\), \(v\) is set to _undefined_. When generating training data, a bit vector representation of states needs to be generated (Section 3.1). When converting the SAS+-based representation used by Fast Downward into a bit vector, unlike the other possible state values in a mutex group, undefined values are not explicitly represented in the bit vector. For example, suppose a state variable \(x\) in regression search has 2 possible actual values, \(v_{1}\) and \(v_{2}\), as well as "undefined". In the bit vector representation output for use as training data, \(x\) is represented by 2 bits, where the first bit represents \(x=v_{1}\) and the second bit represents \(x=v_{2}\), and there is no explicit third bit for the undefined value.

**Regression vs. explicit search spaces.** The choice of regression vs. explicit spaces depends on the domain. Although regression is in a sense the "correct" way to perform backward search, the backward branching factor in regression space is very large in many domains. On the other hand, while explicit backward spaces sometimes have a much smaller branching factor than regression, goal generation has risks. First, goal generation might fail to find a goal. Also, generated goals might not be true goal states reachable from the start state, and states backward-reachable from such incorrect states are also unreachable from the start state. Such unreachable regions can cause backward search to yield few or no useful states for the training data. However, although the training data may include states which are unreachable from the start state, these may nevertheless be useful for learning an effective \(h_{\text{nn}}\) which evaluates "real" states during search, somewhat similar to how synthetic data generated by the adversary during training is useful for learning networks which correctly classify real data in GAN learning (Goodfellow et al. 2014).

### Backward Search Strategy

Given a start state for backward search (corresponding to a goal in the forward search space), we seek a set of training states \(T\) which are relatively far from the goal \(g\) but have a reasonable estimate of their distance from \(g\) for training \(h_{\text{nn}}\). Breadth-first search (BFS) from \(g\) could be used to generate states \(T\) for which the exact distances \(c(s,g)\) from \(s\in S\) to \(g\) (assuming unit-cost domains) are known, but it would limit the training data to states which are very close to \(g\). We need a search algorithm which can go much further from \(g\) than BFS, and for which the number of steps in the (inverted) path from \(s\in T\) to \(g\) is an approximation of \(c(s,g)\).
One natural sampling/search strategy is random walk, as in LHFCP (Geissmann 2015). The number of steps from \(g\) at which \(s\) is encountered is used as an estimate of the true distance from \(g\) to \(s\). Although random walk is fast, distance estimates from random walk may be inaccurate if cycles are not detected. Loop detection can be implemented easily using a hash table, but in domains with many cycles, it can be difficult to sample nodes far from \(g\) if the random walk is restarted whenever a previously visited node is generated. Therefore, we use depth-first search (DFS) to extend a path from \(g\), using the depth at which \(s\) is encountered as an estimate of the true distance from \(g\) to \(s\), and all generated states are added to \(T\). Random tie-breaking is used to select the node to expand among the successors \(\mathit{Succ}(s)\) of \(s\). A hash table is used to prune duplicate nodes and prevent cycles. In domains with many cycles and dead ends, by backtracking (instead of restarting the search) when a duplicate is detected, DFS can potentially sample more states which are further from \(g\) than random walk. The best choice of sampling search strategy depends on the domain. In some domains, DFS generates more accurate samples than random walk due to duplicate detection and backtracking, while in other domains DFS may incur large overheads due to backtracking, and random walk allows faster searches. In the experiments below, during training data generation we perform \(\mathit{nsearches}\) backward searches, stopping each search after \(\mathit{nsamples}\) states are collected, i.e., \(\mathit{nsearches}\times\mathit{nsamples}\) states are collected in total.

### Neural Network Architecture

We use a standard feedforward network for \(h_{\text{nn}}\), using the ReLU activation function. Each layer is fully connected to the next layer. The input layer takes the state vector representing a state \(s\) as input. As discussed in Section 3.1, the state vector is either a boolean vector for the STRIPS representation of the problem instance or a multivalued vector for the SAS+ representation of the instance, so the number of inputs is the same as the length of the state vector (\(|F|\) for the STRIPS propositional representation, \(|V|\) for the SAS+ multivalued representation). The output layer is a single node which returns \(h_{\text{nn}}(s)\), the heuristic evaluation value of state \(s\). Since \(h_{\text{nn}}\) will be called many times as the heuristic evaluation function for best-first search, a small network enabling fast evaluation is desirable. PyTorch 1.2.0 is used for training \(h_{\text{nn}}\), but for search, we use the Microsoft ONNX Runtime 0.4.0 to evaluate \(h_{\text{nn}}\). Both training and search use a single CPU core. Due to the simple network architecture as well as accelerated evaluation using the ONNX Runtime, \(h_{\text{nn}}\) can be evaluated relatively quickly, significantly faster than \(h_{\text{ff}}\) on most IPC domains (see node expansion rates in Table 2).

### Loss Function

Previous work on learning neural nets for classical planning used the standard mean squared error (MSE) regression loss function (Geissmann 2015; Ferber, Helmert, and Hoffmann 2020). Instead of MSE, we use a prediction _relative error_ sum loss function, \(f_{\text{loss}}=\sum_{i}|\hat{y}_{i}-y_{i}|/(y_{i}+1)\), which is the sum of the _relative_ errors of the predicted values \(\hat{y}_{i}\) compared to the training data \(y_{i}\).
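A minimal PyTorch sketch of this architecture and loss (our own illustration; the layer sizes shown match configuration C5 in Table 1, and the names are assumptions):

```python
import torch
import torch.nn as nn

class HeuristicNet(nn.Module):
    """Small feedforward ReLU network mapping a state vector to a scalar
    distance-to-goal estimate h_nn(s)."""
    def __init__(self, input_dim: int, n_hidden: int = 1, width: int = 16):
        super().__init__()
        layers, dim = [], input_dim
        for _ in range(n_hidden):
            layers += [nn.Linear(dim, width), nn.ReLU()]
            dim = width
        layers.append(nn.Linear(dim, 1))   # single output node
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def relative_error_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """f_loss = sum_i |y_hat_i - y_i| / (y_i + 1)."""
    return torch.sum(torch.abs(pred - target) / (target + 1.0))
```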
## 4 Evaluation: Domain-Specific Heuristic Learning on Shared Search Spaces

In domains where multiple instances share the same space, it is possible to learn reusable \(h_{\text{nn}}\) networks that can be used across many instances, so the cost of learning a heuristic can be amortized across instances. For example, all instances of the \(N\)-puzzle (for a particular value of \(N\)) share the same search space. We evaluated SING as a shared search space, single-model learner on the following PDDL domains:

* 24-puzzle: PDDL encodings of the standard 50-instance benchmark set of Korf and Felner (2002)
* 35-puzzle: 50 randomly generated instances
* blocks25: 100 blocks instances with 25 blocks, generated using the generator from [10]
* pancake: 100 randomly generated instances with 14 pancakes.

For each domain above, we ran the learning phase (training data generation and \(h_{\text{nn}}\) training) _once_ to learn a heuristic \(h_{\text{nn}}\) for the domain. For 24-puzzle, we used the C4 configuration (Table 1 shows configuration details); training data generation took 7 seconds and training took 61 seconds. For blocks25, we used the C5 configuration; training data generation took 502 seconds and training took 228 seconds. For pancake, we used the C4 configuration; training data generation took 21 seconds and training took 377 seconds. Note that for these 3 domains, we tried several SING configurations (i.e., manual tuning) and report the results for the best configuration. We are currently investigating automated tuning (hyperparameter optimization) to find the best configuration for a given domain.

Table 2 and Figures 1-2 compare the coverage, node expansions, and runtime (on solved instances) of GBFS using \(h_{\text{nn}}\), \(h_{\text{ff}}\), and \(h_{\text{lm}}\) with a 30 min time limit per instance and an 8GB RAM limit, using an Intel(R) Xeon(R) CPU E5-2680 v2. \(h_{\text{nn}}\) had or tied for the highest coverage on all 4 domains. On blocks25 and pancake, \(h_{\text{nn}}\) had the highest coverage. On 24-puzzle, 35-puzzle and pancake, \(h_{\text{nn}}\) had the lowest median runtime. Thus, \(h_{\text{nn}}\) achieved competitive performance on all of these domains compared to both \(h_{\text{ff}}\) and \(h_{\text{lm}}\) in this shared search space evaluation setting. Note that while \(h_{\text{nn}}\) and \(h_{\text{ff}}\) expanded a comparable number of nodes, \(h_{\text{nn}}\) had a significantly higher median node expansion rate than \(h_{\text{ff}}\), resulting in faster runtimes. Figure 3 compares heuristic accuracy (\(h\)-value minus true distance) for a set of 4400 states for \(h_{\text{nn}}\), \(h_{\text{ff}}\), \(h_{\text{lm}}\), and \(h_{\text{gc}}\) (goal count). For states with true distance \(\leq 30\) from the goal state, \(h_{\text{nn}}\) is fairly accurate. This accuracy and the fast evaluation speed due to the simple neural network enable efficient, fast search.

## 5 Evaluation: Instance-Specific Learning

In Section 4, we evaluated SING for learning domain-specific heuristics which could be reused on many instances sharing the same search space, so the evaluation focused on search time, assuming that the time spent learning \(h_{\text{nn}}\) can be amortized across multiple instances. Next, we evaluate SING as an instance-specific learner in an IPC Satisficing track setting, where SING is given 30 minutes total for all phases, including learning (training data collection and training) and search.
Each run of SING starts from scratch - _nothing is reused across instances, learning costs are not amortized, and the heuristic is learned specifically for solving a given instance once._ We evaluate SING on a large set of standard benchmarks from the IPC, all with unit-cost actions. All runs were given a total time limit of 30 minutes for both learning and search (i.e., including training data collection, training, and search using \(h_{\text{nn}}\)) and 8GB RAM per instance. We evaluated the SING configurations in Table 1. As baselines for comparison, we also evaluated blind search, the goal count heuristic (\(h_{\text{gc}}\)), the Fast Forward heuristic (\(h_{\text{ff}}\)) (Hoffmann and Nebel 2001), and the landmark count heuristic (\(h_{\text{lm}}\)) (Hoffmann, Porteous, and Sebastia 2004). As an additional baseline we also evaluate SING/L, a configuration of SING which is very similar to LHFCP (Geissmann 2015) (see Table 1). This configuration is the same as C3, except that instead of the derived inverse operators (Section 3.2), SING/L uses only the actions available in the forward model.

\begin{table}
\begin{tabular}{|l|c c c c c c c|}
\hline
name & state vector & backward space & rev. search & inversion & NN \# of hidden & NN nodes hidden & samples \# \\
\hline
C2 & boolean & regression & DFS & yes & 1 & 16 & \(10^{5}\) \\
C3 & SAS+ & explicit & rand. walk & yes & 1 & 16 & \(10^{5}\) \\
C4 & boolean & explicit & DFS & yes & 4 & 64 & \(10^{5}\) \\
C5 & boolean & explicit & DFS & yes & 1 & 16 & \(4\times 10^{5}\) \\
\hline
SING/L & SAS+ & explicit & rand. walk & no & 1 & 16 & \(10^{5}\) \\
\hline
\end{tabular}
\end{table}
Table 1: SING configurations used in experiments. “state vector”: vector representation of states. “backward space”: search space for the training data generation backward search. “rev. search”: search strategy for the training data generation backward search. “inversion”: whether derived inverse operators are used. “NN # of hidden”: # of hidden layers in \(h_{\text{nn}}\). “NN nodes hidden”: # of nodes per hidden layer. “samples #”: # of sample states collected in the training data collection phase using the sampling search. C2, C3 and C4 perform 500 searches, with a limit of 200 samples/search (\(10^{5}\) samples). C5 performs 800 searches, with a limit of 500 samples/search (\(4\times 10^{5}\) samples).

Table 3 shows the coverage results (# of instances solved). SING configurations C2, C3, C4, and C5 significantly outperform blind search, showing that SING successfully learned some useful heuristic information. The SING/L (LHFCP) configuration performed comparably to blind search, consistent with the results in (Geissmann 2015). Configuration C3, which differs from SING/L only in that action inversion is used, has much higher coverage than SING/L, showing the effectiveness of action inversion. C2 outperforms \(h_{\text{ff}}\) on 5 domains and outperforms \(h_{\text{lm}}\) on 2 domains. C3 outperforms \(h_{\text{ff}}\) on 5 domains and \(h_{\text{lm}}\) on 1 domain. C4 outperforms \(h_{\text{ff}}\) on 4 domains, and C5 outperforms \(h_{\text{ff}}\) on 3 domains. Thus, although none of the SING configurations are competitive with \(h_{\text{ff}}\) and \(h_{\text{lm}}\) with respect to overall coverage, these results indicate that there are some domains where competitive performance can be obtained within a 30 minute limit, including the time to learn an instance-specific heuristic function entirely from scratch without a teacher.

## 6 Ablation Study

To understand the relative impact of each of the new components of SING compared to LHFCP, we performed an ablation study comparing the following configurations: (1) C5': configuration C5 (Table 1) with fewer training samples (100k instead of 400k), (2) C5'/rw: same as C5', except using random walk instead of DFS, (3) C5'/sas: same as C5', except using SAS+ instead of the boolean state representation, (4) C5'/reg: same as C5', except using regression instead of the explicit search space, (5) C5'/orig: same as C5',
except using the original operators only (no action inversion), and (6) C5'/mse: same as C5', except using MSE instead of the relative error sum loss function for NN training. All configurations were run with a 30 min, 2GB limit on the same IPC instances used in the above experiment. The coverages of the configurations were 604 for C5', 563 for C5'/rw, 481 for C5'/sas, 651 for C5'/reg, 420 for C5'/orig, and 559 for C5'/mse. This shows that the use of DFS in backward search, the use of the boolean state representation, the use of action inversion, and the use of the relative error sum loss function all have a significant positive impact on performance. On the other hand, the effect of using regression vs. explicit state search for the backward search during training data generation is highly domain-dependent, with regression performing better on some domains and explicit search on others, as can be seen by comparing configurations C2 (which is the same as C5'/reg) vs. C5 in Table 3.

## 7 Related Work

A broad survey of learning for domain-independent planning is (Celorrio et al. 2012). Satzger and Kramer (2013) developed a neural network based, domain-specific heuristic for classical planning. They used random problem generators to create instances for training the neural network. Their training process also relies on the use of an oracle (the FD planner with an admissible heuristic) to provide the true distance from a state to a goal. Shen et al. (2020) proposed an approach to learning domain-independent (as well as domain-dependent) heuristics using Hypergraph Networks. They showed that it was possible to successfully learn domain-independent heuristics which performed well even on domains which were not in the training data. As this approach uses a hypergraph based on the delete relaxation of the original planning instance, it is quite different from the minimalist approach taken in SING, which does not use any such derived features and uses only the raw state vector. Their training data generation method is forward-search based, similar to the forward approach of Ferber et al. described in Section 2 (Ferber, Helmert, and Hoffmann 2020). In addition, while their work focuses on generalization capability and search efficiency (node expansions) across domains, with runtime competitiveness left as future work, our work seeks to achieve runtime competitiveness using a simple NN architecture.

\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c}
\hline \hline
 & \multicolumn{3}{c}{coverage rate} & \multicolumn{3}{c}{median \#expansions} & \multicolumn{3}{c}{median \#exp. per second} & \multicolumn{3}{c}{median runtime} \\
 & \(h_{\text{nn}}\) & \(h_{\text{ff}}\) & \(h_{\text{lm}}\) & \(h_{\text{nn}}\) & \(h_{\text{ff}}\) & \(h_{\text{lm}}\) & \(h_{\text{nn}}\) & \(h_{\text{ff}}\) & \(h_{\text{lm}}\) & \(h_{\text{nn}}\) & \(h_{\text{ff}}\) & \(h_{\text{lm}}\) \\
\hline
24-puzzle & **100.0** & **100.0** & **100.0** & **5,514** & 9,232 & 67,859 & 10,649 & 3,862 & **39,633** & **0.52** & 2.31 & 1.59 \\
35-puzzle & **100.0** & **100.0** & **100.0** & 122,463 & **57,045** & 1,650,552 & 9,313 & 3,749 & **74,487** & **12.95** & 15.20 & 21.86 \\
blocks & **84.0** & 73.0 & 83.0 & 353,856 & 332,974 & **33,658** & 14,926 & 2,830 & **24,054** & 26.07 & 126.14 & **1.56** \\
pancake & **100.0** & 48.0 & **100.0** & **74,873** & 324,925 & 1,620,030 & 25,261 & 912 & **134,248** & **2.93** & 347.55 & 10.60 \\
Average & **96.0** & 80.2 & 95.8 & **139,177** & 181,044 & 843,025 & 15,038 & 2,838 & **68,106** & 10.61 & 122.80 & **8.90** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Domain-specific heuristics: reusing a single learned model across many instances of the same shared search space domain. The (sampling, training) times were (28s, 210s) for 24-puzzle, (276s, 764s) for 35-puzzle, (502s, 228s) for blocks25, and (21s, 377s) for pancake.

Figure 1: Runtime (seconds) for 24-puzzle, 35-puzzle, and blocks25. \(h_{\text{nn}}\) vs. \(h_{\text{ff}}\) and \(h_{\text{lm}}\).

Figure 2: Runtime (seconds) for pancake (14 pancakes, 100 instances). \(h_{\text{nn}}\) vs. \(h_{\text{ff}}\) and \(h_{\text{lm}}\).
\begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{coverage rate} & \multicolumn{3}{c|}{median \#expansions} & \multicolumn{3}{c|}{median \#exp. per second} & \multicolumn{3}{c}{median runtime} \\ & \(h_{\text{nn}}\) & \(h_{\text{ff}}\) & \(h_{\text{lm}}\) & \(h_{\text{nn}}\) & \(h_{\text{ff}}\) & \(h_{\text{lm}}\) & \(h_{\text{nn}}\) & \(h_{\text{ff}}\) & \(h_{\text{lm}}\) & \(h_{\text{nn}}\) & \(h_{\text{ff}}\) & \(h_{\text{lm}}\) \\ \hline 24-puzzle & **100.0** & **100.0** & **100.0** & **5,514** & 9,232 & 67,859 & 10,649 & 3,862 & **39,633** & **0.52** & 2.31 & 1.59 \\ 35-puzzle & **100.0** & **100.0** & **100.0** & 122,463 & **57,045** & 1,650,552 & 9,313 & 3,749 & **74,487** & **12.95** & 15.20 & 21.86 \\ blocks & **84.0** & 73.0 & 83.0 & 353,856 & 332,974 & **33,658** & 14,926 & 2,830 & **24,054** & 26.07 & 126.14 & **1.56** \\ pancake & **100.0** & 48.0 & **100.0** & **74,873** & 324,925 & 1,620,030 & 25,261 & 912 & **134,248** & **2.93** & 347.55 & 10.60 \\ Average & **96.0** & 80.2 & 95.8 & **139,177** & 181,044 & 843,025 & 15,038 & 2,838 & **68,106** & 10.61 & 122.80 & **8.90** \\ \hline \hline \end{tabular} \end{table} Table 2: Domain-specific heuristics: Reusing a single learned model across many instances of the same shared search space domain. The (sampling, training) times were (28s, 210s) for 24-puzzle, (276s, 764s) for 35-puzzle, (502s, 228s) for blocks25, and (21s, 377s) for pancake.

Figure 1: Runtime (seconds) for 24-puzzle, 35-puzzle, and blocks25. \(h_{\text{nn}}\) vs. \(h_{\text{ff}}\) and \(h_{\text{lm}}\).

Figure 2: Runtime (seconds) for pancake (14 pancakes, 100 instances). \(h_{\text{nn}}\) vs. \(h_{\text{ff}}\) and \(h_{\text{lm}}\).

Random-walk sampling of the search space of deterministic planning problems for the purpose of learning a control policy for a reactive agent was proposed in [12]. This differs from SING in that SING learns a heuristic function which estimates distances to a goal state and guides search (GBFS), instead of a reactive policy. There is also a rapidly growing body of work on learning neural network based policies for probabilistic domains (c.f., [13, 14, 15, 16, 17]), which is also related to learning heuristic evaluation functions for deterministic domains.

## 8 Conclusion

We investigated a supervised learning approach to learning a heuristic evaluation function for search-based, domain-independent classical planning, where the training data is generated using backward search. Although LHFCP, a previous system, followed the same basic approach, it performed only comparably to blind search. SING pushes this approach much further using (1) backward search for training data generation using regression, as well as derived inverse operators for explicit space search, (2) DFS-based backward search for training data generation, (3) a propositional input vector representation, and (4) a relative error loss function. We showed that SING can achieve performance competitive with \(h_{\text{ff}}\) and \(h_{\text{lm}}\) on several domains, both in shared search space scenarios where heuristics can be reused across instances, as well as in single-instance learning where both learning and search using the learned heuristic must be performed within a given time limit. SING is a relatively simple, minimalist system. SING uses _only_ a PDDL description of a single problem instance as input. No additional problem generators or training instances are used. Learning is from scratch, and unlike the forward search based training data generation approach investigated by [12], SING does not use any standard heuristics during training data generation. It uses a very simple feedforward neural network architecture, with no feature engineering.
The only "features" used by SING are the raw state vectors. SING does not exploit any structures used by standard classical planning heuristics such as delete relaxations and causal graphs in either the learning or the search phases. Previous work used features derived/extracted from human-developed heuristics such as \(h_{\text{ff}}\) and explored how learning could be used to exploit such features in new ways [15, 16, 17, 18, 19, 20]. By pushing the performance envelope for a more minimal approach our results provide a baseline for future work on heuristic learning using more sophisticated features and methods. As discussed in Section 3.2, explicit backward search (as opposed to regression) for training data generation can generate states which are not reachable from the start state. Nevertheless, our results show that SING configurations which use explicit backward search perform quite well on some domains. In future work, we will investigate in detail how unreachable states in the training data affect the quality of the learned heuristic and the performance of the (forward) search using the learned heuristic.
2302.05828
Graph Neural Network-Inspired Kernels for Gaussian Processes in Semi-Supervised Learning
Gaussian processes (GPs) are an attractive class of machine learning models because of their simplicity and flexibility as building blocks of more complex Bayesian models. Meanwhile, graph neural networks (GNNs) emerged recently as a promising class of models for graph-structured data in semi-supervised learning and beyond. Their competitive performance is often attributed to a proper capturing of the graph inductive bias. In this work, we introduce this inductive bias into GPs to improve their predictive performance for graph-structured data. We show that a prominent example of GNNs, the graph convolutional network, is equivalent to some GP when its layers are infinitely wide; and we analyze the kernel universality and the limiting behavior in depth. We further present a programmable procedure to compose covariance kernels inspired by this equivalence and derive example kernels corresponding to several interesting members of the GNN family. We also propose a computationally efficient approximation of the covariance matrix for scalable posterior inference with large-scale data. We demonstrate that these graph-based kernels lead to competitive classification and regression performance, as well as advantages in computation time, compared with the respective GNNs.
Zehao Niu, Mihai Anitescu, Jie Chen
2023-02-12T01:07:56Z
http://arxiv.org/abs/2302.05828v1
# Graph Neural Network-Inspired Kernels for Gaussian Processes in Semi-Supervised Learning ###### Abstract Gaussian processes (GPs) are an attractive class of machine learning models because of their simplicity and flexibility as building blocks of more complex Bayesian models. Meanwhile, graph neural networks (GNNs) emerged recently as a promising class of models for graph-structured data in semi-supervised learning and beyond. Their competitive performance is often attributed to a proper capturing of the graph inductive bias. In this work, we introduce this inductive bias into GPs to improve their predictive performance for graph-structured data. We show that a prominent example of GNNs, the graph convolutional network, is equivalent to some GP when its layers are infinitely wide; and we analyze the kernel universality and the limiting behavior in depth. We further present a programmable procedure to compose covariance kernels inspired by this equivalence and derive example kernels corresponding to several interesting members of the GNN family. We also propose a computationally efficient approximation of the covariance matrix for scalable posterior inference with large-scale data. We demonstrate that these graph-based kernels lead to competitive classification and regression performance, as well as advantages in computation time, compared with the respective GNNs. ## 1 Introduction Gaussian processes (GPs) (Rasmussen and Williams, 2006) are widely used in machine learning, uncertainty quantification, and global optimization. In the Bayesian setting, a GP serves as a prior probability distribution over functions, characterized by a mean (often treated as zero for simplicity) and a covariance. Conditioned on observed data with a Gaussian likelihood, the random function admits a posterior distribution that is also Gaussian, whose mean is used for prediction and the variance serves as an uncertainty measure. The closed-form posterior allows for exact Bayesian inference, resulting in great attractiveness and wide usage of GPs. The success of GPs in practice depends on two factors: the observations (training data) and the covariance kernel. We are interested in semi-supervised learning, where only a small amount of data is labeled while a large amount of unlabeled data can be used together for training (Zhu, 2008). In recent years, graph neural networks (GNNs) (Zhou et al., 2020; Wu et al., 2021) emerged as a promising class of models for this problem, when the labeled and unlabeled data are connected by a graph. The graph structure becomes an important inductive bias that leads to the success of GNNs. This inductive bias inspires us to design a GP model under limited observations, by building the graph structure into the covariance kernel. An intimate relationship between neural networks and GPs is known: a neural network with fully connected layers, equipped with a prior probability distribution on the weights and biases, converges to a GP when each of its layers is infinitely wide (Lee et al., 2018; de G. Matthews et al., 2018). Such a result is owing to the central limit theorem (Neal, 1994; Williams, 1996) and the GP covariance can be recursively computed if the weights (and biases) in each layer are iid Gaussian. Similar results for other architectures, such as convolution layers and residual connections, were subsequently established in the literature (Novak et al., 2019; Garriga-Alonso et al., 2019). 
One focus of this work is to establish a similar relationship between GNNs and the limiting GPs. We will derive the covariance kernel that incorporates the graph inductive bias as GNNs do. We start with one of the most widely studied GNNs, the graph convolutional network (GCN) (Kipf and Welling, 2017), and analyze the kernel universality as well as the limiting behavior when the depth also tends to infinity. We then derive covariance kernels from other GNNs by using a programmable procedure that maps every building block of a neural network to a kernel operation. Meanwhile, we design efficient computational procedures for posterior inference (i.e., regression and classification). GPs are notoriously difficult to scale because of the cubic complexity with respect to the number of training data points. Benchmark graph datasets used by the GNN literature may contain thousands or even millions of labeled nodes (Hu et al., 2020). The semi-supervised setting worsens the scenario, as the covariance matrix needs to be (recursively) evaluated in full because of the graph convolution operation. We propose a Nyström-like scheme to perform low-rank approximations and apply the approximation recursively on each layer, to yield a low-rank kernel matrix. Such a matrix can be computed scalably. We demonstrate through numerical experiments that the GP posterior inference is much faster than training a GNN and subsequently performing predictions on the test set. We summarize the contributions of this work as follows: 1. We derive the GP as a limit of the GCN when the layer widths tend to infinity and study the kernel universality and the limiting behavior in depth. 2. We propose a computational procedure to compute a low-rank approximation of the covariance matrix for practical and scalable posterior inference. 3. We present a programmable procedure to compose covariance kernels and their approximations and show examples corresponding to several interesting members of the GNN family. 4. We conduct comprehensive experiments to demonstrate that the GP model performs favorably compared to GNNs in prediction accuracy while being significantly faster in computation. ## 2 Related Work It has long been observed that GPs are limits of standard neural networks with one hidden layer when the layer width tends to infinity (Neal, 1994; Williams, 1996). Recently, renewed interest in the equivalence between GPs and neural networks led to extensions to deep neural networks (Lee et al., 2018; de G. Matthews et al., 2018) as well as modern neural network architectures, such as convolution layers (Novak et al., 2019), recurrent networks (Yang, 2019), and residual connections (Garriga-Alonso et al., 2019). The term NNGP (neural network Gaussian process) henceforth emerged in the context of Bayesian deep learning. Besides the fact that an infinite neural network defines a kernel, the training of a neural network by using gradient descent also defines a kernel--the neural tangent kernel (NTK)--that describes the evolution of the network (Jacot et al., 2018; Lee et al., 2019). Python library support was developed to automatically construct the NNGP and NTK kernels based on programming the corresponding neural networks (Novak et al., 2020). GNNs are neural networks that handle graph-structured data (Zhou et al., 2020; Wu et al., 2021). They are a promising class of models for semi-supervised learning.
Many GNNs use the message-passing scheme (Gilmer et al., 2017), where neighborhood information is aggregated to update the representation of the center node. Representative examples include GCN (Kipf and Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Velickovic et al., 2018), and GIN (Xu et al., 2019). It is found that the performance of GNNs degrades as they become deep; one approach to mitigating the problem is to insert residual/skip connections, as done by JumpingKnowledge (Xu et al., 2018), APPNP (Gasteiger et al., 2019), and GCNII (Chen et al., 2020). Exact GP inference is costly, because it requires the inverse of the \(N\times N\) dense kernel matrix. Scalable approaches include low-rank methods, such as the Nyström approximation (Drineas and Mahoney, 2005), random features (Rahimi and Recht, 2007), and KISS-GP (Wilson and Nickisch, 2015); as well as multi-resolution (Katzfuss, 2017) and hierarchical methods (Chen et al., 2017; Chen and Stein, 2021). Prior efforts on integrating graphs into GPs exist. Ng et al. (2018) define a GP kernel by combining a base kernel with the adjacency matrix; it is related to a special case of our kernels where the network has only one layer and the output admits a robust-max likelihood for classification. Hu et al. (2020) explore a similar route to us, by taking the limit of a GCN, but their exploration is less comprehensive because it does not generalize to other GNNs and does not tackle the scalability challenge. ## 3 Graph Convolutional Network as a Gaussian Process We start with a few notations used throughout this paper. Let an undirected graph be denoted by \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(N=|\mathcal{V}|\) nodes and \(M=|\mathcal{E}|\) edges. For notational simplicity, we use \(A\in\mathbb{R}^{N\times N}\) to denote the original graph adjacency matrix or any modified/normalized version of it. Using \(d_{l}\) to denote the width of the \(l\)-th layer, the layer architecture of GCN reads \[X^{(l)}=\phi\left(AX^{(l-1)}W^{(l)}+b^{(l)}\right), \tag{1}\] where \(X^{(l-1)}\in\mathbb{R}^{N\times d_{l-1}}\) and \(X^{(l)}\in\mathbb{R}^{N\times d_{l}}\) are layer inputs and outputs, respectively; \(W^{(l)}\in\mathbb{R}^{d_{l-1}\times d_{l}}\) and \(b^{(l)}\in\mathbb{R}^{1\times d_{l}}\) are the weights and the biases, respectively; and \(\phi\) is the ReLU activation function. The graph convolutional operator \(A\) is a symmetric normalization of the graph adjacency matrix with self-loops added (Kipf and Welling, 2017). For ease of exposition, it will be useful to rewrite the matrix notation (1) in element-sums and products. To this end, for a node \(x\), let \(z_{i}^{(l)}(x)\) and \(x_{i}^{(l)}(x)\) denote the pre- and post-activation value at the \(i\)-th coordinate in the \(l\)-th layer, respectively. Particularly, in an \(L\)-layer GCN, \(x^{(0)}(x)\) is the input feature vector and \(z^{(L)}(x)\) is the output vector. The layer architecture of GCN reads \[y_{i}^{(l)}(x)=\sum_{j=1}^{d_{l-1}}W_{ji}^{(l)}x_{j}^{(l-1)}(x), \quad z_{i}^{(l)}(x)=b_{i}^{(l)}+\sum_{v\in\mathcal{V}}A_{xv}y_{i}^{(l)}(v), \quad x_{i}^{(l)}(x)=\phi(z_{i}^{(l)}(x)). \tag{2}\] ### Limit in the Width The following theorem states that when the weights and biases in each layer are iid zero-mean Gaussians, in the infinite-width limit the GCN output \(z^{(L)}(x)\) is a multi-output GP over the index \(x\).
**Theorem 1**.: _Assume \(d_{1},\ldots,d_{L-1}\) to be infinite in succession and let the bias and weight terms be independent with distributions_ \[b_{i}^{(l)}\sim\mathcal{N}(0,\sigma_{b}^{2}),\quad W_{ij}^{(l)} \sim\mathcal{N}(0,\sigma_{w}^{2}/d_{l-1}),\quad l=1,\ldots,L.\] _Then, for each \(i\), the collection \(\{z_{i}^{(l)}(x)\}\) over all graph nodes \(x\) follows the normal distribution \(\mathcal{N}(0,K^{(l)})\), where the covariance matrix \(K^{(l)}\) can be computed recursively by_ \[C^{(l)}=\mathrm{E}_{z_{i}^{(l)}\sim\mathcal{N}(0,K^{(l)})}[\phi( z_{i}^{(l)})\phi(z_{i}^{(l)})^{T}], l=1,\ldots,L, \tag{3}\] \[K^{(l+1)}=\sigma_{b}^{2}\mathbbm{1}_{N\times N}+\sigma_{w}^{2}AC ^{(l)}A^{T}, l=0,\ldots,L-1. \tag{4}\] All proofs of this paper are given in the appendix. Note that, different from a usual GP, which is a random function defined over a connected region of the Euclidean space, here \(z^{(L)}\) is defined over a discrete set of graph nodes. In the usual use of a graph in machine learning, this set is finite, such that the function distribution degenerates to a multivariate distribution. In semi-supervised learning, the dimension of the distribution, \(N\), is fixed when one conducts transductive learning; but it will vary in the inductive setting because the graph will have new nodes and edges. One special consideration for a graph-based GP, compared with a usual GP, is that the covariance matrix will need to be recomputed from scratch whenever the graph alters. Theorem 1 leaves out the base definition \(C^{(0)}\), whose entry denotes the covariance between two input nodes. The traditional literature uses the inner product \(C^{(0)}(x,x^{\prime})=\frac{x\cdot x^{\prime}}{d_{0}}\) (Lee et al., 2018), but nothing prevents us from using any positive-definite kernel alternatively.1 For example, we could use the squared exponential kernel \(C^{(0)}(x,x^{\prime})=\exp\left(-\frac{1}{2}\sum_{j=1}^{d_{0}}\left(\frac{x_{j}-x _{j}^{\prime}}{\ell_{j}}\right)^{2}\right)\). Such flexibility in essence performs an implicit feature transformation as preprocessing. Footnote 1: Here, we abuse the notation and use \(x\) in place of \(x^{(0)}(x)\) in the inner product. ### Universality A covariance kernel is positive definite; hence, the Moore-Aronszajn theorem (Aronszajn, 1950) suggests that it defines a unique Hilbert space for which it is a reproducing kernel. If this space is dense, then the kernel is called _universal_. One can verify universality by checking if the kernel matrix is positive definite for any set of distinct points.2 For the case of graphs, it suffices to verify if the covariance matrix for all nodes is positive definite. Footnote 2: Note the conventional confusion in terminology between functions and matrices: a kernel function is positive definite (resp. strictly positive definite) if the corresponding kernel matrix is positive semi-definite (resp. positive definite) for any collection of distinct points. We carry out this verification for the ReLU activation function. It is known that the kernel \(\mathrm{E}_{w\sim\mathcal{N}(0,I_{d})}[\phi(w\cdot x)\phi(w\cdot x^{\prime})]\) admits a closed-form expression as a function of the angle between \(x\) and \(x^{\prime}\), hence named the _arc-cosine_ kernel (Cho & Saul, 2009). We first establish the following lemma, which states that the kernel is universal over a half-space.
**Lemma 2**.: _The arc-cosine kernel is universal on the upper-hemisphere \(S=\left\{x\in\mathbb{R}^{d}:\|x\|_{2}=1,x_{1}>0\right\}\) for all \(d\geq 2\)._ It is also known that the expectation in (3) is proportional to the arc-cosine kernel up to a factor \(\sqrt{K^{(l)}(x,x)K^{(l)}(x^{\prime},x^{\prime})}\) (Lee et al., 2018). Therefore, we iteratively work on the post-activation covariance (3) and the pre-activation covariance (4) and show that the covariance kernel resulting from the limiting GCN is universal, for any GCN with three or more layers. **Theorem 3**.: _Assume \(A\) is irreducible and non-negative and \(C^{(0)}\) does not contain two linearly dependent rows. Then, \(K^{(l)}\) is positive definite for all \(l\geq 3\)._ ### Limit in the Depth The depth of a neural network exhibits interesting behaviors. Deep learning tends to favor deep networks because of their empirically outstanding performance, exemplified by generations of convolutional networks for the ImageNet classification (Krizhevsky et al., 2012; Wortsman et al., 2022); while graph neural networks tend to be shallow because of the over-smoothing and over-squashing properties (Li et al., 2018; Topping et al., 2022). For multi-layer perceptrons (networks with fully connected layers), several previous works have noted that the recurrence relation of the covariance kernel across layers leads to convergence to a fixed-point kernel, when the depth \(L\rightarrow\infty\) (see, e.g., Lee et al. (2018); in Appendix B.5, we elaborate this limit). In what follows, we offer the parallel analysis for GCN. **Theorem 4**.: _Assume \(A\) is symmetric, irreducible, aperiodic, and non-negative with Perron-Frobenius eigenvalue \(\lambda>0\). The following results hold as \(l\rightarrow\infty\)._ 1. _When_ \(\sigma_{b}^{2}=0\)_,_ \(\rho_{\min}(K^{(l)})\nearrow 1\)_, where_ \(\rho_{\min}\) _denotes the minimum correlation between any two nodes_ \(x\) _and_ \(x^{\prime}\)_._ 2. _When_ \(\sigma_{w}^{2}<2/\lambda^{2}\)_, a subsequence of_ \(K^{(l)}\) _converges to some matrix._ 3. _When_ \(\sigma_{w}^{2}>2/\lambda^{2}\)_, let_ \(c_{l}=(\sigma_{w}^{2}\lambda^{2}/2)^{l}\)_; then,_ \(K^{(l)}/c_{l}\to vv^{T}\) _where_ \(v\) _is an eigenvector corresponding to_ \(\lambda\)_._ A few remarks follow. The first case implies that the correlation matrix converges monotonically to a matrix of all ones. As a consequence, up to some scaling \(c_{l}^{\prime}\) that may depend on \(l\), the scaled covariance matrix \(K^{(l)}/c_{l}^{\prime}\) converges to a rank-1 matrix. The third case shares a similar result, with the limit explicitly spelled out, but note that the eigenvector \(v\) may not be normalized. The second case is challenging to analyze. Based on empirical verification, we speculate that a stronger result--convergence of \(K^{(l)}\) to a unique fixed point--may hold. ## 4 Scalable Computation through Low-Rank Approximation The computation of the covariance matrix \(K^{(L)}\) through the recursion (3)-(4) is the main computational bottleneck for GP posterior inference. We start the exposition with the mean prediction. We compute the posterior mean \(\widehat{y}_{*}=K^{(L)}_{*b}(K^{(L)}_{bb}+\epsilon I)^{-1}y_{b}\), where the subscripts \(b\) and \(*\) denote the training set and the prediction set, respectively; and \(\epsilon\), called the _nugget_, is the noise variance of the training data. Let there be \(N_{b}\) training nodes and \(N_{*}\) prediction nodes.
It is tempting to compute only the \((N_{b}+N_{*})\times N_{b}\) submatrix of \(K^{(L)}\) for the task, but the recursion (4) requires the full \(C^{(L-1)}\) in the presence of \(A\), and hence all the full \(C^{(l)}\)'s and \(K^{(l)}\)'s. To reduce the computational costs, we resort to a low-rank approximation of \(C^{(l)}\), from which we easily see that \(K^{(l+1)}\) is also low-rank. Before deriving the approximation recursion, we note (again) that for the ReLU activation \(\phi\), \(C^{(l)}\) in (3) is the arc-cosine kernel with a closed-form expression: \[C^{(l)}_{xx^{\prime}}=\frac{1}{2\pi}\sqrt{K^{(l)}_{xx}K^{(l)}_{x^{\prime}x^{ \prime}}}\left(\sin\theta^{(l)}_{xx^{\prime}}+(\pi-\theta^{(l)}_{xx^{\prime}} )\cos\theta^{(l)}_{xx^{\prime}}\right)\quad\text{where}\quad\theta^{(l)}_{ xx^{\prime}}=\arccos\left(\frac{K^{(l)}_{xx^{\prime}}}{\sqrt{K^{(l)}_{xx}K^{(l)}_{x^{ \prime}x^{\prime}}}}\right). \tag{5}\] Hence, the main idea is: starting with a low-rank approximation of \(K^{(l)}\), compute an approximation of \(C^{(l)}\) by using (5), and then obtain an approximation of \(K^{(l+1)}\) based on (4); then, repeat. To derive the approximation, we use the subscript \(a\) to denote a set of landmark nodes with cardinality \(N_{a}\). The Nyström approximation (Drineas & Mahoney, 2005) of \(K^{(0)}\) is \(K^{(0)}_{:a}(K^{(0)}_{aa})^{-1}K^{(0)}_{a:}\), where the subscript : denotes retaining all rows/columns. We rewrite this approximation in the Cholesky style as \(Q^{(0)}Q^{(0)}{}^{T}\), where \(Q^{(0)}=K^{(0)}_{:a}(K^{(0)}_{aa})^{-\frac{1}{2}}\) has size \(N\times N_{a}\). We proceed by induction. Let \(K^{(l)}\) be approximated by \(\widehat{K}^{(l)}=Q^{(l)}Q^{(l)}{}^{T}\), where \(Q^{(l)}\) has size \(N\times(N_{a}+1)\). We apply (5) to compute an approximation to \(C^{(l)}_{:a}\), namely \(\widehat{C}^{(l)}_{:a}\), by using \(\widehat{K}^{(l)}_{:a}\). Then, (4) leads to a Cholesky style approximation of \(K^{(l+1)}\): \[\widehat{K}^{(l+1)}=\sigma_{b}^{2}\mathbf{1}_{N\times N}+\sigma_{w}^{2}A \widehat{C}^{(l)}A^{T}\equiv Q^{(l+1)}Q^{(l+1)}{}^{T},\] where \(Q^{(l+1)}=\left[\sigma_{w}A\widehat{C}^{(l)}_{:a}(\widehat{C}^{(l)}_{aa})^{- \frac{1}{2}}\quad\sigma_{b}\mathbf{1}_{N\times 1}\right]\). Clearly, \(Q^{(l+1)}\) has size \(N\times(N_{a}+1)\), completing the induction. In summary, \(K^{(L)}\) is approximated by a rank-\((N_{a}+1)\) matrix \(\widehat{K}^{(L)}=Q^{(L)}{Q^{(L)}}^{T}\). The computation of \(Q^{(L)}\) is summarized in Algorithm 1. Once it is formed, the posterior mean is computed as \[\widehat{y}_{*}\approx\widehat{K}^{(L)}_{*b}(\widehat{K}^{(L)}_{bb}+\epsilon I )^{-1}y_{b}=Q^{(L)}_{*:}\left({Q^{(L)}_{b:}}^{T}Q^{(L)}_{b:}+\epsilon I\right) ^{-1}{Q^{(L)}_{b:}}^{T}y_{b}, \tag{6}\] where we note that the matrix to be inverted has size \((N_{a}+1)\times(N_{a}+1)\), which is assumed to be significantly smaller than \(N_{b}\times N_{b}\). Similarly, the posterior variance is \[\widehat{K}^{(L)}_{**}-\widehat{K}^{(L)}_{*b}(\widehat{K}^{(L)}_{bb}+\epsilon I )^{-1}\widehat{K}^{(L)}_{b*}=\epsilon Q^{(L)}_{*:}\left({Q^{(L)}_{b:}}^{T}Q^{ (L)}_{b:}+\epsilon I\right)^{-1}{Q^{(L)}_{*:}}^{T}. \tag{7}\]
The computational costs of \(Q^{(L)}\) and the posterior inference (6)-(7) are summarized in Table 1.

\begin{table} \begin{tabular}{l c c} \hline & Time \(O(\cdot)\) & Storage \(O(\cdot)\) \\ \hline Computation of \(Q^{(L)}\) & \(LMN_{a}+LNN_{a}^{2}+LN_{a}^{3}\) & \(NN_{a}\) \\ Posterior mean (6) & \(N_{*}N_{a}+N_{b}N_{a}^{2}+N_{a}^{3}\) & \((N_{b}+N_{*})N_{a}\) \\ Posterior variance (7) & \(N_{*}N_{a}+N_{b}N_{a}^{2}+N_{a}^{3}\) & \((N_{b}+N_{*})N_{a}\) \\ \hline \end{tabular} \end{table} Table 1: Computational costs. \(M\): number of edges; \(N\): number of nodes; \(N_{b}\): number of training nodes; \(N_{*}\): number of prediction nodes; \(N_{a}\): number of landmark nodes; \(L\): number of layers. Assume \(N_{b}\gg N_{a}\). For posterior variance, assume only the diagonal is needed.

```
0: \(Q^{(0)}\) such that \(K^{(0)}\approx{Q^{(0)}}{Q^{(0)}}^{T}\)
1: for \(l=0,\ldots,L-1\) do
2:   Compute \(\widehat{K}^{(l)}_{:a}=Q^{(l)}{Q^{(l)}_{a:}}^{T}\)
3:   Compute \(\widehat{C}^{(l)}_{:a}\) by (5), where \(C^{(l)}\) (resp. \(K^{(l)}\)) entries are replaced by \(\widehat{C}^{(l)}\) (resp. \(\widehat{K}^{(l)}\)) entries
4:   Compute \(Q^{(l+1)}=\left[\sigma_{w}A\widehat{C}^{(l)}_{:a}(\widehat{C}^{(l)}_{aa})^{-\frac{1}{2}}\quad\sigma_{b}\mathbf{1}_{N\times 1}\right]\)
5: end for
```
**Algorithm 1** Computing \(K^{(L)}\approx\widehat{K}^{(L)}={Q^{(L)}}{Q^{(L)}}^{T}\)

## 5 Composing Graph Neural Network-Inspired Kernels

Theorem 1, together with its proof, suggests that the covariance matrix of the limiting GP can be computed in a composable manner. Moreover, the derivation of Algorithm 1 indicates that the low-rank approximation of the covariance matrix can be similarly composed. Altogether, such a nice property allows one to easily derive the corresponding covariance matrix and its approximation for a new GNN architecture, like writing a program and obtaining a transformation of it automatically through operator overloading (Novak et al., 2020): the covariance matrix is a transformation of the GNN and the composition of the former is in exactly the same manner and order as that of the latter. We call the covariance matrices _programmable_. For example, we write a GCN layer as \(X\gets A\phi(X)W+b\), where for notational simplicity, \(X\) denotes pre-activation rather than post-activation as in the earlier sections. The activation \(\phi\) on \(X\) results in a transformation of the kernel matrix \(K\) into \(g(K)\), defined as: \[g(K):=C=\mathrm{E}_{z\sim\mathcal{N}(0,K)}[\phi(z)\phi(z)^{T}], \tag{8}\] due to (3). Moreover, if \(K\) admits a low-rank approximation \(QQ^{T}\), then \(g(K)\) admits a low-rank approximation \(PP^{T}\) where \(P=\mathrm{Chol}(g(K))\) with \[\mathrm{Chol}(C):=C_{:a}C_{aa}^{-\frac{1}{2}}.\] The next operation--graph convolution--multiplies \(A\) to the left of the post-activation. Correspondingly, the covariance matrix \(K\) is transformed to \(AKA^{T}\) and the low-rank approximation factor \(Q\) is transformed to \(AQ\). Then, the operation--multiplying the weight matrix \(W\) to the right--will transform \(K\) to \(\sigma_{w}^{2}K\) and \(Q\) to \(\sigma_{w}Q\). Finally, adding the bias \(b\) will transform \(K\) to \(K+\sigma_{b}^{2}\mathbf{1}_{N\times N}\) and \(Q\) to \([Q\quad\sigma_{b}\mathbf{1}_{N\times 1}]\).
Altogether, we have obtained the following updates per layer: \[\text{GCN}:\quad X \gets A\phi(X)W+b\] \[K \leftarrow\sigma_{w}^{2}Ag(K)A^{T}+\sigma_{b}^{2}\mathbf{1}_{N \times N}\] \[Q \leftarrow\left[\sigma_{w}A\,\mathrm{Chol}(g(QQ^{T}))\quad\sigma_{ b}\mathbf{1}_{N\times 1}\right].\] One may verify the \(K\) update against (3)-(4) and the \(Q\) update against Algorithm 1. Both updates can be automatically derived based on the update of \(X\). We summarize the building blocks of a GNN and the corresponding kernel/low-rank operations in Table 2. The independent-addition building block is applicable to skip/residual connections.

\begin{table} \begin{tabular}{l l l l} \hline Building block & Neural network & Kernel operation & Low-rank operation \\ \hline Input & \(X\gets X^{(0)}\) & \(K\gets C^{(0)}\) & \(Q\leftarrow\mathrm{Chol}(C^{(0)})\) \\ Bias term & \(X\gets X+b\) & \(K\gets K+\sigma_{b}^{2}\mathbf{1}_{N\times N}\) & \(Q\leftarrow[Q\quad\sigma_{b}\mathbf{1}_{N\times 1}]\) \\ Weight term & \(X\gets XW\) & \(K\gets\sigma_{w}^{2}K\) & \(Q\leftarrow\sigma_{w}Q\) \\ Mixed weight term & \(X\gets X(\alpha I+\beta W)\) & \(K\leftarrow(\alpha^{2}+\beta^{2}\sigma_{w}^{2})K\) & \(Q\leftarrow\sqrt{\alpha^{2}+\beta^{2}\sigma_{w}^{2}}Q\) \\ Graph convolution & \(X\gets AX\) & \(K\gets AKA^{T}\) & \(Q\gets AQ\) \\ Activation & \(X\leftarrow\phi(X)\) & \(K\gets g(K)\) & \(Q\leftarrow\mathrm{Chol}(g(QQ^{T}))\) \\ Independent addition & \(X\gets X_{1}+X_{2}\) & \(K\gets K_{1}+K_{2}\) & \(Q\leftarrow[Q_{1}\quad Q_{2}]\) \\ \hline \end{tabular} \end{table} Table 2: Neural network building blocks, kernel operations, and the low-rank counterpart.

For example, here is the composition for the GCNII layer (Chen et al., 2020) without a bias term, where a skip connection with \(X^{(0)}\) occurs: \[\text{GCNII}: X\leftarrow\left((1-\alpha)A\phi(X)+\alpha X^{(0)}\right)((1- \beta)I+\beta W)\] \[K\leftarrow\left((1-\alpha)^{2}Ag(K)A^{T}+\alpha^{2}K^{(0)} \right)((1-\beta)^{2}+\beta^{2}\sigma_{w}^{2})\] \[Q\leftarrow\left[(1-\alpha)A\operatorname{Chol}(g(QQ^{T})) \quad\alpha Q^{(0)}\right]\sqrt{(1-\beta)^{2}+\beta^{2}\sigma_{w}^{2}}.\] For another example of the composability, we consider the popular GIN layer (Xu et al., 2019), which we assume uses a 2-layer MLP after the neighborhood aggregation: \[\text{GIN}: X\leftarrow\phi(A\phi(X)W+b)W^{\prime}+b^{\prime}\] \[K\leftarrow\sigma_{w}^{2}g(B)+\sigma_{b}^{2}\mathbf{1}_{N\times N }\quad\text{where}\quad B=\sigma_{w}^{2}Ag(K)A^{T}+\sigma_{b}^{2}\mathbf{1}_{ N\times N}\] \[Q\leftarrow\left[\sigma_{w}\operatorname{Chol}(g(PP^{T}))\quad \sigma_{b}\mathbf{1}_{N\times 1}\right]\quad\text{where}\quad P=\left[\sigma_{w}A \operatorname{Chol}(g(QQ^{T}))\quad\sigma_{b}\mathbf{1}_{N\times 1}\right].\] Additionally, the updates for a GraphSAGE layer (Hamilton et al., 2017) are given in Appendix C.
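As a concrete rendering of the GCN update just composed, a minimal NumPy sketch of the full-kernel computation via (3)-(5) might look as follows (a low-rank counterpart following Algorithm 1 is sketched later). The base kernel `C0`, hyperparameter values, and function names are illustrative assumptions, not part of the original text.

```python
import numpy as np

def relu_expectation(K):
    """g(K) in (8), i.e., E[phi(z) phi(z)^T] for ReLU, via the closed form (5)."""
    d = np.sqrt(np.maximum(np.diag(K), 1e-12))
    outer = np.outer(d, d)
    theta = np.arccos(np.clip(K / outer, -1.0, 1.0))
    return outer * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def gcngp_kernel(A, C0, L, sigma_w2=1.0, sigma_b2=0.1):
    """K^{(L)} of the limiting GCN via the recursion (3)-(4); C0 is the base kernel."""
    N = A.shape[0]
    K = sigma_b2 * np.ones((N, N)) + sigma_w2 * A @ C0 @ A.T   # K^{(1)} from C^{(0)}
    for _ in range(L - 1):
        K = sigma_b2 * np.ones((N, N)) + sigma_w2 * A @ relu_expectation(K) @ A.T
    return K
```

Posterior mean prediction then uses the standard GP formula \(\widehat{y}_{*}=K^{(L)}_{*b}(K^{(L)}_{bb}+\epsilon I)^{-1}y_{b}\) from Section 4.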
## 6 Experiments

In this section, we conduct a comprehensive set of experiments to evaluate the performance of the GP kernels derived by taking limits on the layer width of GCN and other GNNs. We demonstrate that these GPs are comparable with GNNs in prediction performance, while being significantly faster to compute. We also show that the low-rank version scales favorably, making it suitable for practical use. **Datasets.** The experiments are conducted on several benchmark datasets of varying sizes, covering both classification and regression. They include predicting the topic of scientific papers organized in a citation network (Cora, Citeseer, PubMed, and ArXiv); predicting the community of online posts based on user comments (Reddit); and predicting the average daily traffic of Wikipedia pages using hyperlinks among them (Chameleon, Squirrel, and Crocodile). Details of the datasets (including sources and preprocessing) are given in Appendix D. **Experiment environment, training details, and hyperparameters** are given in Appendix D. **Prediction Performance: GCN-based comparison.** We first conduct the semi-supervised learning tasks on all datasets by using GCN and GPs with different kernels. These kernels include the one equivalent to the limiting GCN (GCNGP), a usual squared-exponential kernel (RBF), and the GGP kernel proposed by Ng et al. (2018).3 Each of these kernels has a low-rank version (suffixed with -X). RBF-X and GGP-X4 use the Nyström approximation, consistent with GCNGP-X. Footnote 3: We apply only the kernel but not the likelihood nor the variational inference used in Ng et al. (2018), for reasons given in Appendix D. Footnote 4: GGP-X in our notation is the Nyström approximation of the GGP kernel, different from a technique under the same name in Ng et al. (2018), which additionally uses the validation set to compute the prediction loss. GPs are by nature suitable for regression. For classification tasks, we use the one-hot representation of labels to set up a multi-output regression. Then, we take the coordinate with the largest output as the class prediction. Such an ad hoc treatment is widely used in the literature, as other more principled approaches (such as using the Laplace approximation on the non-Gaussian posterior) are too time-consuming for large datasets, meanwhile producing no noticeable gain in accuracy. Table 3 summarizes the accuracy for classification and the coefficient of determination, \(R^{2}\), for regression. Whenever randomness is involved, the performance is reported as an average over five runs. The results of the two tasks show different patterns. For classification, GCNGP-(X) is slightly better than GCN and GGP-(X), while RBF-(X) is significantly worse than all others; moreover, the low-rank version is outperformed by using the full kernel matrix. On the other hand, for regression, GCNGP-(X) significantly outperforms GCN, RBF-(X), and GGP-(X); and the low-rank version becomes better. The less competitive performance of RBF-(X) is expected, as it does not leverage the graph inductive bias. It is attractive that GCNGP-(X) is competitive with GCN. **Prediction Performance: Comparison with other GNNs.** In addition to GCN, we conduct experiments with several popularly used GNN architectures (GCNII, GIN, and GraphSAGE) and GPs with the corresponding kernels. We test with the three largest datasets: PubMed, ArXiv, and Reddit, for the latter two of which a low-rank version of the GPs is used for computational feasibility. Table 4 summarizes the results. The observations on the other GNNs largely mirror those on the GCN. In particular, on PubMed the GPs noticeably improve over the corresponding GNNs, while on ArXiv and Reddit the two families perform rather similarly. An exception is GIN for ArXiv, which significantly underperforms the GP counterpart, as well as other GNNs. It may improve with more extensive hyperparameter tuning. **Running time.** We compare the running time of the methods covered by Table 3.
Different from usual neural networks, the training and inference of GNNs do not decouple in full-batch training. Moreover, there is not a universally agreed split between the training and the inference steps in GPs. Hence, we compare the total time for each method. Figure 1 plots the timing results, normalized against the GCN time for ease of comparison. It suggests that GCNGP(-X) is generally faster than GCN. Note that the vertical axis is in the logarithmic scale. Hence, for some of the datasets, the speedup is even one to two orders of magnitude. **Scalability.** For graphs, especially under the semi-supervised learning setting, the computational cost of a GP is much more complex than that of a usual one (which can be simply described as "cubic in the training set size"). One sees in Table 1 the many factors that determine the cost of our graph-based low-rank kernels. To explore the practicality of the proposed method, we use the timings gathered for Figure 1 to obtain an empirical scaling with respect to the graph size, \(M+N\). Figure 2 fits the running times, plotted in the log-log scale, by using a straight line. We see that for neither GCN nor GCNGP(-X) does the actual running time closely follow a polynomial complexity. However, interestingly, the least-squares fittings all lead to a slope of approximately 1, which agrees with a linear cost. Theoretically, only GCNGP-X and GCN are approximately linear with respect to \(M+N\), while GCNGP is cubic. **Analysis on the depth.** The performance of GCN deteriorates with more layers, known as the oversmoothing phenomenon. Adding residual/skip connections mitigates the problem, such as in GCNII. A natural question asks if the corresponding GP kernels behave similarly. Figure 3 shows that the trends of GCN and GCNII are indeed as expected. Interestingly, their GP counterparts both remain stable for depth \(L\) as large as 12. Our depth analysis (Theorem 4) suggests that in the limit, the GPs may perform less well because the kernel matrix may degenerate to rank 1. This empirical result indicates that the drop in performance may not have started yet. \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{PubMed} & \multicolumn{2}{c|}{ArXiv} & \multicolumn{2}{c}{Reddit} \\ architecture & GNN & GNNGP & GNN & GNNGP-X & GNN & GNNGP-X \\ \hline GCN & 0.7649\(\pm\)0.0058 & 0.7960 & 0.6989\(\pm\)0.0016 & 0.7011\(\pm\)0.0011 & 0.9330\(\pm\)0.0006 & 0.9465\(\pm\)0.0003 \\ GCNII & 0.7558\(\pm\)0.0096 & 0.7840 & 0.7008\(\pm\)0.0021 & 0.6955\(\pm\)0.0011 & 0.9482\(\pm\)0.0007 & 0.9500\(\pm\)0.0003 \\ GIN & 0.7406\(\pm\)0.0112 & 0.7690 & 0.6340\(\pm\)0.0066 & 0.6652\(\pm\)0.0012 & 0.9398\(\pm\)0.0016 & 0.9428\(\pm\)0.0005 \\ GraphSAGE & 0.7535\(\pm\)0.0047 & 0.7900 & 0.6984\(\pm\)0.0021 & 0.6962\(\pm\)0.0007 & 0.9628\(\pm\)0.0007 & 0.9539\(\pm\)0.0003 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison (Micro-F1) between GNNs and the corresponding GP kernels.
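To connect the cost figures in Table 1 with Algorithm 1 and the posterior mean (6), here is a minimal NumPy sketch of the low-rank pipeline, complementing the full-kernel sketch given earlier. Variable names, the regularization constants, and the choice of a symmetric inverse square root are illustrative assumptions.

```python
import numpy as np

def arc_cosine_cross(K_na, K_diag, a_idx):
    """Eq. (5) applied to the :a block, given diag(K) and K_{:a}."""
    d = np.sqrt(np.maximum(K_diag, 1e-12))
    outer = d[:, None] * d[a_idx][None, :]
    theta = np.arccos(np.clip(K_na / outer, -1.0, 1.0))
    return outer * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def chol_factor(C_na, a_idx):
    """Chol(C) = C_{:a} C_{aa}^{-1/2} via a symmetric inverse square root."""
    w, V = np.linalg.eigh(C_na[a_idx])
    inv_sqrt = (V / np.sqrt(np.maximum(w, 1e-12))) @ V.T
    return C_na @ inv_sqrt

def gcngp_lowrank_factor(A, Q0, a_idx, L, sigma_w=1.0, sigma_b=0.3):
    """Algorithm 1: return Q^{(L)} with K^{(L)} ~= Q^{(L)} Q^{(L)T}."""
    N = A.shape[0]
    Q = Q0
    for _ in range(L):
        K_na = Q @ Q[a_idx].T            # \hat K^{(l)}_{:a}
        K_diag = np.sum(Q * Q, axis=1)   # diag(\hat K^{(l)})
        C_na = arc_cosine_cross(K_na, K_diag, a_idx)
        Q = np.hstack([sigma_w * (A @ chol_factor(C_na, a_idx)),
                       sigma_b * np.ones((N, 1))])
    return Q

def posterior_mean(Q, b_idx, s_idx, y_b, eps=1e-2):
    """Eq. (6): only an (N_a+1) x (N_a+1) matrix is inverted."""
    Qb = Q[b_idx]
    M = Qb.T @ Qb + eps * np.eye(Q.shape[1])
    return Q[s_idx] @ np.linalg.solve(M, Qb.T @ y_b)
```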
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & GCN & GCNGP & GCNGP-X & RBF & RBF-X & GGP & GGP-X \\ \hline Cora & 0.8183\(\pm\)0.0055 & **0.8280** & 0.7980 & 0.5860 & 0.5850 & 0.7850 & 0.7410 \\ Citeseer & 0.6941\(\pm\)0.0079 & **0.7090** & 0.7080 & 0.6120 & 0.6090 & 0.7060 & 0.6470 \\ PubMed & 0.7649\(\pm\)0.0058 & **0.7960** & 0.7810 & 0.7360 & 0.7340 & 0.7820 & 0.7380 \\ ArXiv & 0.6990\(\pm\)0.0014 & OOM & **0.7011\(\pm\)0.0011** & OOM & 0.5382 & OOM & 0.6527 \\ Reddit & 0.9330\(\pm\)0.0006 & OOM & **0.9465\(\pm\)0.0003** & OOM & 0.5920 & OOM & 0.9058 \\ \hline Chameleon & 0.5690\(\pm\)0.0376 & 0.6720 & **0.6852** & 0.5554 & 0.5613 & 0.5280 & 0.5311 \\ Squirrel & 0.4243\(\pm\)0.0393 & 0.4926 & **0.4998** & 0.3187 & 0.3185 & 0.2440 & 0.2251 \\ Crocodile & 0.6976\(\pm\)0.0323 & 0.8002 & **0.8013** & 0.6643 & 0.6710 & 0.6952 & 0.6810 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of GCNGP, in comparison with GCN and typical GP kernels. The Micro-F1 score is reported for classification tasks and \(R^{2}\) is reported for regression tasks. **Analysis on the landmark set.** The number of landmark nodes, \(N_{a}\), controls the approximation quality of the low-rank kernels and hence the prediction accuracy. On the other hand, the computational costs summarized in Table 1 indicate a dependency on \(N_{a}\) as high as the third power. It is crucial to develop an empirical understanding of the accuracy-time trade-off it incurs. Figure 4 clearly shows that as \(N_{a}\) becomes larger, the running time increase is not linear, while the increase of accuracy diminishes as the landmark set approaches the training set. It is remarkable that using only 1/800 of the training set as landmark nodes already achieves an accuracy surpassing that of GCN, by using time that is only a tiny fraction of the time otherwise needed to gain an additional 1% increase in the accuracy. ## 7 Conclusions We have presented a GP approach for semi-supervised learning on graph-structured data, where the covariance kernel incorporates a graph inductive bias by exploiting the relationship between a GP and a GNN with infinitely wide layers. Similar to other neural networks priorly investigated, one can work out the equivalent GP (in particular, the covariance kernel) for GCN; and inspired by this equivalence, we formulate a procedure to compose covariance kernels corresponding to many other members of the GNN family. Moreover, every building block in the procedure has a low-rank counterpart, which allows for building a low-rank approximation of the covariance matrix that facilitates scalable posterior inference. We demonstrate the effectiveness of the derived kernels used for semi-supervised learning and show their advantages in computation time over GNNs. ## Acknowledgments Mihai Anitescu was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR) under Contract DE-AC02-06CH11347. Jie Chen acknowledges support from the MIT-IBM Watson AI Lab.
2310.07590
Recovering the E and B-mode CMB polarization at sub-degree scales with neural networks
Recovering the polarized cosmic microwave background (CMB) is crucial for shedding light on Cosmic Inflation. Methods with different characteristics should be developed and optimized. We aim to use a neural network called CENN and train it for recovering the E and B modes of the CMB. We train the network with realistic simulations of 256x256 pixel square patches at the 100, 143 and 217 GHz Planck channels, which contain the CMB, thermal dust, synchrotron, PS and noise. We make several training sets: 30, 25 and 20 arcmin resolution patches at the same position in the sky. After being trained, CENN is able to recover the CMB signal at 143 GHz in Q and U patches. Then, we use NaMaster for estimating the EE and BB power spectrum for each input and output patch in the test dataset, as well as the difference between input and output power spectra and the residuals. We also test the methodology using a different foreground model at 5 arcmin resolution without noise. We recover the E-mode, generally finding residuals below the input signal at all scales. In particular, we found a value of about 0.1 muK2 at l<200, decreasing below 0.01 muK2 at smaller scales. For the B-mode, we similarly recover the CMB with residuals between 0.01 and 0.001 muK2. We also train the network with 5 arcmin Planck simulations without noise, obtaining clearly better results with respect to the previous cases. For a different foreground model, the recovery is similar, although B-mode residuals increase above the input signal. In general, we found that the network performs better when training with the same resolution used for testing. Based on the results, CENN seems to be a promising tool for recovering both E and B modes at sub-degree scales in ground-based experiments such as POLARBEAR, SO and CMB-S4. Once its applicability is extended to the full sky, it could be an alternative component separation method for the LiteBIRD satellite.
J. M. Casas, L. Bonavera, J. González-Nuevo, G. Puglisi, C. Baccigalupi, M. M. Cueli, D. Crespo, C. González-Gutiérrez, F. J. de Cos
2023-10-11T15:30:41Z
http://arxiv.org/abs/2310.07590v1
# Recovering the E and B-mode CMB polarization ###### Abstract Context:Recovering the polarized cosmic microwave background (CMB) is crucial for shedding light on the exponential growth of the very early Universe called Cosmic Inflation. In order to recover that signal, methods with different characteristics should be developed and optimized. Aims:We aim to use a neural network called the CMB extraction neural network (CENN) and train it for recovering the E and B modes of the CMB at sub-degree scales. Methods:We train the network with realistic simulations of the 100, 143 and 217 GHz channels of _Planck_, which contain the CMB, thermal dust, synchrotron emission, point sources and instrumental white noise. Each simulation is formed by three patches of 256\(\times\)256 pixels with a 90 arcsec pixel size at latitude \(|b|>30^{\circ}\). We make several training sets with 10000 simulations each: 30, 25 and 20 arcmin resolution patches at the same position in the sky. After being trained, CENN is able to recover the CMB signal at 143 GHz in \(Q\) and \(U\) patches. Then, we use NaMaster for estimating the average \(EE\) and \(BB\) power spectrum for each input and output patch in the validation dataset, as well as the difference between input and output power spectra and the residuals. For generalization, we also test the methodology using a different foreground model at 5 arcmin resolution without noise. Results:We recover the CMB at 30, 25 and 20 arcmin resolution: for the E-mode, we generally found residuals below the input signal at all scales. In particular, we found a value of about \(10^{-1}\)\(\mu K^{2}\) at \(l<200\), decreasing below \(10^{-2}\)\(\mu K^{2}\) at smaller scales. For the B-mode, we similarly recover the CMB with residuals between \(10^{-2}\) and \(10^{-3}\)\(\mu K^{2}\). We also train the network with 5 arcmin _Planck_ simulations without noise, obtaining clearly better results with respect to the previous cases. When testing the network with a different foreground model, the recovery is similar, although B-mode residuals increase above the input signal. In general, we found that the network performs better when training with the same resolution used for testing. Conclusions:Based on these results, CENN seems to be a promising, alternative and flexible tool for recovering both \(E\) and \(B\)-mode polarization at sub-degree scales. Due to its performance at middle and small scales, it could be useful in ground-based experiments such as POLARBEAR, SO and CMB-S4. Once its applicability is extended to the full sky, it could be an alternative component separation method for the LiteBIRD satellite, helping in the search for primordial B-modes. ## 1 Introduction The cosmic microwave background (CMB) is believed to be the afterglow of the Big Bang, dating back to about 380000 years after the birth of the Universe, originating when photons decoupled from baryons and the Universe became transparent to light. It is a key probe of the early times and it is known to be partially linearly polarized, which can be described by the Stokes Q and U parameters (Hu & White 1997). However, the polarization can also be decomposed into gradient and curl modes, known as E and B modes respectively. This decomposition is typically done using the total angular momentum approach, as proposed by Kamionkowski et al. (1997) and Zaldarriaga & Seljak (1997).
The E modes are associated with scalar (density) perturbations in the early universe, while the B modes are sourced by tensor (gravitational) perturbations, arising both from the gravitational lensing of the CMB by intervening large-scale structure and from inflationary gravitational waves. The B-mode polarization of the CMB is an important signal because it can provide information about the presence of primordial gravitational waves. They are a direct probe of inflationary cosmological models, which propose that the Universe underwent a period of exponential expansion in its first stages (Guth 1981, Linde 1982). The strength of the tensor perturbations is characterized by the tensor-to-scalar ratio parameter, \(r\), which is defined as the ratio of the amplitudes of the tensor and scalar perturbations. Several inflationary models predict different values of \(r\), which can be constrained by observations of the CMB (Baumann, 2009). Over the past few years, different experiments have reconstructed the power spectrum of the E modes and their correlation with the total intensity field, particularly the _Planck_ satellite (Planck Collaboration V, 2020), which recovered that spectrum up to sub-degree angular scales. However, the B modes remain much more challenging to detect, as they are expected to be much weaker than the E modes. Currently, the upper limit on the tensor-to-scalar ratio is \(r<0.036\) at 95% confidence level using _Planck_, WMAP and BICEP/Keck Array 2018 data (Bicep/Keck Collaboration et al., 2021). However, upcoming experiments such as LiteBIRD and CMB-S4 are expected to improve this limit significantly, achieving sensitivities of \(\sigma(r)=0.001\) (LiteBIRD Collaboration et al., 2022) and \(\sigma(r)=0.0005\) (Abazajian et al., 2022) respectively. Considering a target value for \(r\) between \(10^{-3}\) and \(10^{-4}\), the detection of B modes in the CMB polarization is a challenging task due to several systematic uncertainties and contamination from foregrounds (Baccigalupi, 2003; Krachmalnicoff et al., 2016), which are microwave Galactic emissions and extragalactic sources. Galactic foregrounds are mainly divided into thermal dust (see Hensley & Draine (2021) for a comprehensive review) and synchrotron emission (Krachmalnicoff et al., 2018; Fuskeland et al., 2021). Extragalactic sources are principally blazars with their jets aligned in the line of sight of the instrument, and their emission is highly linearly polarized due to synchrotron radiation emitted from their active galactic nuclei (Tucci & Toffolatti, 2012; Puglisi et al., 2018). Therefore, ongoing research is focused not only on improving the sensitivity of future experiments but also on developing new techniques for foreground removal, including the use of multi-frequency observations and sophisticated component separation methods. Some of them are described in the works by Leach et al. (2008) and Fuskeland et al. (2023), and range from parametric Bayesian frameworks such as Commander (Eriksen et al., 2008), SMICA (Delabrouille et al., 2003) or FGBuster (Stompor et al., 2009; Puglisi et al., 2022), through template-based methods such as SEVEM (Fernandez-Cobos et al., 2012) and moment-expansion methods (Chluba et al., 2017; Vacher et al., 2022), to minimum variance methods such as NILC (Delabrouille et al., 2009), GNILC (Remazeilles et al., 2011) and cMILC (Remazeilles et al., 2021).
Machine learning (ML) has become increasingly important in various fields due to its ability to learn patterns and make predictions from large amounts of data. Among the most popular ML models are neural networks, which can learn non-linear behaviors from data (Goodfellow et al., 2016). They are composed of interconnected computational units called neurons, which are organized in layers. The learning process of a neural network is an optimization problem where the objective is to minimize a loss function at the end of its architecture. The training of a neural network involves the flow of information from the input layer through the hidden ones and towards the output layer. Each neuron applies a non-linear activation function to the information it receives, allowing only certain information to pass through. After the output layer, the loss function is calculated using a target value in the data called the label. The backpropagation algorithm (Rumelhart et al., 1986) is then used to update the weights and biases of the neurons, allowing the network to improve its predictions. This process is repeated over several epochs until the loss function is minimized, indicating that the network has learned the patterns in the data. Convolutional neural networks (CNN) have become a powerful tool in image analysis and recognition due to their ability to learn and extract features from images. In a CNN, each layer performs a convolution between the input image and a set of filters to produce feature maps. The filters consist of small matrices of weights called kernels that slide over the input image, allowing the network to learn different features at different positions in the image. Then, during the backpropagation process, both the filters and the kernels are updated. Furthermore, fully-convolutional neural networks (FCN) are an evolution of CNN that have been extensively used for image segmentation tasks. They are formed by a series of convolutional layers that process the input image, followed by layers that perform deconvolutions to produce an output segmentation map of the same size as the input image. They have shown great success in various image segmentation tasks, including medical imaging and object detection. Recently, neural networks have been applied to various problems in astrophysics, including component separation with remarkable accuracy and efficiency (Petroff et al., 2020; Jeffrey et al., 2022; Casas et al., 2022; Wang et al., 2022; Yan et al., 2023). Good performance has also been achieved in recognizing foreground models (Farsian et al., 2020), in detecting extragalactic point sources in single- (Bonavera et al., 2021) and multi-frequency (Casas et al., 2022) total intensity _Planck_-like data, and in constraining their polarization flux densities and angles using polarization _Planck_ data (Casas et al., 2023). Also, they have been used for simulating dust foregrounds (Krachmalnicoff & Puglisi, 2021) or inpainting foreground maps (Puglisi & Bai, 2020). In this work, we aim to train the cosmic microwave background extraction neural network (CENN) presented in Casas et al. (2022) with polarization data for recovering the CMB signal in patches of the sky. The work is organized as follows: Section 2 describes the simulated datasets we use for training and validating the network and Section 3 describes the adopted methodology. The results are presented in Section 4 and our conclusions are drawn in Section 5.
## 2 Simulations Our training, validation and testing datasets are formed by realistic simulations of the microwave \(Q\) and \(U\) sky at 100, 143 and 217 GHz as seen by _Planck_. They were downloaded from the _Planck_ Legacy Archive website 1. The \(N_{side}=2048\) maps were cut, using the methodology described in Krachmalnicoff & Puglisi (2021), into square patches of 256\(\times\)256 pixels with a pixel size of 90 arcsec, implying patches covering \(6.4\times 6.4\) deg\({}^{2}\) each. Each signal in the patch is smoothed with Gaussian filters of 30, 25 and 20 arcmin, depending on the test dataset, in order to improve the low signal-to-noise ratio of each pixel due to the low sensitivity of the _Planck_ instrument. Footnote 1: [http://pla.esac.esa.int/pla/#home](http://pla.esac.esa.int/pla/#home) Each patch is formed by a lensed CMB signal, random white instrumental noise at _Planck_ levels, that is, 1.96, 1.17 and 1.75 \(\mu K_{CMB}\) at 100, 143 and 217 GHz respectively (added before smoothing the data), and Galactic and extragalactic foregrounds. The Galactic foregrounds are thermal dust and synchrotron emission. The first one, following the PLA documentation, is simulated with a realization of the Vansyngel et al. (2017) model at 353 GHz and extrapolated to the lower frequencies used in this work by using _Planck_ 2015 dust maps and _Planck_ 2013 dust spectral index maps. The synchrotron emission, also following the PLA documentation, is simulated by following a power law scaling with a spatially varying spectral index. Therefore, they are similar to the d6s1 foreground model of the Python Sky Model (PySM2, Thorne et al. (2017), Zonca et al. (2021)). The only extragalactic foreground taken into account is point sources, which are injected into each patch by following the total intensity C2E model from Tucci et al. (2011) and the software CORRSKY (Gonzalez-Nuevo et al. 2005), and extrapolating their polarization by assuming a log-normal distribution with \(\mu=1.0\) and \(\sigma=0.7\) parameters at 143 GHz from Bonavera et al. (2017). Footnote 2: [https://github.com/galsci/pysm](https://github.com/galsci/pysm) Additionally, we tested CENN against simulations with a different foreground model (see Section 4.4). In that case, the test dataset includes thermal dust with an emissivity varying spatially on degree scales and a synchrotron emission index steepening off the Galactic plane, which is exactly the d4s2 foreground model downloaded from the PySM. Therefore, we simulate three kinds of datasets for training and testing the network. The first one, used for training, is formed by 10000 simulations. The second one, formed by 1000 simulations, is used for validating the network during training. The best trained model obtained from this validation dataset (the one with the lowest loss) is then used for testing the network against new data. In particular, we perform four trainings: one at 30 arcmin resolution, one at 25 arcmin, one at 20 arcmin, and one at 5 arcmin without noise, having several validation and test sets for each case. An example of the input and label \(Q\) and \(U\) simulated patches at 143 GHz at 30 arcmin is shown in Fig. 1, first two columns. An example of the multi-frequency set of patches for the same simulation is shown in Fig. 2.
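For orientation, the following is a minimal sketch of how one such multi-frequency simulation could be assembled with PySM 3 and healpy. The "c1" lensed-CMB preset, the per-pixel interpretation of the noise levels, and the omission of the patch-cutting and point-source-injection steps (only indicated in comments) are assumptions made for illustration.

```python
import healpy as hp
import numpy as np
import pysm3
import pysm3.units as u

NSIDE = 2048
NOISE_UK = {100: 1.96, 143: 1.17, 217: 1.75}   # Planck white-noise levels (uK_CMB, per pixel)

# Lensed CMB ("c1", assumed preset) plus the d6s1-like Galactic foregrounds named above.
sky = pysm3.Sky(nside=NSIDE, preset_strings=["c1", "d6", "s1"])
maps = {}
for nu in (100, 143, 217):
    m = sky.get_emission(nu * u.GHz)
    m = m.to(u.uK_CMB, equivalencies=u.cmb_equivalencies(nu * u.GHz)).value
    m += np.random.normal(0.0, NOISE_UK[nu], m.shape)        # noise added before smoothing
    maps[nu] = hp.smoothing(m, fwhm=np.radians(30.0 / 60.0)) # 30 arcmin Gaussian beam
    # The (Q, U) planes maps[nu][1:] are then cut into 256x256 patches (90 arcsec
    # pixels) at |b| > 30 deg following Krachmalnicoff & Puglisi (2021), and
    # polarized point sources are injected separately with CORRSKY (not shown).
```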
## 3 Methodology

In this work, we have developed a methodology based on fully-convolutional neural networks for recovering the polarized CMB signal from noisy background patches of the sky. The details of how this kind of neural network is able to perform image segmentation on multi-frequency data are explained mathematically and physically in Goodfellow (2010) and Casas et al. (2022a) respectively, and the reader is encouraged to consult these works. The neural network is trained to read a set of three patches of the microwave sky and output a patch with only the CMB signal. The architecture, represented in Fig. 2, is used for training simultaneously both \(Q\) and \(U\) data. The network is formed by four convolutional blocks with 8, 2, 4 and 2 kernels of sizes 9, 9, 7 and 7, respectively. The number of filters is 8, 16, 64 and 128, respectively. Each layer has a subsampling factor of 2 and padding of type "Same", which adds space around the input data or the feature map in order to deal with possible losses in width and/or height of the feature maps after applying the filters. The convolutional blocks are connected to four deconvolutional ones to help the network predict low-level features by taking into account the high-level features previously inferred by the convolutional blocks, as explained in Wei et al. (2021) and Casas et al. (2022a). The deconvolutional blocks have 2, 2, 2 and 4 kernels of sizes 3, 5, 7 and 7, respectively. The number of filters is 64, 16, 8 and 1, with the same subsampling and padding type as the convolutional ones. In all layers, the activation function is leaky ReLU. In the first convolutional block, CENN reads three patches at 100, 143 and 217 GHz of the polarized microwave sky, splitting the information into its first 8 feature maps after convolving it with randomly initialized filters and weights. The information allowed to pass by the activation function is convolved by the next blocks. Then, three additional deconvolutional blocks, subsequently connected to the convolutional ones as shown in Fig. 2, segment the CMB signal from 128 small feature maps. A fourth deconvolutional block is added to reconstruct the CMB signal from 8 final feature maps into a patch of the same dimension as the input ones. In that final layer, a Mean Squared Error loss function \[MSE=\frac{1}{2}|y-y^{\prime}|^{2} \tag{1}\] is introduced, where \(y\) is a matrix formed by the predicted CMB pixel values and \(y^{\prime}\) is the CMB signal at 143 GHz. This function computes a loss, which is then used to estimate a gradient value, which is needed for updating both kernels and filters in each layer of the architecture using backpropagation and the AdaGrad optimizer. All the information forming the train dataset flows forward and backward through the network with a batch size of 8. Once 500 epochs are completed, the training is finished and the CMB can be disentangled from the total input sky patches.

Figure 1: Example of one simulation used for training CENN in \(Q\) (top row) and \(U\) (bottom row) polarization at 30 arcmin resolution. The left column shows the input patch formed by all the emissions, the second one represents only the CMB signal, used for minimizing the loss function during training. The third column shows the output CMB after validating CENN and the fourth one shows the residual patch, which is computed as the difference between input and output CMB. The units in all the patches are \(\mu K_{CMB}\).
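A simplified Keras sketch of such an encoder-decoder FCN is given below. It is only an approximation of CENN under stated assumptions: one convolutional layer per block, skip connections implemented as concatenations, and kernel sizes and filter counts taken from the description above; the exact internal layout of the blocks is not fully specified here.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters, kernel):
    x = layers.Conv2D(filters, kernel, strides=2, padding="same")(x)
    return layers.LeakyReLU()(x)

def deconv_block(x, filters, kernel):
    x = layers.Conv2DTranspose(filters, kernel, strides=2, padding="same")(x)
    return layers.LeakyReLU()(x)

inp = layers.Input(shape=(256, 256, 3))   # patches at 100, 143 and 217 GHz
c1 = conv_block(inp, 8, 9)                # 128x128x8
c2 = conv_block(c1, 16, 9)                # 64x64x16
c3 = conv_block(c2, 64, 7)                # 32x32x64
c4 = conv_block(c3, 128, 7)               # 16x16x128

d1 = deconv_block(c4, 64, 3)              # 32x32x64
d1 = layers.concatenate([d1, c3])         # skip connection (assumed)
d2 = deconv_block(d1, 16, 5)              # 64x64x16
d2 = layers.concatenate([d2, c2])
d3 = deconv_block(d2, 8, 7)               # 128x128x8
d3 = layers.concatenate([d3, c1])
out = layers.Conv2DTranspose(1, 7, strides=2, padding="same")(d3)  # 256x256x1 CMB

model = Model(inp, out)
model.compile(optimizer="adagrad", loss="mse")
# model.fit(x_train, y_train, batch_size=8, epochs=500, validation_data=(x_val, y_val))
```

Training both \(Q\) and \(U\) simultaneously can be done, for instance, by including patches of both in the same training set.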
## 4 Results

Once trained, CENN is able to read three patches of the sky and produce an output image with the clean CMB patch with the same dimensions as the training ones. In this section, the network is tested under conditions both similar to and different from the training ones. More particularly, in Section 4.1, the network is trained and tested with data smoothed at 30 arcmin. Section 4.2 shows the comparison between training the network at 25 arcmin and training at 30 arcmin, with both networks tested at 25 arcmin. Section 4.3 presents the same analysis but for training and testing at 20 arcmin resolution. It should be noted that increasing the resolution allows reaching higher multipoles, although it decreases the signal-to-noise ratio: using 30 arcmin allows us to reach \(l\sim 600\), a resolution of 25 arcmin allows us to achieve \(l\sim 700\) and 20 arcmin to get to about \(l\sim 800\). Section 4.4 shows how the model recovers the CMB at smaller angular scales by training and testing at 5 arcmin on _Planck_ data without instrumental noise, which can also be seen as an approximation to future CMB experiments. In the same section, we also change the foreground model of the simulations to test generalization.

### Training at 30 arcmin

Once the network has been trained at 30 arcmin, we apply it to a testing set of simulations, different from the training set. It outputs 1000 patches containing its prediction of the CMB. \(Q\) and \(U\) CMB simulated and output patches are then combined using NaMaster for analyzing the E- and B-mode power spectra. We use the methodology proposed in Krachmalnicoff & Puglisi (2021) to extract the patches for the test set. However, we realize that this procedure leaves a clear E-to-B leakage when it is applied to maps encoding CMB signal only. In fact, Krachmalnicoff & Puglisi (2021) applied it to Galactic emission maps and did not find leakage because the \(E\)- and \(B\)-mode signals are comparable, specifically for thermal dust. We find that the leakage is sensibly reduced for patches smaller than \(\Delta\theta\lesssim 7^{\circ}\), and we therefore decide to use square \(256\times 256\) patches. It should be noted that, due to the limited size of the patch, the signal at large scales (\(l<50\)) cannot be recovered with our methodology. In order to recover larger scales, it would be better to use approaches such as the one in Wang et al. (2022) and Yan et al. (2023), or to extend the neural network performance to all sky by using different methodologies such as those proposed in Krachmalnicoff & Tomasi (2019) or Perraudin et al. (2019). In order to represent the results, we have rebinned the power spectra between \(l=50\) and \(l=600\), 700 and 800, depending on the smoothing we use for testing the network (30, 25 and 20 arcmin, respectively). For each bin, the mean and the standard deviation are used to represent the signal and its uncertainty. The analysis presented in this work mainly attends to the residual power spectra, computed as the average of the power spectra of the residual patches (formed as the difference between input and recovered ones). More results about the difference between input and recovered signal are described in Appendix A. Fig. 3 shows the EE and BB power spectra of the residuals in black, compared against the input power spectra in blue. The respective uncertainties are represented as coloured areas. We also represent as grey dashed lines the highest instrumental noise level among the three channels seen by CENN when testing, corresponding to the 100 GHz channel (Planck Collaboration et al., 2018).
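A minimal flat-sky NaMaster (pymaster) sketch of this step is shown below; the unapodized mask and the binning are illustrative choices, not the paper's exact configuration:

```python
import numpy as np
import pymaster as nmt

# patch geometry: 256x256 pixels of 1.5 arcmin -> 6.4 deg on a side
lx = ly = np.radians(6.4)
mask = np.ones((256, 256))  # illustrative: no apodization

def ee_bb_spectra(q, u):
    # spin-2 flat-sky field; B-mode purification limits E-to-B leakage
    f = nmt.NmtFieldFlat(lx, ly, mask, [q, u], purify_b=True)
    l0 = np.arange(50.0, 800.0, 50.0)      # bin edges starting at l=50
    b = nmt.NmtBinFlat(l0, l0 + 50.0)
    w = nmt.NmtWorkspaceFlat()
    w.compute_coupling_matrix(f, f, b)
    cl_coupled = nmt.compute_coupled_cell_flat(f, f, b)
    cl = w.decouple_cell(cl_coupled)       # rows: EE, EB, BE, BB
    return b.get_effective_ells(), cl[0], cl[3]

# residual spectra from the difference between input and recovered CMB patches
# (q_in, u_in, q_out, u_out are illustrative (256, 256) arrays)
# ells, ee_res, bb_res = ee_bb_spectra(q_in - q_out, u_in - u_out)
```

The per-bin mean and standard deviation over the 1000 test patches then give the signal and its uncertainty as plotted in the figures.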
As shown in the left panel, CENN accurately recovers the E-mode, since the input signal is generally above the residuals. Moreover, at large scales (\(l<200\)), residuals are approximately \(10^{-1}\mu K^{2}\), decreasing to \(10^{-2}\mu K^{2}\) towards smaller scales (\(l>500\)). At smaller scales, residuals are below \(10^{-2}\mu K^{2}\). In this case, attending to the noise levels, residuals are mainly artifacts generated by CENN due to Galactic contamination. On the other hand, as shown in the right panel, the performance is similar for the B-mode: in that case, residuals are about \(2\times 10^{-3}\,\mu K^{2}\) for \(l<400\), decreasing to \(5\times 10^{-4}\mu K^{2}\) at \(l>500\), that is, generally one order of magnitude below the input signal, represented in blue. In fact, as shown, noise levels are above the residuals at all scales, showing that instrumental noise does not affect CENN when recovering the B-mode.

Figure 2: Architecture of CENN. It has four convolutional blocks connected to another four deconvolutional ones, which are trained to read 3 input images of the microwave sky and recover the CMB signal of the central frequency channel (in this case corresponding to 143 GHz).

### Training at 25 arcmin

After analyzing how CENN performs at 30 arcmin, we decided to study its performance when training at a lower resolution. In this section, we create new train and test datasets with 10000 and 1000 simulations, respectively, at 25 arcmin resolution. Firstly, we apply the network of the previous section, trained with 30 arcmin data, to these patches, and then we compare its performance with the network trained with 25 arcmin data. Figure 4 represents the EE and BB residual power spectra, with the E-mode on the left panel and the B-mode on the right one. Again, the comparison between input and recovered signals is described in Appendix A. We derive similar conclusions as in the previous case: E-mode residuals are generally lower than the input signal (in blue), with a value of about \(10^{-1}\,\mu K^{2}\) at \(l<200\), descending to \(10^{-2}\mu K^{2}\) towards smaller scales. The B-mode is also accurately recovered, especially at \(l<600\), where residuals are lower than the input signal. As shown, the network is more sensitive to noise (represented as a grey dashed line) at large (\(l<100\)) and small (\(500<l<700\)) scales. Furthermore, residuals are generally lower in both modes when training the network at 25 arcmin resolution with respect to the 30 arcmin case. Therefore, it seems to be better to train the network under the same conditions as for testing while the instrumental noise level is still lower than the input signal.

### Training at 20 arcmin

Based on the results shown in the last case, we decided to train the network with patches at 20 arcmin resolution, testing not only this network but also the previous ones (30 and 25 arcmin) at this resolution. Figure 5 shows the EE and BB power spectra residuals, with the E-mode on the left panel and the B-mode on the right one in both cases. As in the previous cases, the comparison between input and recovered signals is described in Appendix A. The E-mode residuals are slightly higher at this training resolution with respect to the lower resolution cases, especially at middle and smaller scales (\(l>400\)). At larger scales, 20 and 30 arcmin residuals are similar, while 25 arcmin ones are one order of magnitude higher than in those cases.
For the B-mode, residuals are about one order of magnitude above the CMB signal. However, although the noise levels are higher than the input signal, CENN recovers the B-mode for the other training cases with residuals one order of magnitude lower than the input. It seems that training at higher resolution allows partially better results to be obtained, at least until instrumental noise dominates the signal. In any case, based on these results, it seems that the network performs better when trained with lower noise levels, even if tested with higher ones.

### Training at 5 arcmin and testing against the d4s2 foreground model

Finally, we analyze the performance of CENN when the instrumental noise is negligible with respect to the CMB signal, which is likely to be the case for future CMB experiments. Moreover, we take the opportunity to assess CENN results when the foreground model is different from the one used in the network training. Therefore, we train CENN at 5 arcmin without noise, producing two testing datasets: one with the same foreground model as in the training, and another with the d4s2 foreground model, in order to study the performance of the network in recovering the E and B modes with a different microwave sky. After recovering \(Q\) and \(U\) CMB maps and applying NaMaster as in the previous cases, we plot in Figure 6 the EE and BB power spectra residuals, with the E-mode on the left panel and the B-mode on the right one.

Figure 3: EE and BB residuals after training CENN at 30 arcmin resolution and tested at the same resolution. Left panel: EE power spectra in the input simulations (blue line) and residual patch average power spectra from CENN at 30 arcmin (black line). Right panel: the same but for the B-mode. In all cases, coloured areas show the standard deviation of each bin, considered as the uncertainty of the model. The grey dashed line shows the highest instrumental noise level seen by CENN.

Figure 6 shows the residuals in black and brown for the train and d4s2 foreground models, respectively. Without the instrumental noise, there is a clear improvement in the CENN results, both in angular resolution and residual levels. In the E-mode, there is a contamination by the foregrounds, above \(1\,\mu K^{2}\), at large scales (\(l<200\)) in the residual patches. However, the performance improves towards smaller angular scales, reaching residuals between \(10^{-1}\) and \(10^{-2}\,\mu K^{2}\) for \(l>500\). This behavior is similar for both foreground models. For the B-mode, when testing with the same foreground model as the training dataset, CENN successfully recovers the B-mode signal with residuals of \(7\times 10^{-3}\,\mu K^{2}\). However, when testing with the different d4s2 foreground model, residuals increase to \(3\times 10^{-2}\mu K^{2}\); that is, CENN produces artifacts with residuals of the same order as the input signal.

## 5 Conclusions

In this work, we use a new methodology based on neural networks, called the cosmic microwave background extraction neural network (CENN), which showed good performance recovering the total intensity CMB signal in _Planck_-like simulations in a previous work (Casas et al. 2022a). Specifically, we re-train the neural network to recover the E- and B-mode CMB polarization signal at sub-degree scales. CENN is simultaneously trained with 10000 realistic simulations of \(Q\) and \(U\)_Planck_ maps consisting of 256\(\times\)256 pixel patches with a pixel size of 90 arcsec.
Each simulation is formed by three patches at 100, 143 and 217 GHz with the CMB signal, thermal dust and synchrotron simulations from the PLA, injected point sources and instrumental noise at _Planck_ levels. The CMB signal at 143 GHz of each simulation is also used as a label for minimizing the loss function during training. All signals were smoothed with different Gaussian filters depending on the test, in order to improve the signal-to-noise ratio of each pixel. The performance of the network is mainly analyzed by using the power spectra of the residual patch (input-output), while the comparison between input and output signals is presented in Appendix A. Firstly, CENN was trained with 30 arcmin resolution patches. The E-mode was recovered with residuals between \(10^{-1}\) and \(10^{-2}\mu K^{2}\), and the B-mode with residuals between \(2\times 10^{-3}\mu K^{2}\) at \(l<400\) and \(5\times 10^{-4}\mu K^{2}\) at \(l>400\). Secondly, we compared the network trained with 25 arcmin resolution data with the previous one, both tested with 25 arcmin patches at the same positions in the sky, obtaining similar residuals with respect to the previous case at large scales (\(l<200\)), while slightly lower at \(l>200\). On the other hand, the B-mode was slightly better recovered when trained with 25 arcmin patches. That is, the network is capable of learning smaller structures along the patch. Thirdly, we compared the network trained with 20 arcmin resolution data with both the 25 and 30 arcmin trained ones. In this case, the testing is done against 20 arcmin patches at the same locations in the sky as in the previous cases. The residuals from the networks trained at lower resolution are generally lower than the input signal at nearly all scales for the E-mode, while the 20 arcmin network presents residuals one order of magnitude higher than the input signal for the B-mode. Although the network is capable of learning smaller structures when trained with higher resolution data, noise levels are crucial during the training procedure. Finally, we trained the network with 5 arcmin resolution simulations without instrumental noise, but varying the foreground model. There is a clear improvement in the CENN results in terms of angular resolution and residual levels. In this case, we found that a different sky does not vary the performance of the neural network for the E-mode, obtaining similar residuals in both cases.

Figure 4: EE and BB residuals after training CENN at 25 and 30 arcmin resolution and tested at 25 arcmin. Left panel: EE power spectra in the input simulations (blue line), residual patch average power spectra from CENN at 25 (black line) and 30 arcmin (brown line). Right panel: the same but for the B-mode. In all cases, coloured areas show the standard deviation of each bin, considered as the uncertainty of the model. The grey dashed line shows the highest instrumental noise level seen by CENN.

Figure 5: EE and BB residuals after training CENN at 20, 25 and 30 arcmin resolution and tested at 20 arcmin. Left panel: EE power spectra in the input simulations (blue line), residual patch average power spectra from CENN at 20 (black line), 25 (brown line) and 30 arcmin (yellow line). Right panel: the same but for the B-mode. In all cases, coloured areas show the standard deviation of each bin, considered as the uncertainty of the model. The grey dashed line shows the highest instrumental noise level seen by CENN.
However, we found a worse performance for the B-mode recovery, since the network introduced artifacts of the order of the input signal. With these results on CMB recovery in polarization with neural networks, we can firstly conclude that fully-convolutional neural networks seem to be better at recovering structures at small scales than at middle and large scales. In all cases, but especially when we do not apply any smoothing, the network has enough information in a small patch of the sky to keep improving its performance when analyzing smaller scales. Previous works (Casas et al. 2022a, Casas et al. 2022b and Casas et al. 2023) present similar conclusions. As previously mentioned, future work extending the application of neural networks to all sky should be done in order to apply them in experiments such as LiteBIRD. The current CENN is better suited for experiments working with partial sky coverage, such as ACT, SPT, SO, POLARBEAR or CMB-S4. In any case, the evolution of the current CENN approach should be to use bigger patches. With the methodology used in this work, we cannot use bigger patches without having an E-to-B leakage. Moreover, with our patch sizes, we can only establish upper limits for the B-mode, at least at the resolutions used in this work. However, following other works which recover the CMB signal, we must increase the patch size in order to overcome this limitation, which we will deal with in the future. We also found that instrumental noise is the contaminant most easily separated from the CMB, at least whenever the noise levels are below the signal. In fact, as seen in Sect. 4.2 (training at 25 arcmin), CENN trained and applied at higher resolutions can be used as long as the noise level is reasonable with respect to the signal. When the noise becomes important, as in Sect. 4.3 (training at 20 arcmin), it is better to train the network in good conditions, i.e. at lower resolutions, to get better results. This also affects the recovery of large scales, as seen in the comparison between 20, 25 and 30 arcmin training. When training with lower levels of instrumental noise, although tested with higher contamination, the recovery is more accurate than when training under the same conditions as during testing if instrumental noise dominates the signal. This fact could be relevant for a future study constraining the tensor-to-scalar ratio with neural networks. Based on these results, it seems to be better to train a neural network without noise, even if noise is included in the patches of the testing set. This seems reasonable since we think a neural network might get "confused" by the presence of noise while optimizing its weights. In any case, this should be tested in future works in order to start using neural networks for systematics cleaning, which could be extremely relevant for future CMB experiments searching for the primordial B-mode signal. Another relevant aspect is that, as expected, B-mode recovery is sensitive to the use of a different foreground model. In order to mitigate this limitation, one can train various networks with different foreground models before their application to the data, and then analyze and compare the outputs from the different networks, for example, in terms of power spectra.

###### Acknowledgements.

JMC, LB, JGN, MMC and DC acknowledge financial support from the PID2021-125630NB-I00 project funded by MCIN/AEI/10.13039/501100011033 / FEDER, UE.
JMC also acknowledges financial support from the SV-PA-21-AYUD/2021/51301 project. LB also acknowledges the CNS2022-135748 project funded by MCIN/AEI/10.13039/501100011033 and by the EU "NextGenerationEU/PRTR". GP acknowledges [...], CB acknowledges [...], CGC and FJDC acknowledge financial support from the PID2021-127318/18-000 project. The authors thank Evan Allys, Jens Chluba, Brandon S. Hensley, Niall Jeffrey, Nicoletta Krachmalnicoff and Bruce Partridge for valuable conversations and comments. This research has made use of the python packages Matplotlib (Hunter, 2007), Pandas (Wes McKinney, 2010), Keras (Chollet, 2015) and NumPy (Oliphant, 2006), as well as the HEALPix (Górski et al., 2005) and Healpy (Zonca et al., 2019) packages.

Figure 6: EE and BB residuals after training CENN at 5 arcmin resolution without noise. Left panel: EE power spectra in the input simulations (blue line), residual patch average power spectra from CENN for d6s1 (black line) and d4s2 (brown line) foreground models. Right panel: the same but for the B-mode. In all cases, coloured areas show the standard deviation of each bin, considered as the uncertainty of the model.
2306.13203
Neural Network Pruning for Real-time Polyp Segmentation
Computer-assisted treatment has emerged as a viable application of medical imaging, owing to the efficacy of deep learning models. Real-time inference speed remains a key requirement for such applications to help medical personnel. Even though there generally exists a trade-off between performance and model size, impressive efforts have been made to retain near-original performance by compromising model size. Neural network pruning has emerged as an exciting area that aims to eliminate redundant parameters to make the inference faster. In this study, we show an application of neural network pruning in polyp segmentation. We compute the importance score of convolutional filters and remove the filters having the least scores, which to some value of pruning does not degrade the performance. For computing the importance score, we use the Taylor First Order (TaylorFO) approximation of the change in network output for the removal of certain filters. Specifically, we employ a gradient-normalized backpropagation for the computation of the importance score. Through experiments in the polyp datasets, we validate that our approach can significantly reduce the parameter count and FLOPs retaining similar performance.
Suman Sapkota, Pranav Poudel, Sudarshan Regmi, Bibek Panthi, Binod Bhattarai
2023-06-22T21:03:50Z
http://arxiv.org/abs/2306.13203v1
# Neural Network Pruning for Real-time Polyp Segmentation

###### Abstract

Computer-assisted treatment has emerged as a viable application of medical imaging, owing to the efficacy of deep learning models. Real-time inference speed remains a key requirement for such applications to help medical personnel. Even though there generally exists a trade-off between performance and model size, impressive efforts have been made to retain near-original performance by compromising model size. Neural network pruning has emerged as an exciting area that aims to eliminate redundant parameters to make the inference faster. In this study, we show an application of neural network pruning in polyp segmentation. We compute the importance score of convolutional filters and remove the filters having the least scores, which, up to some level of pruning, does not degrade the performance. For computing the importance score, we use the Taylor First Order (TaylorFO) approximation of the change in _network output_ for the removal of certain filters. Specifically, we employ a gradient-normalized backpropagation for the computation of the importance score. Through experiments on the polyp datasets, we validate that our approach can significantly reduce the parameter count and FLOPs while retaining similar performance.

Keywords: Polyp Segmentation, Real-time Colonoscopy, Neural Network Pruning

## 1 Introduction

Polyp segmentation [7, 6, 37] is a crucial research problem in the medical domain involving dense classification. The primary aim of segmenting polyps in colonoscopy and endoscopy is to identify pathological abnormalities in body parts such as the colon, rectum, etc. Such abnormalities can potentially lead to adverse effects causing colorectal cancer, thus inviting fatal damage to health. Statistics show that between 17% and 28% of colon polyps are overlooked during normal colonoscopy screening procedures, with 39% of individuals having at least one polyp missed, according to several recent studies [27, 22]. However, timely diagnosis of a polyp can lead to timely treatment. It has been calculated that a 1% improvement in the polyp detection rate reduces colorectal cancer by 3% [3]. Realizing the tremendous upside of early polyp diagnosis, medical AI practitioners have been trying to utilize segmentation models to assist clinical personnel. However, the latency of large segmentation models has been the prime bottleneck for successful deployment. Utilizing smaller segmentation models is an option, but doing so compromises the performance of the model. In the case of bigger models, there is a good chance the model learns significant redundancies, leaving room for compression without loss in performance. In such a scenario, we can prune the parameters of the model to reduce its size for the inference stage. Neural network pruning has established itself as an exciting area to reduce the inference time of larger models. Neural network pruning [26, 14, 10, 2] is one of the methods to reduce the parameter, compute, and memory requirements. This method differs significantly from knowledge distillation [16, 12], where a small model is trained to produce the output of a larger model. Neural network pruning is performed at multiple levels: (i) weight pruning [35, 14, 13] removes individual parameters, while (ii) neuron/channel pruning [43, 24] removes whole neurons or channels, and (iii) block/group pruning [11, 25] removes whole blocks of the network, such as a residual block or sub-network.
Weight pruning generally achieves a very high pruning ratio, retaining similar performance with only a few percent of the parameters. This allows high network compression and accelerates the network on specialized hardware and CPUs. However, weight pruning in a defined format such as N:M block-sparse helps improve the performance on GPUs [29]. Pruning the network at the level of neurons or channels helps reduce the parameters while keeping similar performance; however, the pruning ratio is not as high. All these methods can also be applied to the same model. In this work, we are particularly interested in neuron-level pruning. Apart from the benefits of reduced parameters, memory, and computation time (or FLOPs), neuron- or channel-level pruning has the advantage that the number of neurons in a neural network is small compared to the number of connections, so neurons can easily be pruned by measuring their global importance [26, 15, 34, 28, 44]. We focus on global importance as it removes the need to inject bias about the number of neurons to prune in each layer. This simplifies our problem to removing less significant neurons globally, allowing us to extend the method to differently organized networks such as VGG, ResNet, UNet or any other architecture. However, in this work, we focus only on the layer-wise, block-wise and hierarchical architecture of UNet [38]. Our experiment on the Kvasir segmentation dataset using the UNet model shows that we can successfully prune \(\approx\)1K neurons, removing \(\approx\)14% of the parameters and reducing the FLOPs requirement to \(\approx\)0.5\(\times\) that of the original model with approximately the same performance as the original (from 0.59 IoU to 0.58 IoU). That is half the computational requirement of the original model with negligible performance loss.

## 2 Related works

### Real-time Polyp Segmentation

Convolution-based approaches [38, 46, 30] have mostly dominated the literature, while recently attention-based models [6, 23] have also been gaining traction in polyp segmentation. A number of works have also been done in real-time settings. One of the earliest works [39], evidencing the ability of deep learning models for real-time polyp detection, achieved 96% accuracy in screening colonoscopy. Another work [41], utilizing a multi-threaded system in a real-time setting, showed the ability of deep learning models to process at 25 fps with 76.80 \(\pm\) 5.60 ms latency. Specialized architectures for polyp segmentation accounting for real-time performance have also been studied in the medical imaging literature. MSNet [45] introduced a subtraction unit, instead of the usual addition used in many works such as UNet [38] and UNet++ [46], performing inference on 352\(\times\)352 inputs at 70 fps. Moreover, NanoNet [21] introduced a novel architecture tailor-made for real-time polyp segmentation, primarily relying on a lightweight model and hence compromising the learning capacity. SANet [42] has been shown to achieve strong performance with an inference speed of about 72 FPS. It showed that samples collected under different conditions exhibit inconsistent colors, causing a feature distribution gap and overfitting issues. Another work [36] used 2D Gaussian maps instead of binary maps to better detect flat and small polyps, which have unclear boundaries.

### Neural Network Pruning

Works on pruning have somewhat lagged behind in medical imaging as compared to other domains. A recent work [1] has focused its study on reducing the computational cost of model retraining after post-pruning.
DNNDeepening-Pruning [8] proposed a two-stage model development algorithm to build a small model. In the first stage, residual layers are added until overfitting starts, and in the second stage, the model is pruned with some user instructions. Furthermore, [9] demonstrated evolution-strategy-based pruning in a generative adversarial network (GAN) framework for medical imaging diagnostics. In biomedical image segmentation, [19] applied a pruning strategy to the U-Net architecture, achieving a 2\(\times\) speedup while trading off a mere 2% loss in mIoU (mean Intersection over Union) on the PhC-U373 and DIC-HeLa datasets. STAMP [4] tackles the low-data regime through online simultaneous training and pruning, achieving better performance with a UNet model of smaller size compared to the unpruned one. In histological images, the superiority of layer-wise pruning and of network-wide magnitude pruning has been shown for smaller and larger compression ratios, respectively [32]. For medical image localization tasks, pruning has also been used to automatically and adaptively identify hard-to-learn examples [18]. In our study, we make use of pruning to reduce the model's parameters. Previous works showed that global importance estimation can be computed using one or all of the forward (activation) [17], parameter (weight) [14] or backward (gradient) [40, 31, 5] signals. Some of the previous techniques use feature importance propagation [44] or gradient propagation [28] to find the neuron importance. Others use both activation and gradient information for pruning [34, 33]. Although there are methods using such signals for pruning at initialization [40], we limit our experiments to the pruning of trained models for a given number of neurons. In this work, we use an importance metric similar to the Taylor First Order (Taylor-FO) approximations [34, 33], but derived from heuristics combining both the forward signal, namely the pre-activation of the neuron, and the backward signal, the gradient. We use a normalized gradient signal to make the contribution of each example similar when computing the importance score.

## 3 Methodology

In this section, we discuss the pruning method in detail and its application to polyp segmentation tasks, specifically focusing on the UNet architecture; however, it can be applied to other architectures as well. Instead of pruning all layers, we specifically target the convolutional layers for pruning. It is important to note that the term 'neurons' refers to the channels in the context of pruning convolutional layers. Furthermore, we present a method to select the pruned model that is best suited for the task at hand.

Figure 1: Left: Unpruned UNet model. Right: Model after pruning convolution filters with low importance scores. _The exact number of pruned filters is 956, extracted from the experiment shown in Fig. 2 (top)._

### Pruning Method

Previous works on global importance-based post-training pruning of neurons focus on using forward and backward signals. Since most of these methods are based on a Taylor approximation of the change in loss after removing a neuron or group of parameters, they require input and target values for computing the importance. Instead, we tackle the problem of pruning from the perspective of the overall function output, without considering the loss. **Forward Signal:** The forward signal is generally given by the pre-activation \((x_{i})\).
If a pre-activation is zero, then it has no impact on the output of the function, i.e. the output deviation with respect to the removal of the neuron is zero. If the incoming connections of a neuron have zero weights, then the neuron can be removed, i.e. it has no significance. If the incoming connections are non-zero, then the neuron has significance. The forward signal takes into consideration how the data affect a particular neuron. **Backward Signal:** The backward signal is generally given by back-propagating the loss. If the outgoing connections of the neuron are zeros, then the neuron has no significance to the function, even if it has positive activation. The gradient \((\delta x_{i})\) provides us with information on how the function or loss will change if the neuron is removed. **Importance Metric:** Combining the forward and backward signals, we can obtain the influence of the neuron on the loss or the function for given data. Hence, the importance metric \((I_{i})\) of each neuron \((n_{i})\) for a dataset of size \(M\) is given by \(I_{i}=\frac{1}{M}\sum_{n=1}^{M}x_{i}\cdot\delta x_{i}\), where \(x_{i}\) is the pre-activation and \(\delta x_{i}\) is its gradient. It fulfills the criterion that importance should be low if incoming or outgoing connections are zeros and higher otherwise. _Problem 1:_ This importance metric \((I_{i})\) is similar to Taylor-FO [34]. However, the metric gives low importance when the gradient is negative, which, for our application, is a problem, as the function will still change significantly even if removing the neuron lowers the loss. Hence, we square the importance metric to make it positive. The squared importance metric \((I_{i}^{s})\) is computed as below: \[I_{i}^{s}=\frac{1}{M}\sum_{n=1}^{M}\left(x_{i}\cdot\delta x_{i}\right)^{2}\] _Problem 2:_ During the computation of the gradients, some input examples produce gradients of higher magnitude and some of lower magnitude. Since the magnitude is crucial for computing the importance, different inputs contribute differently to the overall importance score. To this end, we normalize the gradient to unit magnitude, which makes the contribution of each data point equal when computing the importance. **Pruning Procedure:** Consider that pruning is performed using a dataset \(\mathbf{D}\in[\mathbf{x}_{0},\mathbf{x}_{1},...\mathbf{x}_{N}]\) of size \(N\). We have a Convolutional Neural Network (CNN) whose output is given by: \(\mathbf{y}_{n}=f_{CNN}(\mathbf{x}_{n})\). We first compute the gradient w.r.t. \(\mathbf{y}_{n}\) for all \(\mathbf{x}_{n}\) for a given target \(\mathbf{t}_{n}\) as: \[\Delta\mathbf{y}_{n}=\frac{\delta E(\mathbf{y}_{n},\mathbf{t}_{n})}{\delta \mathbf{y}_{n}}\] We then normalize the gradient \(\Delta\mathbf{y}_{n}\) as: \[\Delta\mathbf{\hat{y}}_{n}=\frac{\Delta\mathbf{y}_{n}}{\left\|\Delta\mathbf{y }_{n}\right\|}\] This gradient \(\Delta\mathbf{\hat{y}}_{n}\) is then backpropagated through the \(f_{CNN}\) network to compute the squared importance score (\(I_{i}^{s}\)) of each convolution filter.

### Pruning UNet for Polyp-Segmentation

UNet [38] is generally used for image segmentation tasks. It consists of only convolutional layers, including upsampling and downsampling layers, organized in a hierarchical structure as shown in Figure 1. We compute the importance score for each convolutional layer and prune the least important filters. Removing a single convolution filter removes a channel of the incoming convolution layer and the outgoing convolution channel.
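The procedure above can be sketched in PyTorch as follows. This is our reading of the method, with all helper names invented here; it treats the output of each `Conv2d` as the pre-activation \(x_{i}\), which holds when activations are separate modules, as in typical UNet implementations:

```python
import torch
import torch.nn as nn

def channel_importance(model, loader, loss_fn, device="cpu"):
    """Squared, gradient-normalized Taylor-FO scores per Conv2d channel (a sketch)."""
    scores, handles = {}, []

    def fwd_hook(module, inp, out):
        module._cached_out = out.detach()  # pre-activation x_i (Conv2d output)

    def make_bwd_hook(name):
        def bwd_hook(module, grad_in, grad_out):
            x, g = module._cached_out, grad_out[0]       # both (B, C, H, W)
            contrib = (x * g).sum(dim=(2, 3)) ** 2       # (x_i . dx_i)^2 per sample, channel
            scores[name] = scores.get(name, 0.0) + contrib.sum(dim=0)
        return bwd_hook

    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d):
            handles.append(m.register_forward_hook(fwd_hook))
            handles.append(m.register_full_backward_hook(make_bwd_hook(name)))

    n_seen = 0
    for x, t in loader:
        x, t = x.to(device), t.to(device)
        y = model(x)
        # gradient of the loss w.r.t. the network output ...
        grad_y = torch.autograd.grad(loss_fn(y, t), y, retain_graph=True)[0]
        # ... normalized to unit magnitude per example (Problem 2)
        grad_y = grad_y / grad_y.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        model.zero_grad(set_to_none=True)
        y.backward(grad_y)  # triggers the backward hooks
        n_seen += x.shape[0]

    for h in handles:
        h.remove()
    return {k: v / n_seen for k, v in scores.items()}
```

A single call then yields per-channel scores for every convolutional layer, from which any number of least-important channels can be selected globally.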
When many such filters are pruned, we can obtain a highly pruned UNet with only a slight change in performance. This method can be used to drastically reduce the computation and memory requirements without degrading the performance, even without fine-tuning the pruned model. A single computation of the importance score allows us to prune varying numbers of neurons and select the sparsity with the best FLOPs (or time-taken) vs. IoU trade-off.

### Measuring Pruning Performance

Performance metrics are crucial for measuring the effectiveness of different pruning algorithms. Some of them are listed below (a minimal measurement sketch is given at the end of this section). **FLOPs:** FLOP stands for floating-point operation, i.e., a mathematical operation performed on floating-point numbers. FLOPs measure model complexity, with a higher value indicating a computationally expensive model and a lower value indicating a computationally cheaper model with faster inference time. We evaluate an algorithm's efficiency by how many FLOPs it reduces. **Parameters:** Parameters represent the learnable weights and biases, typically represented by floating-point numbers. Models with many parameters need a lot of memory, while models with fewer parameters need less. The effectiveness of a pruning algorithm is measured by the reduction in the model's parameters. **Time-taken:** This is the actual wall-clock inference time of the model. We measure the time taken before and after pruning the network. Time-taken is practical but not the most reliable metric for efficiency gain, as it may vary across devices and ML frameworks.

## 4 Experiments

We conduct the experiments for polyp-segmentation model pruning using the Kvasir dataset [20]. We use a pretrained UNet model for segmentation and prune the convolutional filters of the network to reduce the computational cost, as shown in Figure 2. **Procedure:** First, we compute the importance score for each neuron/channel on a given dataset. Secondly, we prune the \(P\) least important neurons of the total \(N\) according to the importance metric (\(I^{s}\)) given by our method. We measure the resulting accuracy and plot it against the number of neurons pruned, as shown in Figure 2. The pruning is performed using one split of the test dataset, and the IoU is measured on another split. Although the pruned models could be fine-tuned to regain performance, we do not fine-tune the pruned model in our case. We analyse the change in performance (IoU), the efficiency achieved (FLOPs) and the compression (the number of parameters) for different values of the _number-of-neurons-pruned_ in the UNet model. **Observation:** The experiments show that the model generally contains redundant and less important convolutional channels, which can be pruned with little to no effect on the output of the model. We see in Figure 2 (left) that about 50% (\(\approx\)1500 out of 2944) of the neurons can be pruned before the IoU starts to decrease drastically. Furthermore, this result is observed for varying numbers of data points, which suggests that pruning under different settings creates different pruned architectures, while still following the same pattern of performance retention with an increasing number of pruned neurons (up to some point).
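A minimal sketch of the parameter-count and time-taken measurements, together with the global selection of the \(P\) least important channels, could look as follows (illustrative names; `channel_importance` refers to the sketch in Section 3; FLOPs are usually counted with an external profiler, e.g. a library such as fvcore or ptflops, which we do not show here):

```python
import time
import torch

def count_parameters(model):
    return sum(p.numel() for p in model.parameters())

@torch.no_grad()
def time_taken(model, batch, n_batches=10):
    """Wall-clock inference time for n_batches forward passes."""
    model.eval()
    t0 = time.perf_counter()
    for _ in range(n_batches):
        model(batch)
    return time.perf_counter() - t0

# scores = channel_importance(model, prune_loader, loss_fn)   # see earlier sketch
# all_scores = torch.cat(list(scores.values()))
# thresh = torch.kthvalue(all_scores, P).values                # P least important overall
# keep_masks = {name: s > thresh for name, s in scores.items()}
# (physically removing the masked channels, e.g. with a structured-pruning
#  library such as torch-pruning, then reduces parameters, FLOPs and time.)
```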
Figure 2: **(Top)** row: Number of Neurons Pruned vs IoU and Parameters. **(Bottom)** row: Number of Neurons Pruned vs Time-taken and Giga-FLOPs. Here, Time-taken is measured in seconds for the inference of 100 samples with a batch size of 10. **(Left)** column: pruning performance using 39 data samples for importance estimation. A sample pruning of 956 neurons reduces the FLOPs to 0.477\(\times\) and the parameters to 0.864\(\times\) while retaining 0.99\(\times\) the original performance (\(\approx\)0.5795 IoU). The time taken is reduced by \(\approx\)30%. **(Right)** column: pruning performance using 235 data samples for importance estimation. A sample pruning of 736 neurons reduces the FLOPs to 0.54\(\times\) and the parameters to 0.922\(\times\) while retaining the same performance (\(\approx\)0.5879 IoU). Here, we manage to reduce the time taken by \(\approx\)26%.

The qualitative evaluation (see Figure 3) of the pruned UNet model on the polyp-segmentation dataset shows that the pruned model introduces only slight changes relative to the output of the unpruned model, while preserving most of the important characteristics. We find that these slight changes can be improvements or degradations of the original model outputs, but without significantly distorting the output.

Figure 3: Qualitative comparison of polyp segmentation before and after pruning of the UNet model. The pruned model samples are generated from the experiment in Fig. 2 (left) with 956 neurons pruned.

## 5 Conclusion

In this work, we propose to use neuron-level pruning for the polyp segmentation task for the first time. The benefit of the proposed channel or filter pruning can be realized immediately with parallel hardware like GPUs, significantly reducing the computation cost to less than 50% without degrading the performance. Such a reduction in computational cost automatically leads to potential applications in real-time settings. Computer-assisted treatment of patients, especially during medical procedures like colonoscopy, requires low latency with satisfactory performance so that the pace of treatment is not hindered. Since polyps can exhibit significant variability during colonoscopy, real-time polyp segmentation models can indeed provide medical personnel with useful insights for locating abnormal growths in the colon, thereby assisting early diagnosis. Moreover, the advanced visualizations aided by real-time diagnosis can help determine appropriate treatment approaches, and also allow safe, methodical, and consistent diagnosis of patients. Our work paves the path for off-the-shelf models to be significantly accelerated through neural network pruning in tasks requiring fast inference, such as medical imaging, reducing inference and storage costs. To sum up, in this work we explore a promising research direction of neural network pruning, demonstrating its efficacy in polyp segmentation. We validate our approach to neural network pruning with various experiments while almost retaining the original performance.

## 6 Acknowledgement

This work is partly funded by the EndoMapper project by Horizon 2020 FET (GA 863146).
2308.09907
Imputing Brain Measurements Across Data Sets via Graph Neural Networks
Publicly available data sets of structural MRIs might not contain specific measurements of brain Regions of Interests (ROIs) that are important for training machine learning models. For example, the curvature scores computed by Freesurfer are not released by the Adolescent Brain Cognitive Development (ABCD) Study. One can address this issue by simply reapplying Freesurfer to the data set. However, this approach is generally computationally and labor intensive (e.g., requiring quality control). An alternative is to impute the missing measurements via a deep learning approach. However, the state-of-the-art is designed to estimate randomly missing values rather than entire measurements. We therefore propose to re-frame the imputation problem as a prediction task on another (public) data set that contains the missing measurements and shares some ROI measurements with the data sets of interest. A deep learning model is then trained to predict the missing measurements from the shared ones and afterwards is applied to the other data sets. Our proposed algorithm models the dependencies between ROI measurements via a graph neural network (GNN) and accounts for demographic differences in brain measurements (e.g. sex) by feeding the graph encoding into a parallel architecture. The architecture simultaneously optimizes a graph decoder to impute values and a classifier in predicting demographic factors. We test the approach, called Demographic Aware Graph-based Imputation (DAGI), on imputing those missing Freesurfer measurements of ABCD (N=3760) by training the predictor on those publicly released by the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA, N=540)...
Yixin Wang, Wei Peng, Susan F. Tapert, Qingyu Zhao, Kilian M. Pohl
2023-08-19T05:03:35Z
http://arxiv.org/abs/2308.09907v1
# Imputing Brain Measurements Across Data Sets via Graph Neural Networks

###### Abstract

Publicly available data sets of structural MRIs might not contain specific measurements of brain Regions of Interest (ROIs) that are important for training machine learning models. For example, the curvature scores computed by Freesurfer are not released by the Adolescent Brain Cognitive Development (ABCD) Study. One can address this issue by simply reapplying Freesurfer to the data set. However, this approach is generally computationally and labor intensive (e.g., requiring quality control). An alternative is to impute the missing measurements via a deep learning approach. However, the state-of-the-art is designed to estimate randomly missing values rather than entire measurements. We therefore propose to re-frame the imputation problem as a prediction task on another (public) data set that contains the missing measurements and shares some ROI measurements with the data sets of interest. A deep learning model is then trained to predict the missing measurements from the shared ones and afterwards is applied to the other data sets. Our proposed algorithm models the dependencies between ROI measurements via a graph neural network (GNN) and accounts for demographic differences in brain measurements (e.g. sex) by feeding the graph encoding into a parallel architecture. The architecture simultaneously optimizes a graph decoder to impute values and a classifier to predict demographic factors. We test the approach, called _D_emographic _A_ware _G_raph-based _I_mputation (_DAGI_), on imputing those missing Freesurfer measurements of ABCD (N=3760; minimum age 12 years) by training the predictor on those publicly released by the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA, N=540). 5-fold cross-validation on NCANDA reveals that the imputed scores are more accurate than those generated by linear regressors and deep learning models. Adding them to a classifier trained in identifying sex also results in higher accuracy than only using those Freesurfer scores provided by ABCD.

Keywords: Brain measurements, Feature imputation, Graph representation learning.

## 1 Introduction

Neuroscience heavily relies on ROI measurements extracted from structural magnetic resonance imaging (MRI) to encode brain anatomy [24]. However, public releases of brain measurements might not contain those that are important for a specific task. For example, the Freesurfer scores [6] publicly released by the ABCD study do not contain curvature measurements of cortical regions [3], which might be useful for identifying sex differences. While one could theoretically reapply the Freesurfer pipeline to generate those missing measurements, it requires substantial computational resources and manual labor, as, for example, the Freesurfer scores from thousands of MRIs would have to be quality controlled. A more efficient solution is to learn to impute missing brain measurements from the existing ones. Imputation involves estimating or filling in missing or incomplete data values based on the available data, thereby creating a complete dataset suitable for further analysis or modeling. Examples of popular approaches for imputing measurements are MICE [26] and k-nearest neighbors [8]. The state-of-the-art in this domain relies on deep learning models, such as generative autoencoders [25] or graph convolutional networks [28, 23].
However, such methods assume that missing values are randomly distributed within a matrix capturing all measurements of a data set (refer to Figure 1 (a)). If each column represents a measurement, estimating missing values in a column then partly relies on rows (or samples) for which that measurement exists. Here we aim to solve the issue that an entire column does not contain any values (Figure 1 (b)), i.e., some specific measurements are absent throughout an entire dataset. One could address this issue by combining the data set with the missing values with one that contains them, which then relates to the scenario in Figure 1 (a). However, the imputation now explicitly depends on the data set with the missing scores, so if that data set is updated (e.g., the yearly ABCD releases), so do all imputations, which could result in scores conflicting with those imputed based on earlier versions of the data set. We instead address this challenge by re-framing the imputation problem as a prediction task on a single (public) data set, such as NCANDA [1], that contains the missing measurements and shares some ROI measurements with the data set of interest. A deep learning model can then be trained on NCANDA to predict the curvature scores from the measurements that are shared with ABCD. Afterwards, the trained model is applied to ABCD (or other data sets that share those scores) to predict the missing curvature scores. Consequently, our primary objective is to determine the most accurate mapping from the currently available shared measurements to the missing ones. Measurements of the same ROI (e.g., cortical thickness and volume) are highly dependent, and measurements of adjacent regions are more likely to be correlated than those from distant regions [16, 10]. To explicitly account for such dependencies, our prediction model is based on a graph neural network (GNN) [22] called the Graph Isomorphism Network [29]. In our graph, each node represents an ROI and adjacent ROIs are connected via edges. In addition to modeling the adjacency of ROIs, our prediction model also accounts for the dependencies between demographic factors and ROI measurements [11, 21]. For example, women tend to have higher gyrification in frontal and parietal regions than men, which results in the curvature of those ROIs being different between the sexes [13]. We account for this difference by feeding the GNN encodings into a parallel architecture that simultaneously optimizes a graph decoder for imputing values and a classifier for identifying sex. We apply our approach, called _D_emographic _A_ware _G_raph-based _I_mputation (_DAGI_), to impute Freesurfer measurements that are available in the NCANDA data set but are missing in ABCD (i.e., "mean curvature" and "Gaussian curvature") by explicitly taking advantage of those that are shared among them (i.e., "average thickness", "surface area" and "gray matter volume"). Using 5-fold cross-validation, we then show on NCANDA that the accuracy of the imputed scores is significantly higher than that of scores generated by linear regressors and deep learning models. Furthermore, we identify the brain ROIs important in the imputation task by visualizing the learned graph structure via GNNExplainer [30]. On the ABCD data set, adding the scores to a classifier identifying sex results in significantly higher accuracy than only using those provided by ABCD or using those imputed by combining the ABCD with the NCANDA data set (Figure 1 (a)).
## 2 Method

Let us assume that the first data set is represented by a matrix \(X^{1}\in\mathbb{R}^{v\times d}\) containing the cortical measurements of \(v\) regions, where \(d\) cortical measurements \(X_{i}\in\mathbb{R}^{d}\) are extracted from each region \(i\). Furthermore, let \(X^{2}\in\mathbb{R}^{v\times p}\) be the data matrix of the second data set, which is based on the same parcellation but contains a different set of measurements for each region, of which \(p(<d)\) are also found in \(X^{1}\). Let \(X^{o}_{i}\in\mathbb{R}^{1\times p}\) be the \(p\) measures shared across datasets, and \(X^{m}_{i}\in\mathbb{R}^{1\times q}\) be the remaining \(q\) measurements only available in \(X^{1}\). Thus, \(X^{1}\) can be divided into \(X^{O}=[X^{o}_{1},...,X^{o}_{v}]^{T}\) and \(X^{M}=[X^{m}_{1},...,X^{m}_{v}]^{T}\). Our goal is to learn an imputation mapping \(X^{O}\to X^{M}\) so that we can impute the missing measurements of the second data set. To generate an accurate mapping, we first design a GNN implementation that accounts for dependencies among brain ROI measurements and, in parallel, considers demographic variations (e.g. sex) within those ROI measurements via a classifier.

Figure 1: Scenarios of missing values: (a) missing values are randomly distributed across the data set, or (b) specific measurements are absent from a data set, which is the problem we aim to solve here.

### 2.1 Graph-based Imputation

We view the \(v\) regions as the nodes of a graph with \(X^{O}\) as node features. To capture adjacency among cortical ROIs and simplify training, we construct a sparse graph by adding an edge between two brain regions if they share a boundary on the cortical surface. This undirected graph with \(v\) nodes is then encoded by an "adjacency matrix" \(\mathbf{A}\in\mathbb{R}^{v\times v}\), where \(\mathbf{A}_{ij}\) is 1 if and only if nodes \(i\) and \(j\) are connected. As \(\mathbf{A}\) does not change across subjects, each subject is encoded by the graph \(G=<X^{O},\mathbf{A}>\), whose node features are the subject-specific measurements \(X^{O}\). Given a graph \(G\), we aim to learn its encoding into node embeddings \(h_{G}\in\mathbb{R}^{v\times r}\) that are optimized for imputing missing ROI measurements and predicting the label, i.e., the demographic factor sex (see Figure 2). The node embeddings are learned by a Graph Isomorphism Network (GIN) [29], which compares favorably to conventional GNNs such as GCN [9] in capturing high-order relationships across features of neighboring ROIs [29]. Each layer of a GIN learns the relationships between neighboring ROIs by first summing up the feature vectors of adjacent nodes. These new vectors are then mapped to hidden vectors via a multi-layer perceptron (MLP). The hidden vector \(h_{i}^{k}\) of a particular node \(i\) at the \(k\)-th layer is then defined as: \[h_{i}^{k}:=\text{MLP}\left((1+\varepsilon)\cdot h_{i}^{k-1}+\sum_{j\in\mathcal{ N}_{i}}h_{j}^{k-1}\right), \tag{1}\] where \(\mathcal{N}_{i}\) denotes the nodes adjacent to node \(i\) (according to \(\mathbf{A}\)) and the weight \(\varepsilon\) of a node compared to its neighbors is learned. The node embeddings of the last layer, \(h_{G}:=\{h_{i}\}_{i\in v}\), are then fed into a graph decoder, which again is a GIN.
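A sketch of this encoder using PyTorch Geometric's `GINConv` (an assumed implementation choice; the paper only states that the model is implemented in PyTorch) could read:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv

def gin_layer(in_dim, out_dim):
    # MLP((1 + eps) * h_i + sum_j h_j), with eps learned, as in Eq. (1)
    mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                        nn.Linear(out_dim, out_dim))
    return GINConv(mlp, train_eps=True)

class GINEncoder(nn.Module):
    def __init__(self, p=3, r=32):
        super().__init__()
        self.conv1 = gin_layer(p, r)   # p shared measurements per ROI
        self.conv2 = gin_layer(r, r)   # two GIN layers, as stated in Section 2.3
        self.bn1, self.bn2 = nn.BatchNorm1d(r), nn.BatchNorm1d(r)

    def forward(self, x, edge_index):
        # x: (v, p) node features; edge_index: (2, E) edges of the fixed ROI graph A
        h = self.bn1(torch.relu(self.conv1(x, edge_index)))
        h = self.bn2(torch.relu(self.conv2(h, edge_index)))
        return h                       # h_G: (v, r) node embeddings
```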
The decoder is trained to reconstruct the missing measurements \(X^{M}\) using the \(h_{G}\) obtained from the shared measurements \(X^{O}\), i.e., by deriving the mapping function \(f(\cdot)\) such that the predicted value \(\widehat{X}^{M}:=f(h_{G})\) minimizes the loss function \[\mathcal{L}_{imp}:=\left\|X^{M}-f(h_{G})\right\|^{2}, \tag{2}\] where \(\|\cdot\|\) is the Euclidean distance.

Figure 2: Overview of our model: A GNN encodes both adjacency and measurements of brain ROIs into node embeddings, which are utilized by a graph decoder to impute missing values \(X^{M}\). The parallel (upper) branch refines the node representations by differentiating between the sexes.

### 2.2 Demographic Aware Graph-based Imputation

As mentioned, we implement a classifier in parallel to the graph decoder (Figure 2). Given the subject-specific node embeddings \(h_{G}\) and label \(y_{G}\) (e.g., female or male), this classifier aims to learn a function \(g(\cdot)\) that maps the node embeddings of \(G\) to the label \(y_{G}\), i.e., \(\widehat{y}_{G}:=g(h_{G})\). As shown in Figure 2, our model first applies a global mean pooling operation to \(h_{G}\) in order to extract the graph embedding required for the MLP to perform the classification [7]. The (binary cross-entropy) loss function minimized by the classifier is then \[\mathcal{L}_{cls}:=-\left[y_{G}\log\left(g(h_{G})\right)+(1-y_{G})\log\left(1 -g(h_{G})\right)\right]. \tag{3}\] To minimize this loss, the node embeddings \(h_{G}\) are optimized with respect to representing demographic differences. Explicitly accounting for demographic differences then improves the accuracy of the imputation task, as the demographic factors (i.e., sex) estimated by the classifier provide additional information further constraining the search space. Thus, the overall loss function minimized by DAGI combines the imputation and classification losses, i.e., \[\mathcal{L}_{total}:=\mathcal{L}_{imp}+\mathcal{L}_{cls}. \tag{4}\]

### 2.3 Implementation

We implement the model in PyTorch using the Adam optimizer with a learning rate of 0.01. The batch size is set to 32 and the number of epochs is 300. The dimension of the node embeddings \(r\) is 32. Our graph encoder is composed of two GIN layers, each containing an MLP with two fully-connected layers. Our graph decoder contains one GIN layer with four fully-connected layers. Following each GIN layer, we apply ReLU functions and batch normalization to enhance stability. Code will be available at [https://github.com/Wangyixinxin/DAGI](https://github.com/Wangyixinxin/DAGI)
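Combining the branches, a hedged PyTorch Geometric sketch of the parallel decoder/classifier and the total loss of Eq. (4) is given below, reusing the `GINEncoder` and `gin_layer` sketched above. All names, including the batch attribute `data.x_m`, are illustrative; we also feed logits to `BCEWithLogitsLoss` for numerical stability instead of applying an explicit sigmoid:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import global_mean_pool

class DAGI(nn.Module):
    def __init__(self, p=3, q=2, r=32):
        super().__init__()
        self.encoder = GINEncoder(p, r)                   # shared measurements -> h_G
        self.decoder = gin_layer(r, q)                    # imputes q missing scores per ROI
        self.classifier = nn.Sequential(nn.Linear(r, r), nn.ReLU(),
                                        nn.Linear(r, 1))  # sex from pooled embedding

    def forward(self, x, edge_index, batch):
        h = self.encoder(x, edge_index)
        x_m_hat = self.decoder(h, edge_index)             # \hat{X}^M = f(h_G)
        logit = self.classifier(global_mean_pool(h, batch))
        return x_m_hat, logit.squeeze(-1)

model = DAGI()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
mse, bce = nn.MSELoss(reduction="sum"), nn.BCEWithLogitsLoss()

# one (illustrative) training step on a mini-batch of subject graphs;
# data: torch_geometric.data.Batch with x, edge_index, batch, x_m, y
def step(data):
    x_m_hat, logit = model(data.x, data.edge_index, data.batch)
    loss = mse(x_m_hat, data.x_m) + bce(logit, data.y.float())  # L_imp + L_cls
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```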
## 3 Experimental Results

In this section, we evaluate DAGI on the NCANDA and ABCD data sets (described in Section 3.1). On NCANDA (Section 3.2), we determine the accuracy of the measurements imputed by our and other approaches by comparing them with the real measurements via 5-fold cross-validation. We highlight the crucial role of explicitly accounting for the relationship between ROIs and the demographic factor sex in the imputation process by visualizing the learned embeddings and examining the discrepancy in the imputed measurements across the sexes. In an out-of-sample test on ABCD (Section 3.3), where the curvature scores are not provided, we infer the accuracy of our imputations from a classifier identifying sex based on the ABCD measurements alone, after adding our imputed scores, and after adding scores imputed by alternative approaches that combine the NCANDA and ABCD data sets during training.

### Dataset

We utilize two publicly available datasets to evaluate our proposed model. The first data set (Release: NCANDA_PUBLIC_BASE_STRUCTURAL_V01 [20]) consists of baseline Freesurfer measurements of all 540 participants (270 females and 270 males) of NCANDA [1] that are between the ages of 12 and 18 years and report no-to-low alcohol drinking in the past year. The Freesurfer score for each of the 34 bilateral cortical regions defined according to the Desikan-Killiany Atlas [5] consists of 5 regional measurements: average thickness, surface area, gray matter volume, mean curvature, and Gaussian curvature. The second data set is Data Release 4.0 of ABCD [3], from which we use data from all 3760 adolescents (1682 females and 2078 males) collected between ages 12 and 13.8 years for our analysis. In addition to the average thickness, surface area, and gray matter volume, ABCD released the sulcal depth but does not contain the two curvature scores released by NCANDA. Imputing those curvature scores from the three shared ones is the goal here.

Table 1: Imputation accuracy based on 5-fold cross-validation on NCANDA. GI refers to the implementation of DAGI without the classifier. The best results are shown in **bold**. Compared to DAGI, all error scores are significantly higher (\(p\leq 0.05\) based on a two-sided paired t-test) with the exception of the MSE and MAE associated with the mean curvature scores produced by GIN.

| Method | Mean Curvature MSE (e\(^{-3}\)) | Mean Curvature MAE (e\(^{-2}\)) | Mean Curvature MRE | Gaussian Curvature MSE (e\(^{-4}\)) | Gaussian Curvature MAE (e\(^{-2}\)) | Gaussian Curvature MRE |
|---|---|---|---|---|---|---|
| Linear Regression [18], direct | 9.40 | 2.95 | 40.36 | 3.15 | 1.63 | 15.68 |
| Linear Regression [18], ROI-based | 8.52 | 2.12 | 31.77 | 2.24 | 1.12 | 9.58 |
| Multi-layer Perceptron [2] | 8.89 | 2.56 | 35.65 | 2.99 | 1.58 | 12.90 |
| GI (GCN [9]) | 9.80 | 3.01 | 45.29 | 3.05 | 1.60 | 14.51 |
| GI (GIN [29]) | 7.87 | 1.99 | 28.65 | 1.88 | 1.05 | 7.22 |
| DAGI (Proposed) | **7.71** | **1.92** | **26.77** | **1.19** | **0.81** | **5.41** |

### Experiments on NCANDA

#### 3.2.1 Quantitative Comparison

In NCANDA, we measure the accuracy of our imputed measurements by performing 5-fold cross-validation and then record, for each measurement type, the average Mean Squared Error (MSE) and Mean Absolute Error (MAE) across all subjects. Based on the MAE, we also compute the Mean Relative Error (MRE) to obtain an error score that is insensitive to the scale of the inferred measurements. To put those accuracy scores into context, we repeat the 5-fold cross-validation for other approaches. Specifically, we impute the measurements via an MLP [2] and a linear regression model [18] (a.k.a. direct linear regression). As not all measurements across ROIs necessarily have a direct relationship with one another, the ROI-based linear regression separately fits a linear model to each ROI so that it imputes missing measurements as linear combinations of the observed measures within each individual region. We investigate our modeling choices by imputing scores without the classifier (referred to as Graph Imputation, or GI) and by replacing the GIN with a GCN [9]. We apply two-sided paired t-tests between the error scores recorded for the proposed DAGI and each alternative approach and label p-values \(\leq 0.05\) as significantly different.
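As a reference for these scores, a short NumPy/SciPy sketch of the per-subject error metrics and the paired test is given below. The exact MRE definition, an MAE normalized by the scale of the true measurements, is our reading of the text above, so the formula should be treated as an assumption.

```python
import numpy as np
from scipy.stats import ttest_rel

def error_scores(y_true: np.ndarray, y_pred: np.ndarray):
    """Per-subject MSE, MAE, and MRE for one measurement type;
    y_true, y_pred: arrays of shape (n_subjects, v)."""
    mse = np.mean((y_true - y_pred) ** 2, axis=1)
    mae = np.mean(np.abs(y_true - y_pred), axis=1)
    # MAE relative to the magnitude of the true score (assumed MRE definition).
    mre = np.mean(np.abs(y_true - y_pred) / np.abs(y_true), axis=1)
    return mse, mae, mre

# Two-sided paired t-test between DAGI and a baseline on the same subjects:
# _, p = ttest_rel(mse_dagi, mse_baseline); significant = p <= 0.05
```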
According to Table 1, the two approaches oblivious to ROIs, i.e., linear regression and the MLP, received relatively high error scores, indicating the importance of accounting for ROI-specific characteristics in the imputation process. This observation is further supported by their error scores being significantly higher (p\(<\)0.0017 across all scores) than those of the ROI-based linear regression. Significantly lower MRE scores than the ROI-based linear regression are recorded for GIN (p\(<\)0.0001), which supports our choice of encoding adjacency between ROIs in a graph structure. This encoding of the graph structure is significantly more accurate (p\(<\)0.0085 across all scores) than the alternative based on the GCN model. The MRE is further significantly reduced (p\(<\)0.0001) by guiding the training of the imputation model with the sex classifier, i.e., by DAGI. In summary, DAGI reported the lowest error scores across all metrics, which supports our modeling choices.

#### 3.2.2 The Importance of the Classifier for Imputation

To gain a deeper understanding of the importance of modeling demographic factors (i.e., sex) for imputing each curvature score, Figure 3 (a) plots the Wasserstein distance [27] between the sex-specific distributions of the imputed measurements for DAGI and GI (i.e., DAGI with GIN but without the classifier). We choose the Wasserstein distance as it is a fairly robust metric that ignores outliers by comparing the overall shapes of distributions. While for both curvature scores the distance for DAGI is higher for the majority of ROIs (20 out of 34 ROIs for the mean curvature and 19 ROIs for the Gaussian curvature), the difference compared to GI across all regions is significant (p = 0.03, two-sided paired t-test) only with respect to the Gaussian curvature. This finding supports that sex is important for imputing both curvature scores but more so for the Gaussian curvature, which would also explain why in Table 1 all error scores of DAGI are significantly lower than those of GI for this curvature score, whereas for the mean curvature only the MRE is significantly lower.

Figure 3: The importance of the classifier for imputation. (a) Wasserstein distance between the sexes with respect to imputed ROI curvature scores. The distances are higher for DAGI (vs. GI), and the difference is significant with respect to the Gaussian curvature according to a two-sided paired t-test; (b) t-SNE visualization of node embeddings color-coded by sex (first row) and by ROI (second row). Embeddings of DAGI (right column) show clearer sex differences (e.g., highlighted by red circles) and larger separation between ROIs (e.g., blue circles) compared to the embeddings of GI.

**Visualizing the Node Embeddings:** Next, we investigate the importance of modeling sex and ROI adjacency for the imputation task by visualizing the node embeddings of the two implementations. Shown in Figure 3 (b) are the t-SNE plots [15] of those embeddings, where each dot represents an imputed ROI measurement of an NCANDA subject; the color encodes sex in the first row and the ROI in the second row. While the embeddings by GI keep separate clusters for the ROIs (Figure 3 (b) left, second row), they fail to distinguish measurements by sex, i.e., the blue and orange dots overlap with each other (Figure 3 (b) left, first row). Our approach (Figure 3 (b) right) effectively distinguishes the sexes in the latent space (first row, red circles) while also keeping separate clusters for the ROIs (second row, blue circles). This separation is important for imputing the ROI measurements according to the error scores reported in Table 1.
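Both analyses in this subsection rely on standard tooling; the sketch below, with illustrative variable names, shows the per-ROI Wasserstein distance between the sex-specific distributions of an imputed score and the 2-D t-SNE projection used for Figure 3 (b).

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.manifold import TSNE

def roi_sex_distance(imputed: np.ndarray, sex: np.ndarray) -> float:
    """Wasserstein distance between the sex-specific distributions of one ROI's
    imputed score; imputed: shape (n_subjects,), sex: binary labels of same length."""
    return wasserstein_distance(imputed[sex == 0], imputed[sex == 1])

def tsne_projection(embeddings: np.ndarray) -> np.ndarray:
    """2-D t-SNE projection of node embeddings stacked over subjects;
    embeddings: shape (n_subjects * v, r), as visualized in Figure 3 (b)."""
    return TSNE(n_components=2).fit_transform(embeddings)
```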
**Visualizing the Brain Graph:** We investigate the importance of specific ROIs for imputing the measurements by visualizing the graph structure via the GNNExplainer [30]. GNNExplainer defines the subgraph most important for the task at hand as the one whose predicted distribution maximizes the mutual information with the one derived from the original graph. Figure 4 visualizes this subgraph with red edges (i.e., the connections between ROIs). The importance of individual nodes (i.e., ROIs) is encoded by their radius. It is striking that the subgraph of DAGI (Figure 4 (c)) is a combination of the graphs of the other two models, i.e., the importance of the nodes is similar to that of the approach relying solely on the Demographic Aware module, referred to as DA (Figure 4 (a)), while the importance of the edges agrees with the model that only relies on the imputation module, i.e., GI in Figure 4 (b). This suggests that individual ROIs are more important for classification while the interaction between ROIs is more important for imputation. Based on those plots, we conclude that identifying sex is mostly driven by the pars opercularis, rostral middle frontal, and superior frontal regions, which is in line with the literature [14, 12]. However, imputation heavily relies on the interaction between neighboring regions (such as between the postcentral and insula regions).

Figure 4: Graph node and edge importance according to the GNNExplainer [30]. Each node corresponds to an ROI. Larger nodes represent higher contributions, with the most influential ones highlighted by a yellow circle. Red edges are those of the subgraph deemed most important for the task at hand. According to the figure, individual ROIs are more important for sex classification ((a) and (c)), while the relationship between ROIs is more important for imputation ((b) and (c)).

### Out-of-sample Test on ABCD

Using DAGI trained on NCANDA (i.e., the most accurate model according to Table 1), we now impute the missing curvature scores on ABCD. Given the lack of ground truth with respect to the missing ABCD measurements, we indirectly evaluate the quality of the imputed values by comparing the accuracy of a classifier identifying sex on the 3760 ABCD participants with and without utilizing the imputed measurements. This experimental setup is based on the observation that if the imputed measurements are accurate, then they should hold pertinent and discriminatory details that can be utilized for downstream tasks, such as sex classification. The sex classifier is a three-layer MLP, whose balanced test accuracy is measured via 5-fold cross-validation. In order to remove the confounding effect of brain size on sex classification, we normalize the average thickness, surface area, and gray matter volume measurements by the supratentorial volume [19, 20]. Note, the imputed curvature scores are left unchanged since they are not confounded by brain size, as their Pearson correlation [4] with the supratentorial volume is insignificant for all regions (maximum correlation is 0.053, p\(<\)0.01).
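A sketch of this preprocessing step is given below (NumPy/SciPy, with illustrative names and a conventional 0.05 threshold as assumptions): size-dependent scores are divided by the supratentorial volume, while a score is left unchanged whenever its Pearson correlation with that volume is insignificant.

```python
import numpy as np
from scipy.stats import pearsonr

def normalize_by_stv(scores: np.ndarray, stv: np.ndarray) -> np.ndarray:
    """Divide size-dependent scores (shape (n_subjects, v)) by each subject's
    supratentorial volume (shape (n_subjects,)) to remove the brain-size confound."""
    return scores / stv[:, None]

def is_confounded(score: np.ndarray, stv: np.ndarray, alpha: float = 0.05) -> bool:
    """Test one regional score (shape (n_subjects,)) for a significant Pearson
    correlation with brain size before deciding whether to normalize it."""
    r, p = pearsonr(score, stv)
    return p <= alpha
```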
According to Table 2, the balanced accuracy of the classifier based just on the ABCD measurements is 83.8%, which then significantly improves (p = 0.008, McNemar's test [17]) to 84.5% once the imputed scores are added. To put the improvement into context, we also record the classification accuracy with respect to curvature scores generated by the traditional imputation method MICE [26] and the deep learning-based GINN [23]. Since these methods are originally designed for randomly missing values (Figure 1 (a)) and thus cannot work on the ABCD dataset alone, we train them to impute the missing values on matrices containing both the NCANDA and ABCD measurements. Surprisingly, the inclusion of the curvature measurements imputed by MICE and GINN results in significantly lower classification accuracy than DAGI (p\(<\)0.01, McNemar's test). The accuracy is even worse than that of the classifier solely based on ABCD scores. This suggests that they fail to accurately impute the curvature scores and instead mislead the classifier by making the data noisier. This might be attributed to the fact that these methods are designed for randomly distributed missing values and thus may not be suitable for our specific scenario, where specific measurements are entirely missing from a data set (Figure 1 (b)). For this scenario, the significant improvement achieved via the curvature scores predicted by DAGI demonstrates the utility of imputing brain measurements for enhancing downstream tasks.

Table 2: Balanced accuracy of an MLP classifying sex based on ABCD with and without imputed brain measurements. The best result is shown in **bold**. All other accuracies are significantly lower than that of DAGI (p-value \(\leq 0.01\) according to McNemar's test).

| Measurements Used by Classifier | Accuracy |
|---|---|
| Only ABCD scores | 0.838 |
| ABCD scores + curvature imputed by MICE [26] (trained on NCANDA & ABCD) | 0.811 |
| ABCD scores + curvature imputed by GINN [23] (trained on NCANDA & ABCD) | 0.832 |
| ABCD scores + curvature imputed by DAGI (trained on NCANDA only) | **0.845** |

## 4 Conclusion

The accuracy of classifiers (e.g., identifying sex from brain ROI measurements) applied to publicly available data can be negatively impacted by the absence of entire measurements from that data set. Instead of imputing the scores by merging the data set with ones that contain the measurements, we propose to rephrase the problem as a prediction task in which we learn to predict missing measurements from those that are shared across data sets. We do so by coupling a graph neural network, which captures the relationship between brain regions, with a classifier that models demographic differences in the ROI brain measurements. Compared to existing technology, our proposed method is significantly more accurate in imputing curvature scores on NCANDA. Imputing the measurements on ABCD and then feeding them into a classifier also results in more accurate sex identification than solely relying on the ROI measurements provided by ABCD. Overall, our framework provides a novel and effective approach for imputing missing measurements across data sets, as it only has to be trained once on the data set that contains the values. This might also have important implications for generalizing neuroscientific findings of deep learning approaches across data sets, as they could now rely on the same set of measurements.
#### Acknowledgments

This work was partly supported by funding from the National Institute of Health (DA057567, AA021697, AA017347, AA010723, AA005965, and AA028840), the DGIST R&D program of the Ministry of Science and ICT of KOREA (22-KU Joint-02), the Stanford School of Medicine Department of Psychiatry and Behavioral Sciences Faculty Development and Leadership Award, and by the Stanford HAI Google Cloud Credit.